October 1, 2024

    Ensuring the Security of Mobile On-Device Machine Learning

    As businesses evolve to offer smarter, more personalized, and more responsive experiences, and AI processing capabilities become more common in mobile devices, the local or on-device integration of machine learning (ML) into mobile applications and SDKs is becoming increasingly popular. A report from Google I/O 2024 highlighted this trend, with Google urging developers to build on-device ML applications to improve privacy, reduce latency, and cut the costs associated with cloud-based AI. This trend, however, needs to be accompanied by a more robust mobile app security posture.

    In this blog, we will explore the use cases of on-device ML, the risks that come with it, and the best practices for protecting your ML models, mobile applications, and SDKs from attacks.

    Key takeaways:
    • By processing data locally, on-device machine learning ensures real-time decision-making, reduced dependence on networks, and heightened user trust through enhanced privacy—crucial for industries like healthcare and finance.
    • Despite its benefits, on-device ML is susceptible to many security risks, including reverse engineering and model and training data theft, exposing sensitive data and business-critical IP to exploitation.
    • Developers should adopt a layered security approach, protecting both the ML model and the mobile application/SDK, through code protection, security testing, and threat monitoring.

    Why developers are moving to on-device machine learning

    The shift towards on-device ML is driven by a range of practical, technical, and strategic benefits that address several limitations of its cloud-based counterpart. These limitations include data privacy concerns, network dependency, latency, and rising cloud-hosting costs.

    User privacy & data security

    On-device ML enables the processing and storage of sensitive user data locally on the device, eliminating the need for cloud-based data processing. This helps mitigate risks related to data exposure, especially in highly regulated industries like healthcare, financial services, and legal services. For example, in healthcare apps and SDKs that analyze user health metrics or medical data, keeping ML computations on the device helps keep data secure and compliant with regulations like GDPR and HIPAA. This approach also helps build user trust, as data is not transmitted to external servers, addressing growing concerns over data privacy.

    Reduced latency & network limitations

    On-device ML provides real-time processing capabilities, allowing applications and SDKs to function without needing an internet connection to communicate with cloud servers. This leads to improved accessibility and significantly reduced latency, enabling a more seamless user experience. A prime example is the use of ML models in photography applications, such as Google Photos, which applies filters, enhances images, or even identifies objects within pictures - directly on the device, even when users are not connected to the internet. This real-time, offline processing is only possible because the AI models are embedded within the app/SDK, eliminating the need for server-side computation.
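    To make this concrete, the sketch below shows what embedded, offline inference can look like on Android. It is a minimal illustration under stated assumptions: the post does not name a specific runtime, so we assume TensorFlow Lite, and the asset name classifier.tflite and the output size are hypothetical.

        import android.content.Context
        import org.tensorflow.lite.Interpreter
        import java.io.FileInputStream
        import java.nio.MappedByteBuffer
        import java.nio.channels.FileChannel

        // Minimal sketch of on-device inference with TensorFlow Lite (an assumed
        // runtime). The model ships inside the app, so inference needs no network
        // round trip and input data never leaves the device.
        class OnDeviceClassifier(context: Context) {

            private val interpreter = Interpreter(loadModel(context, "classifier.tflite"))

            // Memory-maps the bundled model from the APK's assets.
            private fun loadModel(context: Context, assetName: String): MappedByteBuffer {
                val fd = context.assets.openFd(assetName)
                FileInputStream(fd.fileDescriptor).use { stream ->
                    return stream.channel.map(
                        FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
                    )
                }
            }

            // Runs entirely on the device, even in airplane mode.
            fun classify(input: FloatArray): FloatArray {
                val output = Array(1) { FloatArray(NUM_CLASSES) }
                interpreter.run(arrayOf(input), output)
                return output[0]
            }

            private companion object {
                const val NUM_CLASSES = 10 // hypothetical label count
            }
        }

    A call like OnDeviceClassifier(context).classify(features) would then return class scores with no server-side computation involved.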

    Cost-efficiency & scalability

    Cloud-based ML solutions often come with recurring data storage, bandwidth, and computational power costs. For popular mobile applications and SDKs, these data processing costs can accumulate quickly. With on-device ML, the “heavy lifting” is done on users’ devices, significantly reducing the burden on centralized servers. This decentralized approach makes it easier for developers to scale without investing heavily in cloud infrastructure upgrades.

    Hidden risks of on-device machine learning

    While on-device ML provides a host of benefits, it is a lucrative target for attackers, requiring developers to prioritize security throughout the development lifecycle. Unfortunately, according to a 2024 study, in 82.73% of top mobile applications that use on-device ML, the models can easily be reverse-engineered, tampered with, extracted, and reused without limitation. This lack of proper security hygiene exposes developers and organizations to numerous app security risks that can cause financial and reputational damage.

    Unauthorized modification

    Without sufficient code security, malicious actors can easily tamper with the app/SDK as well as the embedded ML model’s behavior. Threat actors can do this in several ways, including input manipulation attacks, model skewing attacks, and output integrity attacks. The consequences can be serious, especially in regulated sectors such as healthcare and financial services, as the example below illustrates.

    For example, with the rising use of machine learning in the KYC (Know Your Customer) verification process in the financial services sector, it’s only a matter of time before mobile banking apps utilizing embedded ML for the KYC flow become the norm. In this case, attackers could tamper with the model to bypass the identity verification process. For instance, the model may be designed to verify users by analyzing images of ID documents or biometric data, such as facial recognition. An attacker could manipulate the model to incorrectly verify fraudulent or stolen identities, allowing unauthorized individuals to create accounts for malicious purposes. A team of researchers demonstrated this risk by showing how adversarial input manipulation can fool facial recognition models. Left unmitigated, this security risk could result in a range of issues including money laundering, identity theft, and fraudulent transactions - potentially leading to regulatory penalties and reputational damage due to weak security protocols.
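    One basic, first-line countermeasure against this kind of model tampering is to verify the bundled model’s checksum before loading it. The sketch below is a minimal illustration, not a complete defense: the asset name and expected digest are placeholders, and in an unprotected app an attacker can simply patch the check out - which is exactly why the code hardening discussed later matters.

        import android.content.Context
        import java.security.MessageDigest

        // Hedged sketch: refuse to load the embedded model if its SHA-256 digest
        // does not match the value computed at build time. The digest below is a
        // placeholder. This check can itself be patched out of an unhardened
        // binary, so it only works as one layer among several.
        object ModelIntegrity {

            // Hypothetical build-time SHA-256 of the shipped model file.
            private const val EXPECTED_SHA256 =
                "3b5d5c3712955042212316173ccf37be2ac01d4adedc8787f1f3bfb5ee3ff35b"

            fun isModelUntampered(context: Context, assetName: String): Boolean {
                val digest = MessageDigest.getInstance("SHA-256")
                context.assets.open(assetName).use { stream ->
                    val buffer = ByteArray(8192)
                    var read = stream.read(buffer)
                    while (read != -1) {
                        digest.update(buffer, 0, read)
                        read = stream.read(buffer)
                    }
                }
                val actual = digest.digest().joinToString("") { "%02x".format(it) }
                return actual == EXPECTED_SHA256
            }
        }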

    Intellectual property & training data theft

    Threat actors can easily reverse-engineer insecure mobile applications by decompiling and analyzing the code to extract sensitive information and embedded intellectual property (IP), such as the embedded ML models. Once they gain access to a model, threat actors can create a competing mobile app/SDK, or even sell the model to competitors - negatively impacting the business. A researcher demonstrated this risk in a keynote speech at a popular cybersecurity conference, showing how easily he could break into a popular image-processing mobile application, gain access to the ML model and its parameters, and successfully reuse the model in an entirely new application.
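    One way to raise the cost of this kind of extraction is to ship the model encrypted and only decrypt it in memory at load time. The sketch below assumes AES-GCM and a hypothetical asset layout (a 12-byte IV prefix); it deliberately glosses over key management, which is the hard part - a key hardcoded in the app can itself be recovered through reverse engineering, so this measure only helps in combination with code hardening.

        import android.content.Context
        import javax.crypto.Cipher
        import javax.crypto.spec.GCMParameterSpec
        import javax.crypto.spec.SecretKeySpec

        // Hedged sketch: decrypt an AES-GCM-encrypted model asset in memory so the
        // plaintext model never sits on disk. The asset layout (12-byte IV prefix)
        // and key sourcing are illustrative assumptions; the key would typically
        // come from the Android Keystore or be derived at runtime, never hardcoded.
        object ModelDecryptor {

            private const val IV_LENGTH_BYTES = 12
            private const val GCM_TAG_BITS = 128

            fun loadDecryptedModel(context: Context, assetName: String, key: ByteArray): ByteArray {
                val blob = context.assets.open(assetName).use { it.readBytes() }
                val iv = blob.copyOfRange(0, IV_LENGTH_BYTES)
                val ciphertext = blob.copyOfRange(IV_LENGTH_BYTES, blob.size)

                val cipher = Cipher.getInstance("AES/GCM/NoPadding")
                cipher.init(
                    Cipher.DECRYPT_MODE,
                    SecretKeySpec(key, "AES"),
                    GCMParameterSpec(GCM_TAG_BITS, iv)
                )
                // The returned bytes can be copied into a direct ByteBuffer before
                // being handed to the ML runtime.
                return cipher.doFinal(ciphertext)
            }
        }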

    Additionally, threat actors can use model inversion attacks to reverse-engineer and reconstruct the model’s potentially sensitive training data. In separate studies in 2015 and 2020, researchers demonstrated how attackers could use this technique to reconstruct and extract sensitive images used to train ML models. Publishers could be held accountable for the data privacy breaches caused by these attacks.

    Data & model poisoning

    While embedded ML models may be less susceptible to direct attacks than server-side ML models that are shared across many users, they are still vulnerable to indirect threats, particularly data and model poisoning attacks. In these cases, attackers inject malicious or misleading data into the shared datasets that inform the server-side models, which are in turn used to train and update the embedded models. This can degrade performance or introduce vulnerabilities into the next version of the model, creating opportunities for exploitation. Real-life cases have underscored that these threats are not theoretical but immediate and practical for many industries, including automotive, healthcare, and financial services.
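    Since embedded models are typically refreshed through server-delivered updates, one complementary safeguard is to verify a detached signature over each downloaded model before swapping it in. The sketch below is illustrative only: it does not detect poisoning of the upstream training data itself, it merely ensures the update arrived from your backend unmodified, and the names, parameters, and RSA choice are assumptions.

        import java.security.KeyFactory
        import java.security.Signature
        import java.security.spec.X509EncodedKeySpec

        // Hedged sketch: verify an RSA signature over a downloaded model update
        // before replacing the embedded model. The public key would be pinned in
        // the app at build time; identifiers here are illustrative.
        object ModelUpdateVerifier {

            fun isUpdateAuthentic(
                modelBytes: ByteArray,
                signatureBytes: ByteArray,
                publicKeyDer: ByteArray // X.509-encoded public key pinned at build time
            ): Boolean {
                val publicKey = KeyFactory.getInstance("RSA")
                    .generatePublic(X509EncodedKeySpec(publicKeyDer))
                return Signature.getInstance("SHA256withRSA").run {
                    initVerify(publicKey)
                    update(modelBytes)
                    verify(signatureBytes)
                }
            }
        }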

    Mitigation strategies: A layered security approach

    A comprehensive mobile application security posture requires developers to holistically apply mutually reinforcing layers of mitigation strategies. On top of securing the ML model to protect its integrity and confidentiality, developers should ensure that the mobile application in which the ML model is embedded cannot be tampered with or reverse-engineered. This layered approach makes it exponentially more difficult for threat actors to break into the app/SDK and reach the ML model. OWASP’s Machine Learning Security Top Ten and Mobile Security Top Ten can help developers and organizations secure their ML systems as well as their mobile applications and SDKs. These comprehensive guides outline the most common risks and offer guidance on addressing them; a minimal illustration of one such protection layer follows the lists below.

    OWASP Machine Learning Security Top Ten:
    1. ML01:2023 Input Manipulation Attack
    2. ML02:2023 Data Poisoning Attack
    3. ML03:2023 Model Inversion Attack
    4. ML04:2023 Membership Inference Attack
    5. ML05:2023 Model Theft
    6. ML06:2023 AI Supply Chain Attacks
    7. ML07:2023 Transfer Learning Attack
    8. ML08:2023 Model Skewing
    9. ML09:2023 Output Integrity Attack
    10. ML10:2023 Model Poisoning

    OWASP Mobile Security Top Ten:
    1. M1: Improper Credential Usage
    2. M2: Inadequate Supply Chain Security
    3. M3: Insecure Authentication/Authorization
    4. M4: Insufficient Input/Output Validation
    5. M5: Insecure Communication
    6. M6: Inadequate Privacy Controls
    7. M7: Insufficient Binary Protections
    8. M8: Security Misconfiguration
    9. M9: Insecure Data Storage
    10. M10: Insufficient Cryptography
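    As one concrete example of a runtime layer from these lists (OWASP Mobile M7, Insufficient Binary Protections), the sketch below checks at runtime that the app is still signed with the publisher’s certificate, which trips if the APK has been repackaged. The expected digest is a placeholder, and like the earlier checksum example, the check is only meaningful when the code performing it is itself hardened against patching.

        import android.content.Context
        import android.content.pm.PackageManager
        import java.security.MessageDigest

        // Hedged sketch: detect repackaging by comparing the runtime signing
        // certificate against a build-time SHA-256 (placeholder value below).
        // Uses the pre-API-28 signature API for brevity.
        object SigningCheck {

            private const val EXPECTED_CERT_SHA256 =
                "f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2"

            @Suppress("DEPRECATION")
            fun isSignatureValid(context: Context): Boolean {
                val packageInfo = context.packageManager.getPackageInfo(
                    context.packageName, PackageManager.GET_SIGNATURES
                )
                val digest = MessageDigest.getInstance("SHA-256")
                return packageInfo.signatures.orEmpty().any { signature ->
                    val hash = digest.digest(signature.toByteArray())
                        .joinToString("") { "%02x".format(it) }
                    hash == EXPECTED_CERT_SHA256
                }
            }
        }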

    The highest level of mobile application security, in the easiest possible way

    Guardsquare enables developers to achieve the most robust mobile application and SDK security posture by empowering them with a complete suite of code protection, security testing, and threat monitoring tools.

    “DexGuard and ThreatCast have made the app more secure, faster, and better for users.”
    - Founder and developer, AI tool developer

    With DexGuard and iXGuard, you can seamlessly apply multiple layers of code hardening and automated runtime application self-protection (RASP) to protect your Android and iOS mobile applications and SDKs, respectively, from static and dynamic attacks. Thanks to our compiler-based approach, these tools polymorphically interweave security controls within the code, ensuring that this in-depth protection is unique to every build.

    What’s more, our new implementation workflow ensures that you can easily implement DexGuard and iXGuard and achieve optimal results even with limited mobile app security knowledge.

    With AppSweep, developers can easily and automatically find and address security issues in Android and iOS mobile applications and SDKs early in the development cycle. The tool conveniently groups its security findings according to OWASP MASVS, making it easier for developers and security teams to classify and prioritize the identified vulnerabilities. With built-in actionable recommendations, developers can rapidly make informed decisions to mitigate security issues.

    Last but not least, using ThreatCast, developers and organizations can gain real-time visibility into how threat actors attempt to attack their Android and iOS mobile applications at runtime. The tool provides a wealth of contextual metadata about each detected threat, allowing your team to figure out which part of the code is most frequently attacked, who the perpetrators are, and how they attempt to compromise the application’s integrity.

    Need to ensure the security of your ML mobile apps and SDKs? Talk to an expert now >
