January 14, 2025

    Mobile App Development Practices That May Unintentionally Facilitate Reverse Engineering

    In the world of app development, security is often top of mind, but even small details and oversights can lead to big consequences. An illustrative example is Chirp Systems, where such an oversight allowed strangers to remotely unlock smart locks installed in over 50,000 households. The issue arose from hardcoded passwords and private keys being left within Chirp's published Android app. Once discovered, these credentials could be used to access an API managed by a smart lock provider, giving attackers control over Chirp-powered locks and making it possible to unlock doors that were meant to be secure.

    Leaving sensitive information like credentials and keys in a published application is clearly risky, and in most cases there are ways to avoid embedding them at all. Nevertheless, it's important to recognize that even smaller details, such as log statements or hardcoded strings, can also expose sensitive information to reverse engineers. In fact, our analysis reveals that the majority of apps unintentionally include such data, further amplifying the potential risk.

    Reverse engineers typically use tools like decompilers and disassemblers to dissect an application and understand its structure and functionality. These techniques are often used to steal intellectual property, uncover sensitive information, and identify security vulnerabilities, as in the case described above.

    Primary targets in code for reverse engineers

    Attackers involved in reverse engineering often target areas of code that can provide valuable insights or expose vulnerabilities, for example:

    • Disclosure of sensitive data or internal state information via log statements, exception messages and non-obfuscated strings
    • Presence of hardcoded secrets such as any credentials or keys in the code
    • Exposure of code logic and structure via Kotlin metadata and assertions
    • Presence of mapping files, which let attackers reverse the obfuscation

    We collected data from 14,000 Android apps analyzed with AppSweep, our free mobile app security testing tool. Below you can see the percentage of apps in which our analysis discovered each type of security concern.

    [Image: The Weakest Links - Mobile App Features That Aid Reverse Engineering]

    In this blog, we’ll delve into these areas, explain why they are targeted, and share best practices to safeguard your app against them.

    Log statements

    Log statements are often left in code for debugging purposes, yet they can be a treasure trove for reverse engineers. Even if the logs are disabled and the application is obfuscated before being published, log statements often still reveal sensitive information to someone looking at the decompiled code of the app:

    Example of log statement in actual code:

    Log.d("LoginActivity", "User login attempt: username = " + username + ", password = " + password);

    Example of decompiled, name-obfuscated code:

    Log.d("a", "User login attempt: username = " + b + ", password = " + c);

    In the decompiled code, even though the class names and variables are obfuscated (LoginActivity to a, username to b, password to c), the log message still exposes the application logic and code structure as the message remains partially readable.

    Even when they don't log sensitive data directly, log statements can help attackers understand code logic and uncover vulnerabilities, for example when debug information or error messages are logged.

    Mitigations:

    To mitigate this risk, developers should remove or minimize log statements in production code:

    • Use shrinkers to remove log statements (see the shrinker's manual for configuration details)
    • Use linters and static code analysis tools to flag log messages, then remove them manually
    • Implement logging based on build configurations, ensuring that logging is only active in development builds and that debug logic is stripped from production code
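    The first mitigation can be sketched with a ProGuard/R8 -assumenosideeffects rule, which tells the shrinker that calls to the listed methods can safely be removed (shown here for android.util.Log; check your shrinker's manual for the exact configuration):

```
# Declare that these Log methods have no side effects, so the shrinker
# may remove calls to them entirely from optimized release builds
-assumenosideeffects class android.util.Log {
    public static int v(...);
    public static int d(...);
    public static int i(...);
    public static int w(...);
    public static int e(...);
}
```

    Note that, depending on optimization settings, the code that builds the log message (e.g. string concatenation) may remain even after the call itself is removed.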

    Hardcoded secrets

    Hardcoding secrets and sensitive data like API keys or credentials directly into an Android app's code leaves these sensitive details exposed to anyone who decompiles or reverse-engineers the app. It makes it easy for attackers to misuse the app’s functionalities, e.g. letting them access backend services or get their hands on sensitive data. Unfortunately, default shrinkers don’t obfuscate hardcoded strings. Nor does moving strings into native code provide sufficient protection, as they can be easily extracted from the binaries with the strings command.
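    To illustrate the last point, here is a minimal demonstration (using a made-up placeholder secret, not a real binary) of how the strings utility recovers readable text from arbitrary binary data:

```shell
# Write a few raw bytes around a fake secret to simulate a compiled binary
printf '\x7fELF\x01\x02API_KEY=sk_live_FAKE123\x03\x04' > demo.bin

# `strings` prints every run of 4 or more printable characters, the fake secret included
strings demo.bin   # prints: API_KEY=sk_live_FAKE123
```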

    Mitigations:
    • Avoid storing secrets in the app’s code; retrieve them dynamically from a server instead

    To better understand the implications and explore safer alternatives, check out this blog post.

    Kotlin metadata

    For applications developed with the Kotlin programming language, the Kotlin compiler injects code and metadata into the classes it generates to work around the limitations of the JVM. The injected metadata takes the shape of an annotation added to classes.

    While the metadata is required to support certain Kotlin features, it still exposes the API representation of the original Kotlin class in a custom format, providing attackers with important details about the app’s code.

    Let’s take a look at this example of a Kotlin class:

    
    class Greeter {
        var greeting: String = "Hello"
        ...
    }
    
    fun Greeter.setGreeting(newGreeting: String): Greeter {
        greeting = newGreeting + ", nice to meet you!"
        return this
    }
    

    Below is the decompiled version of the class containing the extension function. Here the application was built without shrinking or obfuscation, so the method name is clearly visible in the metadata.

    
    @Metadata(d1 = {"..."}, d2 = {"setGreeting", "Lcom/example/Greeter;", "newGreeting", "", "app_release"}, k = 2, mv = {1, 9, 0}, xi = 48)
    public final class GreeterKt {
        public static final Greeter setGreeting(Greeter greeter, String newGreeting) {
            Intrinsics.checkNotNullParameter(greeter, "<this>");
            Intrinsics.checkNotNullParameter(newGreeting, "newGreeting");
            greeter.setGreeting$app_release(newGreeting + ", nice to meet you!");
            return greeter;
        }
    }
    
    

    And here is the metadata of the same class after the class and method names have been obfuscated:

    @Metadata(d1 = {"..."}, d2 = {"LB/e;", "", "newGreeting", "a", "(LB/e;Ljava/lang/String;)LB/e;", "app_release"}, k = 2, mv = {1, 9, 0})

    In most cases, shrinkers can simply remove the metadata from the code. There are exceptions, however: if the app uses the kotlin-reflect library and calls extension functions via reflection, the metadata is essential; without it, kotlin-reflect cannot recognize the code as Kotlin and the extension function won’t be found. The same applies when you are developing a Kotlin library: the metadata of the public API needs to be preserved.

    Mitigations:
    • If possible, remove the metadata from your app.
    • Otherwise, use shrinkers like R8 to obfuscate Kotlin metadata. R8 can be configured to rewrite Kotlin metadata using obfuscated class names by setting a keep rule for the Kotlin metadata library. Keep in mind that if the app relies on reflection, you may need to keep the names of methods that are explicitly called via reflection.
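    As a sketch, the keep rule commonly used to make R8 retain Kotlin metadata and rewrite it with the obfuscated names, rather than strip it, looks like this (consult the R8 documentation for the configuration matching your setup):

```
# Keep the kotlin.Metadata annotation class so that R8 retains metadata
# on kept classes and rewrites it using the obfuscated class names
-keep class kotlin.Metadata { *; }
-keepattributes RuntimeVisibleAnnotations
```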

    The aforementioned edge cases and shrinker configuration options are explained in more detail in this article.

    Kotlin assertions

    Kotlin distinguishes between nullable and non-nullable types, while Java doesn’t. Therefore, when the result of a Java call is assigned to a non-nullable variable, the Kotlin compiler generates an assertion to check that the reference is actually not null.

    For example, consider this Kotlin code snippet:

    val nullable: String? = SomeJavaClass.staticMethodReturningString()
    val nonNullable: String = SomeJavaClass.staticMethodReturningString()
    

    It results in the following compiled code:

    String E2 = AbstractC0040a.C();
    String E3 = AbstractC0040a.C();
    Intrinsics.checkNotNullExpressionValue(E3, "staticMethodReturningString()");
    

    The Intrinsics calls, inserted by the Kotlin compiler, expose original variable and method names. And while the variable names can be successfully obfuscated, the string containing the method name can’t be, leaking information about the original code that obfuscation was meant to hide.

    Mitigations:

    It is possible to pass Kotlin compiler arguments to avoid inserting some of these assertions (e.g. -Xno-call-assertions and -Xno-param-assertions), or to remove the assertions with shrinkers. However, discovering potential null values early and having more precise error messages is very valuable, so the decision to remove them in release builds should be weighed carefully.
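    As an illustration, the compiler arguments mentioned above could be set in a Gradle build script along these lines (a sketch using the Kotlin DSL; adapt it to your build setup):

```kotlin
// build.gradle.kts: disable null-check assertions inserted by the Kotlin compiler.
// Weigh this carefully: the assertions also surface null values early with precise messages.
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile>().configureEach {
    kotlinOptions {
        freeCompilerArgs += listOf(
            "-Xno-call-assertions",  // no checks on values returned from Java calls
            "-Xno-param-assertions"  // no checks on parameters of public Kotlin functions
        )
    }
}
```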

    For more information you can read this blog post on Kotlin Assertions.

    JavaScript mapping files

    JavaScript source map files hold a detailed mapping between a minified or bundled JavaScript file and the original source files. Mapping files adhere to the following structure:

    
    {
      "version": 3,                  // Source map version
      "file": "out.js",              // Name of the generated file
      "sources": ["foo.js"],         // Original source files
      "sourcesContent": [            // Original source file content (optional)
        "function add(a, b) { return a + b; }"
      ],
      "names": ["schema", "apiKey", "message"],    // A list of identifiers used in the source code
      "mappings": "AAAA,SAASA,GAAG,CAACC,CAAD,EAAI,CAACD,CAAL,CAAR", // Encoded values that point from every position in the output back to positions in the input sources.
      "sourceRoot": "/some/path"     // Root path for resolving sources (optional)
    }
    

    While developers can use source maps to ease the debugging of their minified app, reverse engineers can in turn use mapping files to obtain information about the original JavaScript code. In our analysis, 68% of the applications that contain source maps have the “sourcesContent” property present, which leaks the original source files’ content. Attackers can therefore fully reconstruct the sources and use them to dig deeper into the app’s logic.

    Mitigations:
    • Make sure that published apps don’t contain source map files.
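    A simple way to verify this is to scan your build output for map files that embed the original sources. A minimal sketch, using a constructed example map in place of real build output:

```shell
# Create a minimal source map that leaks its sources via "sourcesContent"
cat > out.js.map <<'EOF'
{"version":3,"file":"out.js","sources":["foo.js"],"sourcesContent":["function add(a, b) { return a + b; }"],"names":["add"],"mappings":"AAAA"}
EOF

# List any source maps in the current directory that embed the original sources;
# in a real project, point this at the published app's unpacked contents instead
grep -l 'sourcesContent' *.map
```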

    Conclusion

    Reverse engineering poses a significant threat to the security and integrity of your application, especially when logs, metadata, and other overlooked details are left exposed. By understanding why these components are targeted and implementing best practices – such as minimizing sensitive information in logs, obfuscating and shrinking the code – you can significantly reduce the risk of your application being compromised.

    If you are interested in quickly scanning your app for the issues described above and many more, check out AppSweep, Guardsquare’s free mobile application security scanning tool.

    Olesya Subbotina - Software Engineer
