Mobile applications, both Android and iOS, are increasingly targeted by reverse engineers and hackers. Yet many app developers still believe iOS apps are virtually immune to reverse engineering and don't need any protection. This belief rests on three common misconceptions:
(1) App Store code encryption alone is enough
(2) iOS apps are very hard to reverse engineer
(3) Apple’s code signing mechanism prevents code tampering and re-distribution.
The false sense of security with regard to iOS apps partly stems from Apple's solid reputation in security. Apple has designed and implemented a comprehensive security architecture with the aim of creating a secure ecosystem. The scarcity of malware and exploits for iOS indicates that these measures are quite effective. However, a closer look at Apple's security architecture reveals that it is primarily designed to safeguard the end-user, not to protect developers and their applications against reverse engineering and its possible uses (such as cloning or code tampering).
This article debunks three popular misconceptions about iOS app security and provides insight into some of the most prevalent threats iOS apps face today.
Misconception #1: App Store encryption alone is enough
Apple introduced the FairPlay DRM system for media content in the early 2000s and has since extended it to encrypt the executable code of applications submitted to the App Store. The system restricts access to an app's machine code after download, with the aim of preventing static analysis. It is important to note that Apple only encrypts the app's code, not the resources or assets that are part of the IPA.
The Achilles’ heel of Apple’s code encryption is that all applications must be decrypted before they can be executed. The decrypted instructions can be dumped from memory and reconverted into the original unencrypted application by a variety of tools available for jailbroken devices. The conclusion? As long as an application can be installed and run on a jailbroken device, Apple’s code encryption will not prevent it from being reverse engineered.
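To make this concrete, the encryption state of an app's executable is recorded in its Mach-O load commands. The minimal sketch below, which assumes a 64-bit iOS app and uses only public dyld and Mach-O APIs (the function name is illustrative, not an established helper), walks the load commands of the running executable and reads the cryptid field of LC_ENCRYPTION_INFO_64. While the app is running, the __TEXT pages this command describes are already decrypted in memory, which is exactly what dumping tools copy out.

```swift
import Foundation
import MachO  // public Mach-O structures and dyld APIs

/// Returns the `cryptid` of the main executable's LC_ENCRYPTION_INFO_64
/// load command, or nil if the command is not present.
/// A cryptid of 1 means the on-disk __TEXT pages are FairPlay-encrypted;
/// the copy mapped into memory at run time is always decrypted.
func mainExecutableCryptID() -> UInt32? {
    // Image index 0 is normally the main executable.
    guard let header = _dyld_get_image_header(0) else { return nil }

    // Load commands start right after the 64-bit Mach-O header.
    var cursor = UnsafeRawPointer(header) + MemoryLayout<mach_header_64>.size

    for _ in 0..<header.pointee.ncmds {
        let command = cursor.assumingMemoryBound(to: load_command.self).pointee
        if command.cmd == UInt32(LC_ENCRYPTION_INFO_64) {
            let info = cursor.assumingMemoryBound(to: encryption_info_command_64.self).pointee
            return info.cryptid
        }
        cursor += Int(command.cmdsize)
    }
    return nil
}
```

An App Store build typically reports a cryptid of 1 when inspected on disk, while a dumped and re-packaged copy reports 0; dumping tools simply write out the already-decrypted pages and clear the flag.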
On top of that, it is nearly impossible to prevent applications from being installed on jailbroken devices. To assess the likelihood of this happening, it is important to consider that:
- all iOS versions have eventually been jailbroken
- since jailbreaks are not always distributed publicly, the absence of a public jailbreak for a newer iOS version does not mean that it hasn't actually been jailbroken
- to reach a broad audience, most applications also work on older versions of iOS for which jailbreaks are publicly available.
All in all, this means that App Store encryption does not provide adequate protection for iOS applications against reverse engineering, code analysis and patching. (The August 2019 publication of a jailbreak for iOS 12.4 perfectly illustrates this point.)
Misconception #2: iOS apps are difficult to reverse engineer
iOS applications are mainly written in Objective-C or Swift. These languages are compiled to machine code, which is inherently less structured and more diverse than, for instance, Java or C# bytecode. The nature of machine code makes it less straightforward to translate the code back into the original source code. This has given rise to the misconception that iOS apps are nearly, if not entirely, impossible to reverse engineer and therefore don't need additional protection. The misconception is especially prevalent among developers who earned their stripes on Android and then switched to iOS applications.
However, this view fails to consider that the first higher-level programming languages (for example, C in the early 1970s) were compiled languages, not managed languages such as Java. This means that (1) the interest in analyzing and understanding machine code is not new and has culminated in decades' worth of research and expertise, and (2) mature technology that enables the reverse engineering of machine code is readily available.
Another factor to consider is that both Objective-C and Swift are highly dynamic languages, so the compiled binaries must contain a great deal of metadata to support this dynamism. That metadata can easily be extracted and parsed into an accurate blueprint of the application by readily available tools. The combination of these factors (tools that analyze machine code and tools that parse metadata) means that, contrary to popular belief, iOS apps are actually quite easy to reverse engineer and analyze.
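As a rough illustration of how much of this metadata is exposed, the sketch below uses the public Objective-C runtime API to list the classes an image registers and the selectors each of those classes responds to (the function name is illustrative, and the example only covers the Objective-C side; Swift-only types carry their own, differently formatted metadata that dedicated tools parse in much the same way). The same class and method names live as plain strings inside the binary, which is why static analysis tools can recover an equivalent blueprint without ever running the app.

```swift
import Foundation
import ObjectiveC

/// Prints every Objective-C class registered by the given loaded image,
/// together with its instance-method selectors. These names are stored as
/// plain strings in the binary's metadata sections, so static tools can
/// recover the same information without executing the app.
func dumpObjCMetadata(forImageAt imagePath: String) {
    var classCount: UInt32 = 0
    guard let classNames = objc_copyClassNamesForImage(imagePath, &classCount) else { return }
    defer { free(classNames) }

    for i in 0..<Int(classCount) {
        let className = String(cString: classNames[i])
        print("class \(className)")

        guard let cls = NSClassFromString(className) else { continue }

        var methodCount: UInt32 = 0
        if let methods = class_copyMethodList(cls, &methodCount) {
            defer { free(methods) }
            for j in 0..<Int(methodCount) {
                print("  - \(NSStringFromSelector(method_getName(methods[j])))")
            }
        }
    }
}

// Example: dump the metadata of the app's own main executable.
// dumpObjCMetadata(forImageAt: Bundle.main.executablePath ?? "")
```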
Misconception #3: Apple code signing prevents code tampering and re-distribution
Apple has implemented a complex code signing system that enables iOS to identify the creator(s) behind any given mobile application and to verify whether the app has been modified since it was last signed. It is a common misconception that code signing prevents third parties from tampering with applications and re-distributing them. The actual goal of Apple's code signing system is to protect the end-user by (1) making it impossible to install an app with an invalid signature, i.e. an application that has clearly been tampered with, and (2) prompting the user to explicitly accept the installation of an app whose signature is valid but not trusted by default by the device (as the App Store certificate is).
What the code signing system cannot do is prevent applications from being tampered with. As long as a reverse engineer manages to (re-)sign a modified application, they will be able to use or redistribute it. There are a couple of ways of doing this:
- Apple's free temporary developer signing certificates can be used to locally modify, install and use an existing IPA on a personal device.
- Enterprise certificates can be used to set up online signing services, which allows patched applications to be distributed through third-party app stores. It goes without saying that this is illegal, especially since the certificates used are often stolen.
An entirely different option for dealing with modified apps is to rely on jailbreaks to circumvent signature checks. This enables users to install any iOS application, whether it has a valid signature or not.
Apple's code signing system notifies end-users when they are about to install an untrusted application. Yet it does not stop malicious actors from modifying and distributing applications, nor does it stop users from downloading these apps, especially since many users actively look for apps that are ad- and paywall-free or that contain hacks for games, for instance.
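None of this means an app has to stay completely blind to what has happened to it. As one very rough illustration (the function name is hypothetical, and a determined attacker can simply patch such a check out), the sketch below checks whether the running bundle contains an embedded.mobileprovision file: copies installed through the App Store normally ship without one, so its presence can hint that the build was re-signed with a developer or enterprise certificate. Development and TestFlight builds also carry a profile, so a heuristic like this only makes sense in release configurations and is no substitute for dedicated app protection.

```swift
import Foundation

/// Crude heuristic: App Store installs normally do not include an embedded
/// provisioning profile, while builds signed with a developer or enterprise
/// certificate do. Treat the result as a hint only; the check itself can be
/// removed by the very tools it tries to detect.
func bundleLooksResigned() -> Bool {
    return Bundle.main.path(forResource: "embedded", ofType: "mobileprovision") != nil
}

// Example: report the hint to the back end rather than blocking locally.
// if bundleLooksResigned() { print("Warning: this copy may have been re-signed") }
```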
Conclusion
The discussed misconceptions cause many developers to underestimate the need for iOS application protection, leaving their applications vulnerable to reverse engineering and hacking. A number of real-life use cases attest to the vulnerability of iOS applications:
- Streaming apps are patched to remove advertisements, leading to million-dollar losses in ad revenue.
- Banking apps are tweaked to run on jailbroken devices, which compromises the platform security mechanisms they require to provide their customers with the level of service they expect.
- Applications are patched to expose the way in which they communicate with the back-end; this can be used to build custom front-ends, provide unintended functionality or even carry out back-end attacks.
Every app developer and publisher should be aware that Apple’s security mechanisms primarily focus on protecting the end-user and fall short of providing adequate app protection. Only mobile application protection software is specifically designed to safeguard apps, their developers and the companies that deploy them. Since iOS applications are ultimately just as susceptible to reverse engineering, tampering, cloning, etc. as their Android counterparts, mobile app protection is not only advised but necessary.
Learn More
Mobile application security testing is critical early in the development process. Learn about iOS app development protection and Guardsquare's free mobile app security testing product.