iOS does not take full advantage of its built-in encryption measures, leaving potentially avoidable security vulnerabilities, according to cryptographers at Johns Hopkins University (via Wired).
Using publicly available documentation from Apple and Google, law enforcement reports about bypassing mobile security features, and their own analysis, the cryptographers assessed the robustness of iOS and Android encryption. The research found that while encryption infrastructure on iOS "sounds really good," it is largely left unused:
"On iOS in particular, the infrastructure is in place for this hierarchical encryption that sounds really good," said Maximilian Zinkus, lead iOS researcher. "But I was definitely surprised to see then how much of it is unused."
When an iPhone boots up, all stored data is in a state of "Complete Protection," and the user must unlock the device before anything can be decrypted. While this is extremely secure, the researchers highlighted that once the device has been unlocked for the first time after a reboot, a large amount of data moves into a state Apple calls "Protected Until First User Authentication."
Since devices are rarely restarted, most data spends most of its time in the "Protected Until First User Authentication" state rather than in "Complete Protection." The advantage of this less secure state is that decryption keys are stored in quick access memory, where they can be swiftly accessed by applications.
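For app developers, Apple's Data Protection API allows individual files to be kept in the stronger class regardless of the device-wide default. A minimal sketch of how this looks in Apple-platform Swift (the function names here are illustrative, not from the research):

```swift
import Foundation

// Write a file so it remains encrypted whenever the device is locked,
// i.e. the "Complete Protection" class (NSFileProtectionComplete),
// rather than the default "Protected Until First User Authentication".
func saveWithCompleteProtection(_ data: Data, to url: URL) throws {
    try data.write(to: url, options: .completeFileProtection)
}

// Upgrade the protection class of an already-existing file.
func upgradeToCompleteProtection(atPath path: String) throws {
    try FileManager.default.setAttributes(
        [.protectionKey: FileProtectionType.complete],
        ofItemAtPath: path
    )
}
```

The trade-off is the one the researchers describe: with `.completeFileProtection`, reads and writes fail while the device is locked, so apps that need background access (messaging, syncing) tend to fall back to the weaker default class.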
In theory, an attacker could find and exploit certain types of security vulnerabilities in iOS to obtain encryption keys in the quick access memory, enabling them to decrypt large amounts of data from the device. It is believed that this is how many smartphone access tools work, such as those from the forensic access company Grayshift.
While it is true that attackers require a specific operating system vulnerability to access the keys, and both Apple and Google patch many of these flaws as they are discovered, the researchers argue the risk could be reduced by hiding encryption keys more deeply.
"It just really shocked me, because I came into this project thinking that these phones are really protecting user data well," says Johns Hopkins cryptographer Matthew Green. "Now I've come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?"
The researchers also shared their findings and a number of technical recommendations with Apple directly. A spokesperson for Apple offered a public statement in response:
"Apple devices are designed with multiple layers of security in order to protect against a wide range of potential threats, and we work constantly to add new protections for our users' data. As customers continue to increase the amount of sensitive information they store on their devices, we will continue to develop additional protections in both hardware and software to protect their data."
The spokesperson also told Wired that Apple’s security work is primarily focused on protecting users from hackers, thieves, and criminals looking to steal personal information. They also noted that the types of attacks the researchers highlighted are very costly to develop, require physical access to the target device, and only work until Apple releases a patch. Apple also emphasized that its objective with iOS is to balance security and convenience.