> an adversary could find a vulnerability in the LLVM backend to obtain remote code execution on Apple’s Bitcode compilation infrastructure to inject a compiler trojan that would affect every single app on the App Store that was submitted with Bitcode
I'm not clear on how Bitcode changes this, apart from expanding the already huge attack surface of the app store. Code execution inside app store servers is already game over for end-user privacy, isn't it?
The bit about how Bitcode makes backdoor injection harder is also very hand-wavy, since backdoor injection into fully compiled binaries is a teenaged pastime going back to the late 1980s. About the most you can say here is that Bitcode makes a trivial task trivial-er.
The concern about introducing side-channels into crypto apps seems legitimate (though: if you believe this is a real issue, and I do too, you're also saying "I will never trust Javascript cryptography"). I wouldn't want Signal submitting Bitcode.
> The concern about introducing side-channels into crypto apps seems legitimate
If you're relying on compiler optimization levels to enforce properties of your crypto code, you're gonna have a bad time. What happens when compiler behavior changes from minor revision to minor revision? Are you going to stare at the assembly after every build to make sure nothing that was fair game for the optimizer to play with has changed, or are you going to write your code so that it does the same thing under any (reasonably correct) compiler and optimizer?
If your code has UB in it, you're already hosed and it's just a matter of when. Fix that. There are primitives like RtlSecureZeroMemory to deal with the intersection of cryptography/security and compiler optimizations. We've mapped this terrain.
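For concreteness, a minimal C sketch of the zeroing problem those primitives address; the function names here are illustrative, not any particular library's API:

```c
#include <stddef.h>
#include <string.h>

/* Naive cleanup: if the compiler can prove `key` is never read again, it is
 * allowed to delete this memset entirely (dead-store elimination). */
void naive_wipe(unsigned char *key, size_t len) {
    memset(key, 0, len);
}

/* One common workaround: call memset through a volatile function pointer so
 * the optimizer cannot assume the call is removable. Same intent as
 * Windows' SecureZeroMemory/RtlSecureZeroMemory or C11's memset_s. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

void secure_wipe(unsigned char *key, size_t len) {
    memset_ptr(key, 0, len);
}
```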
That works once. Are you going to do it every time you compile?
I guess the side channel crew can be appeased in the LLVM bitcode view of the world - write your side-channel-resistant stuff in inline assembler. The bitcode optimizer can't touch it then. (edit: Apple probably keeps you from doing this, though)
Constant-time C? Some instructions have data-dependent execution times, even when no branches are used. Some memory operations also have data-dependent timings.
Close, but you're actually using inline asm in most cases.
Then you're verifying the microarchitectural details of every CPU it executes on. (Intel's OpenSSL countermeasure to an RSA timing attack was dependent on CPUs not coming out with narrower cache lines than a predefined constant, but one did later.)
It's difficult to get high assurance in software. Best bet is to design algorithms to avoid data-dependent accesses (see: djb).
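A small C sketch of what "avoiding data-dependent accesses" looks like in practice - illustrative only, and the names are mine:

```c
#include <stdint.h>
#include <stddef.h>

/* A secret-indexed load (table[secret]) leaks the index through the data
 * cache. The alternative below touches every entry and selects the wanted
 * one with masking, so the memory access pattern does not depend on the
 * secret. (The `i == secret_idx` comparison itself still has to be lowered
 * branch-free by the compiler, which is exactly the kind of thing you can't
 * rely on, per the discussion above.) */
static uint32_t ct_lookup(const uint32_t *table, size_t n, size_t secret_idx) {
    uint32_t result = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t mask = (uint32_t)0 - (uint32_t)(i == secret_idx);
        result |= table[i] & mask;
    }
    return result;
}
```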
I agree most of his concerns are not very relevant. The legitimate concern I see is verifiability of code custody from the developer to the end user. However, FairPlay encryption of the executable means you already have to dump the decrypted binary in order to perform that comparison, since the developer never gets to see the distributed version otherwise.
I actually think it would be better if Xcode did both encryption and code-signing at the developer side, then Apple just added another signature on top of the full binary once it was approved for App Store distribution. This way, developers could easily verify that their signature was intact and that nothing had been modified in between.
> The legitimate concern I see is verifiability of code custody from the developer to the end user
Why is this a legitimate concern? I've never heard this even suggested as a remote possibility of a concern before. If you're already so paranoid as to suspect Apple of inserting malicious code into your application, it's a very tiny step to believing that they'd still serve you the original app and only serve the modified copy to certain targets, thus making any binary verification you can do meaningless.
> I actually think it would be better if Xcode did both encryption and code-signing at the developer side
Apple doesn't serve the same encrypted binary to everyone. Each user has an independent set of FairPlay keys and Apple serves up content encrypted for that particular user. That's kind of the whole point of FairPlay. If Apple served the same encrypted blob to everyone, then the encryption would be completely useless as everyone could decrypt it.
> Apple doesn't serve the same encrypted binary to everyone. Each user has an independent set of FairPlay keys and Apple serves up content encrypted for that particular user. That's kind of the whole point of FairPlay. If Apple served the same encrypted blob to everyone, then the encryption would be completely useless as everyone could decrypt it.
I used to think this as well. However, (my memory is fuzzy on this so take it with a grain of salt) a year ago I downloaded the same app from two different iTunes accounts. Both encrypted apps were identical, and had the same encrypted blob. So it seems Apple is not distributing a unique encrypted blob to each user, at least as far as apps are concerned.
There's no need to trust Apple with re-signing the binary, it just happens to be implemented that way. It throws away the reason for the original code signature, which was for the developer to securely say "I created this." You could just upload unsigned binaries to the App Store in this case, saving a lot of complicated work with maintaining signing profiles.
It's not that I don't trust Apple, it's that there's no reason to trust them in this case. Either stop forcing developers to deal with code signing for App Store apps or stop destroying their signature, allowing verification and logging of builds.
And yes, the same encrypted binary IPA is served to everyone, but it gets customized by the device/iTunes.
> I'm not clear on how Bitcode changes this, apart from expanding the already huge attack surface of the app store. Code execution inside app store servers is already game over for end-user privacy, isn't it?
I think the point is that it would be harder to detect if this happened, as, with Bitcode, the author loses the ability to compare binaries downloaded from the App Store to those produced by Xcode on their machine.
> FairPlay encryption already made it really difficult for developers to make sure that Apple distributes the build you think they are distributing. But at least, with jailbroken iPhone, I’m currently able to get some idea of what the binary I’m distributing looks like and compare it to the one I submitted to Apple.
No, they lose one naive way of performing that comparison.
But even stipulating that you're right, this still doesn't make sense. The argument here is that someone with control over Apple's code distribution infrastructure could not, in the absence of an all-bitcode submission system, surreptitiously backdoor iOS apps. But of course, there's a million ways Apple (or someone who owns up Apple) can backdoor apps in ways that don't alter the app binaries themselves directly at all.
Are you aware of any side channels introduced by compiler optimizations that weren't present in unoptimized code? Recompiling with -O0 vs -O2 doesn't seem likely to fix your pkcs padding bug. Or hmac strcmp.
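To make the strcmp point concrete, here's a hedged sketch of the usual fix - note that it's a change to the algorithm, not to the optimization level (names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* The "hmac strcmp" bug: an early-exit comparison leaks how many leading
 * bytes of the MAC match, and it does so at -O0 just as much as at -O2.
 * The fix is to accumulate differences over the full length and only
 * inspect the result at the end. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;   /* 1 if equal, 0 otherwise */
}
```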
Interesting. Looks like it affects all versions of LLVM (at least those that compile).
I guess with LTO the result could be the same, but asking to always inline seems like asking the compiler to fuse operations, and then it sees shortcuts. Dunno.
`always_inline` is only there to make -O0 smaller for presentation purposes. It has no impact on -O2.
The trick here is that -march=i386 predates CMOV, and that LLVM specializes code emission for bool (and _Bool). If the secret bit were a uint32_t, there wouldn't be a branch anymore.
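A paraphrased sketch of that pattern - I haven't seen the original snippet, so this is only illustrative:

```c
#include <stdint.h>

/* With a bool/_Bool condition the compiler knows the value is 0 or 1 and is
 * free to lower the select to a conditional branch on targets without CMOV
 * (e.g. -march=i386). Widening the secret to a full 32-bit mask takes that
 * specialization away. */
uint32_t select_with_bool(_Bool secret, uint32_t a, uint32_t b) {
    return secret ? a : b;                          /* may become a branch */
}

uint32_t select_with_mask(uint32_t secret_bit, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - (secret_bit & 1); /* 0 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);                /* branch-free source */
}
```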
There is one class of cryptographic code, however, that is entirely unsuitable to distribute in Bitcode---DPA/EM-protected code. EM attacks on mid-range ARM chips have been demonstrated recently [1, 2].
Protecting against these attacks usually involves splitting the computation into 2 or more "shares" (see, for example, [3]); these require strict control of which register each word goes into, and which registers overwrite which. This cannot be enforced in Bitcode---or any other bytecode, for that matter---and direct assembly must be used.
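For readers unfamiliar with shares, here's a minimal first-order boolean-masking sketch in C - purely illustrative (types and names are mine), and note that the whole point above is that C gives you no control over the register allocation that makes this safe in practice:

```c
#include <stdint.h>

/* The secret byte is never handled directly, only as two shares whose XOR
 * equals it. Linear operations (XOR) can be computed share-by-share;
 * non-linear ones need more care. Which registers hold which share is what
 * matters for real DPA/EM resistance, hence the need for hand-written
 * assembly rather than bitcode. */
typedef struct { uint8_t s0, s1; } masked_u8;

static masked_u8 mask_u8(uint8_t secret, uint8_t random_byte) {
    masked_u8 m = { random_byte, (uint8_t)(secret ^ random_byte) };
    return m;
}

static masked_u8 masked_xor(masked_u8 a, masked_u8 b) {
    masked_u8 r = { (uint8_t)(a.s0 ^ b.s0), (uint8_t)(a.s1 ^ b.s1) };
    return r;
}

static uint8_t unmask_u8(masked_u8 m) {
    return (uint8_t)(m.s0 ^ m.s1);
}
```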
> I'm not clear on how Bitcode changes this, apart from expanding the already huge attack surface of the app store.
It massively expands the attack surface, since Apple is running untrusted IR input through LLVM, server-side.
> The bit about how Bitcode makes backdoor injection harder is also very hand-wavy, since backdoor injection into fully compiled binaries is a teenaged pastime going back to the late 1980s. About the most you can say here is that Bitcode makes a trivial task trivial-er.
It makes it WAY more trivial, expands the scope of what can be inserted by allowing essentially unlimited modification of the source without concern for linking/symbols/fixed offsets/section sizes, and allows for a degree of automation of complex changes across applications that would be difficult otherwise.
None of that is strictly earth-shattering, but add to this Apple's shipping a binary that the developer can't trivially verify by comparing against their local build, and you have what amounts to total opacity coupled with trivially easy application-wide code modifications.
> It massively expands the attack surface, since Apple is running untrusted IR input through LLVM, server-side.
Apple already runs your binary through a bunch of binary analysis tools, I'm not sure why LLVM is significantly different. A remote exploit is a remote exploit, and LLVM doesn't need escalated permissions to compile things. Besides, it's probably safe to assume Apple sandboxes all of this stuff.
> Apple's shipping a binary that the developer can't trivially verify by comparing against their local build
Who actually does that? I've never heard of anyone even trying to do this before. Besides, if you are that concerned about Apple tampering with your binary, it's a very small step to believing that Apple would still serve you the original binary and only send the tampered version to a limited audience, thus making any such verification meaningless.
You don't know how large the existing attack surface is. I suspect that it's already large enough that adding LLVM to the mix isn't terribly meaningful. Especially once you assume that all this stuff is sandboxed anyway.
> Anyone that debugs an issue in their shipped code.
You're going to have to do better than that. Speaking as someone who's been developing iOS apps since the moment the app store opened, and who knows a lot of other iOS developers, I've never heard of anyone downloading an un-DRM'd version of their app store app for debugging purposes. The closest I've come to that is verifying what entitlements file the App Store version was built with (but this was done with iTunes Connect, not actually downloading the ipa).
The only benefit that you can get from downloading the binary is inspecting the assembly, which itself is only useful if you're hitting an optimizer bug, but even then, you'd actually want to work with a non-app-store build because you can't debug app store builds. So you won't actually be downloading the app store binary anyway, you'd just use the existing archive you uploaded to begin with, or even build a new one from the source.
Which means that, given all that, the only case here where you'd have a legitimate reason to want to download the app store version is if Apple manages to introduce an optimizer bug when recompiling your bitcode that you can't reproduce yourself. Not only is such a bug likely to be extremely rare, you should still be able to reproduce it yourself if it happens, because any new optimizations should also be available in the latest Xcode.
> Anyone doing security research.
Irrelevant to the topic. The case here was a developer verifying their own product. Bitcode changes absolutely nothing with regards to people downloading other developers' apps.
> I've never heard of anyone downloading an un-DRM'd version of their app store app for debugging purposes.
That is because it has never been necessary before, because up until now you have always had a binary copy in Xcode -> Organizer -> Archives. Please think.
Also note that I am refraining from judging you or anyone you know on the telling account that neither you nor anyone you know has, for years, had the need to understand your binaries.
If your binary has a chance of functioning differently on someone else's device than on your testing devices, then you should care. But then again, your business may not depend on having an app that works, but on producing a lot of apps.
If you are religious in trusting Apple's decisions despite all your years as an Apple developer, then you should at least acknowledge that some developers and their companies may have different (less faith-based) values than yourself.
You've crossed the line from making wild unsubstantiated claims to being rather offensive (and, it appears, deliberately so). This is quite unacceptable.
You could still fall back to using inline assembler for code that's sensitive to offering a timing side channel. Inline Assembler doesn't get optimized.
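Something like the following - GCC/Clang extended asm, 32-bit ARM only, purely illustrative, and the constraints and clobbers would need real review before use:

```c
#include <stdint.h>

/* A branch-free select written as inline assembly so the optimizer cannot
 * rewrite the instruction sequence. Real code needs per-architecture
 * variants. */
static inline uint32_t asm_select(uint32_t mask, uint32_t a, uint32_t b) {
    uint32_t result, tmp = b;
    __asm__ volatile(
        "and %0, %2, %3\n\t"   /* result = a & mask     */
        "bic %1, %1, %3\n\t"   /* tmp    = b & ~mask    */
        "orr %0, %0, %1\n\t"   /* result = result | tmp */
        : "=&r"(result), "+&r"(tmp)
        : "r"(a), "r"(mask)
        : "cc");
    return result;
}
```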
I don't think there are any additional security issues regarding Apple altering the binaries.
iOS devices are already running a closed-source proprietary OS, which can easily set a hardware breakpoint in a binary and then do anything that could be done by modifying the binary itself, without leaving traces anywhere (other than timing and cache effects).
Apple even designs and manufactures the CPU so they could even modify the CPU to detect instruction patterns and siphon off private keys without any detectable timing or cache effects at all.
Of course, that doesn't mean that it's all fine, but rather that the system is already completely broken to begin with in this respect.
As much as I'd like to be upset about the lack of binary verifiability throughout the chain, as the article even points out, without jailbreaking it is already impossible. We have no idea what code Apple is telling our devices to run, regardless of the form in which it is submitted to the App Store.
The security implications of new compiler optimizations potentially eliminating security-essential code really interest me. As do the plain old reliability implications of hitting new compiler bugs.
It's going to be really really really Fun with a capital PH when Apple enables some new optimization that contains a subtle bug, and recompiles everybody's apps, and us third parties are left holding the bag trying to figure out why our apps are crashing with no visibility whatsoever into the process.
Here's something to ponder: the security implications of a new CPU microarchitecture even when running the exact same code. From generation to generation, various x86 instructions have varied in latency. Register renaming gets better. Incorrect hazards get eliminated. No reason to believe it can't happen to ARM.
There's no guarantee an ARMv99 will run even identical code at a rate directly proportional to an ARMv7.
You're absolutely right, of course. Anything that depends on machine instructions taking a certain amount of time (when that's not guaranteed by the spec - e.g. I believe Intel's AES instructions are guaranteed not to have data-dependent timings) is potentially screwed on newer hardware.
Bitcode does add potentially new and exciting failure modes, though, since you might start failing even in the absence of new hardware, and you might encounter problems other than timing (e.g. failing to zero out sensitive data because dead store elimination got smarter, or buggier).
Even more fun when Apple recompiles the apps so that new users get one behavior but people who had already downloaded the supposedly-same app get quite another behavior. This takes all of the existing problems with non-reproducible builds and makes them worse.
The real issue is who gets to sign the code. Formerly, you could sign the code yourself, so the cryptographic "chain of custody" between you and your users would be preserved. Now, since Apple generates the code that users see, you can't check and sign it. Apple can sign it, somebody who has compromised Apple's infrastructure can sign it, but you can't (unless you're the one who compromised their infrastructure). So, basically, this is a giant MITM just waiting to be exploited.
Apple already re-signs all apps sent through the App Store. That plus Fairplay encryption makes it unnecessarily difficult for end recipients to verify the chain of custody for the code.
I don't quite get why Xcode couldn't generate both the LLVM bitcode + the fat binary for all architectures, along with any flags. Then Apple could regenerate the same binary (to verify the two match) but use the bitcode for their program analysis tools. Then, the developer's signature could be left intact on the binary and Apple would add an additional signature over the whole thing that indicated it was approved for distribution via the App Store.
Because Apple explicitly wants to be able to regenerate binaries from the bitcode, which means that any locally-generated binary is meaningless. The claimed reason for this is to take advantage of new optimizations added to LLVM (which is a perfectly sensible thing to take advantage of), but probably the more useful reason is to take advantage of new microarchitectures that Apple introduces in new devices.
Yes, this is the real thing that should be debated -- where does the developer's responsibility end and the App Store's begin?
If I created an app, both distributed internally (say, to my corporate users) and via the App Store, but the App Store version has a bug because they're using a different ARM backend than my Xcode, that will be a real pain to debug.
Translating bitcode to a binary would have to be done by an open-source tool (LLVM?), and the bitcode would have to be available to the end user. Then the end user could download the bitcode, download LLVM, translate the bitcode, and compare the result to the binary they got from the App Store. But it's unlikely to happen, because this feature would be hard to sell.
True, but what is now a mutable choice becomes less so when the technology involved doesn't support any other option. Their approach also precludes a second signature embedded and checked within the app itself, independent of the app-store signature.
It's not significantly different, although iOS hasn't had bytecode before.
One thing that is different is that there's no transparency into the process. If your Java code is crashing, you can look at the code the JIT is generating. If your Java code is only crashing on certain hardware or OS revisions or whatever, you can see what's different about the JIT in those circumstances and at least make an attempt to investigate. With Apple's system, you have no idea what compiler version they're running, when they make changes, what machine code your users end up running, etc.
java/dalvik/whatever bytecode is simple, has a clear formal specification, and is not vulnerable to the same scale of "undefined behavior" traps that you'll find in C/C++ and LLVM IR.
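A one-function example of the kind of UB trap meant here - the compiler is allowed to fold the check away entirely, which has no analogue in Java bytecode:

```c
/* Signed overflow is undefined in C, so the optimizer may assume it never
 * happens and fold this "overflow check" to a constant 0 at -O2. */
int will_overflow_if_incremented(int x) {
    return x + 1 < x;
}
```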
He's right, but he's almost too right: absolutely everything he says is a problem with bitcode also applies to normal app store distribution and to all other centralized, cloud-powered OSes and apps. Therefore I don't see this as a good argument against bitcode -- not using bitcode does not prevent anything he describes from occurring, since anything that can be done to bitcode can be done to machine code.
(1) Apple has the signing master keys for app store certificates. It would be easy for them to binary-patch your app with anything they want, generate their own signing certificate in your name, and re-sign the app as they push it out to users. Since app store transactions are on a per-customer basis this could be done only for some customers -- such as only residents of a certain country, people whose names are on a list, etc. It's very unlikely that anyone would notice.
(2) It would also be easy for an attacker to do this if they compromised Apple's core infrastructure as described in this article. Then they could generate signing certs in other developers' names and trojanize apps in exactly the same way. A "doomsday hack" that trojanized all Apple devices in the world within hours is not impossible. If they all started, say, DDOSing core BGP and DNS systems at the same time, such an attack could take down the entire Internet.
(3) Apple, since they control the platform and can remotely patch it at any time, needn't bother with trojanizing apps. They could simply trojanize the OS kernel or user-space OS components and accomplish anything a trojanized app accomplishes.
The same also applies, by the way, to Windows and also to Linux if you trust remote code repositories and do not compile everything yourself from verified known-safe source code. It also applies to from-source-compiled OSes and distributions if you did not painstakingly check every line of code yourself. Google the "underhanded C contest."
If you run code that is not yours, you are trusting someone else. It's a subset of a larger principle: civilization implies trust. Without trust you cannot have specialization of labor, commerce, or law. Without those there could be no advanced technology or infrastructure, so none of this would matter.
To use an Apple device is to trust Apple, at least insofar as that device and any data you keep on it goes. To use Ubuntu and its repositories implies the same level of trust in Canonical, etc.
The privacy problems of the digital age are not entirely technical in nature and do not have entirely technical solutions. They can only really be solved legislatively and socially.
>The same also applies, by the way, to Windows and also to Linux if you trust remote code repositories
Curiously, Google/Android has developers sign the app, and that exact signed package is what's distributed, so at least the app developer can verify that the distributed app is identical to the one shipped to the store.
On the other hand, Amazon/Android signs apps with their own keys, so they suffer from the same issue as Apple.
Users don't check the signature of apps that they download, their phone does it for them.
So even if apps are signed by the developer, the app store owner can throw away the signature, modify the app and then sign it with their own key. People downloading and running the app won't see a difference.
If a developer downloads the app, they can verify it.
But more to the point, I published an app on the Play Store that actually verified its own signatures (as an anti-piracy measure) and it worked correctly. Not only that, but it used the signature as a key to decrypt some of the assets, so a changed signature would mean the app would completely fail to work.
So it's a solvable problem on Android. But not at all on other ecosystems.
> He's right, but he's almost too right: absolutely everything he says is a problem with bitcode applies to normal app store distribution and all other centralized cloud-powered OSes and apps. Therefore I don't see this as a good argument against bitcode -- not using bitcode does not prevent anything he says from occurring since anything that can be done to bitcode can be done to machine code.
Not true. Not using bitcode lets you select optimization options and gives you a "receipt" for which CPUs your code will run on, because you provide those slices yourself. With bitcode, Apple can also (in theory at least) change the endianness of the target code. Instead of users being unable to install the app (a bad idea), they will install an app that breaks (an even worse idea). Endianness is one example; optimizations are another; there can be more. Hiding even MORE stuff from a developer who in turn is responsible to a customer and to users can only raise development cost (because a maintenance contract is usually part of the development cost), not lower it. I do not need MORE hidden issues and LESS control over work I put my face on.
Something we're a little worried about (shipping middleware for games on iOS) is the potential difficulty of debugging client issues when we don't (I assume?) have native binary identical libs.
There are intellectual property concerns. You are now giving Apple your LLVM IR along with the program symbols. What happens if you have sensitive code in your application that you don't want Apple reverse engineering and having access to?
LLVM bitcode also limits the kind of tamper resistance and obfuscation you can put into your apps. Fairplay offers almost no security if your attacker has a jailbroken device. He can simply dump the decrypted app from memory and reverse engineer it. If you care about protecting your secrets, or preventing your app from being pirated and redistributed on Chinese app stores, you could at least apply fairly sophisticated obfuscation & integrity checks to the app binary. With LLVM bitcode that all becomes much harder, and applying integrity checks to the app becomes impossible.
Also, Apple doesn't really need the LLVM bitcode of the app to carry out app thinning. All they really need to do is extract the proper architecture from a FAT binary, and send that to the user's device, instead of the entire FAT binary.
It makes me wonder what will they even be using the LLVM bitcode for? Will it become a requirement for all apps submitted to the app store sometime in the future? If so, why?
Symbols are obfuscated or removed entirely if you choose to build without them; you don't have to hand Apple your .dSYM if you don't want. I'm not sure what other metadata you're referring to.
I am referring to ObjectiveC metadata, which programs like IDA Pro can use to identify function names/parameters, which give valuable insights to reverse engineers.
With LLVM bitcode, it doesn't matter if you don't hand over your .dSYM file to Apple, all the symbol information (and more!) is available in the bitcode.
There is metadata in a fat binary (or in bitcode) that makes reversing more convenient, but there is none that makes reversing possible; reversing raw ARM assembly spat out by a normal compiler backend is always possible.
It's 2015. Lack of source code is no longer a meaningful obstacle for attackers.
I agree, reverse engineering is always possible :) However, reverse engineering with the LLVM bitcode present is much much easier, which is a problem.
Also, if Apple requires apps to be submitted as LLVM bitcode, then certain things like integrity checks and certain useful types of obfuscation will become impossible, making apps much easier to reverse engineer, or to pirate and repackage on another app store.
I disagree that LLVM bitcode makes reverse-engineering any easier on iOS. You can do everything in bitcode that you can do in native ARM code. The limitations are already there in terms of code signing, sandboxing, and page protection and bitcode distribution doesn't change any of these OS attributes.
Are you concerned about Apple reversing your code? You can still use arbitrary types (e.g. int64 instead of a pointer) and give them no additional data about the structure of your code. As you pointed out elsewhere, tools like OLLVM perform obfuscation at the bitcode level already.
I'm guessing Apple made this change to make it easier to do program analysis of iOS apps being published. You can certainly find bugs easier in bitcode that has proper type information and clear differentiation between safe and unsafe branches. One of the first steps in program analysis tools is to lift native code to an IR in order to determine its correctness.
With bitcode distribution, Apple has potentially made it easier on themselves to skip the lifting and type recovery steps for conforming apps in order to look for bugs. But unless they start requiring bitcode conform to certain additional standards, you can always transform bitcode to obfuscated bitcode, destroy type information via aliasing, etc.
I concede that LLVM bitcode can be obfuscated just like machine code (with enough effort, and LLVM bugs aside). However, you're still losing the ability to self-checksum your code, which in turn means you can't protect against things like piracy or unwanted modification of your binary.
I agree also that Apple probably made this change to better analyze programs being submitted to the app store. That and to recompile programs to use intrinsics more efficiently, as new intrinsics become available.
And finally, while I would not be concerned about Apple reversing my code, certain companies are and have to undertake steps to make that as difficult as possible.
If you are concerned about anyone reversing your code (Apple or otherwise), you must obfuscate it. Either you work at the ARM level or the bitcode level (watchOS for now), but the basic techniques are the same. Or, you can avoid changing tools by doing source-level transformation.
One of the companies probably impacted by this change is Arxan or other obfuscators. They have to change both front and backend to be LLVM -> LLVM.
[1] Except for the case I mentioned before, where you predict the generated code for known architectures and use that to generate your constants. This still breaks if Apple generates code for a new architecture without your help.
I dispute the claim that bitcode makes this task so much easier that it presents a real threat. Everyone with a financial or policy interest in backdooring Signal can do so on stripped ARM binaries without any trouble.
Iirc Signal is open source, so anyone can simply insert a backdoor into the source code and compile it, and then distribute the modified binary.
However, I'm talking about applications for which the source code is not available. If you properly use integrity checks, signature verification, and obfuscation, you can make repackaging and piracy of applications significantly more difficult. With LLVM bitcode this is no longer the case.
Thank you, I had forgotten about class guard. It's a great tool, but I recall having problems getting it to work on larger applications.
It's also possible for you to remove the ObjectiveC metadata from a binary entirely, obfuscate/encrypt it, and then add it back to the ObjectiveC runtime as needed. With LLVM bitcode this becomes much more difficult (but maybe not impossible...)
I am actually looking at that bitcode right now :) You can use this program ( https://github.com/AlexDenisov/bitcode_retriever ) to extract it from the Mach-O file, then use xar to unpack the archive, then use llvm-dis to disassemble the bitcode. You will see it's chock-full of information that is otherwise not available in the machine code.
This entire time I was referring to code obfuscation/integrity checks, and why bitcode is a problem for people who want to obfuscate their code. I was not referring to apps like Signal at all.
Yes, if you were going to write an obfuscator, LLVM would be a good place to start. But, to a first approximation, nobody in the world obfuscates their mobile apps.
Actually, I should raise the 'NateLawson alarm; he can probably tell us precisely how many mobile developers obfuscate on either platform.
Good question. Obfuscation is more common on Android (10% of apps) than iOS (1%), but it's also pretty interesting where it's used most.
First, throw out Proguard on Android. That's not really obfuscation, it's an optimizer. Sure it renames variables and removes dead code, but that's only a problem if you have a naive system that relies on class & method names. Proguard is used in 20% of Android apps though.
The most common issue with Android is the malleability of its bytecode it inherited from Java. It's common to patch apps or repackage them for other app stores (after swapping advertising API keys). There's no work required to rebase addresses or anything after patching, unlike native code. This drives the need for obfuscation, but also provides a useful springboard for implementing it.
We've found that many of the obfuscation schemes on Android are custom. This is because of the liberal policy towards using native code via JNI and the ease of inserting a custom Java classloader. With iOS's code signing and memory protection, you can't write a custom loader in your app that decrypts and executes new code.
However, every other obfuscation measure is available in iOS. You can build opaque predicates, mangle control-flow, do arbitrary pointer arithmetic, and lots of things that are impossible in Dalvik. (However, you can just write a thin Java wrapper around your .so in Android and you've got the same capabilities and more.)
With LLVM bitcode distribution, you're really potentially losing only one thing in your obfuscation toolbox: self-checksumming. You couldn't do self-modifying code anyway due to code signing, so nothing changed there.
While I haven't implemented it myself, I believe you could still even do self-checksumming via clever use of intrinsics. That is, you carefully lay out the instructions and data and predict what you'll expect to see when it's translated to armv7, v7s, arm64, etc. That's the value you'll check for.
Actually, before LLVM bitcode, self-checksumming was quite possible on iOS, and is widely used by many commercial obfuscators. Furthermore, self-checksumming is the root of all good anti-tamper schemes. It prevents someone from automatically patching out your license checks/anti debug/anti jailbreak/whatever. Without self-checksumming, someone can easily patch your app to pirate it, inject a backdoor, or modify it in some unwanted way. So it's really a big loss if you care about those kinds of things.
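For anyone unfamiliar with the technique, a minimal sketch of what self-checksumming looks like; the hash choice, names, and the patched-in constant are illustrative, not any particular vendor's scheme:

```c
#include <stdint.h>
#include <stddef.h>

/* Hash the bytes of your own code as mapped in memory and compare against a
 * value computed from the binary you actually shipped (typically patched in
 * after linking). This presumes you know the final machine code; if Apple
 * regenerates it from bitcode, the expected value can't be known at
 * submission time. */
extern const uint32_t expected_checksum;

static uint32_t fnv1a(const uint8_t *p, size_t len) {
    uint32_t h = 2166136261u;          /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;                /* FNV prime */
    }
    return h;
}

int code_is_untampered(const void *code_start, size_t code_len) {
    return fnv1a((const uint8_t *)code_start, code_len) == expected_checksum;
}
```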
Self-checksumming via intrinsics and clever layouts is a very interesting idea, but it's impossible for you to checksum a program when you're not 100% sure what it will look like after compilation. For example, even if you manage to correctly 'guess' what machine code your LLVM bitcode would be translated to and adjusted your checksums accordingly, Apple could just update their LLVM bitcode compiler at some point in the future, and invalidate your checksums.
Also Android is a much more interesting platform than iOS, because it allows for self-modifying code, which various commercial obfuscators use in various interesting ways :) Also there are very interesting things you can do with reflection or the JNI in Java.
Regarding Android's flexibility, I've written an obfuscator (on another Java-based platform) that took a DSL and generated half the code in Java and half in C, intertwining the computation between the two processors via IPC.
The majority of people might not obfuscate their mobile applications, but I think that's a problem in tooling and awareness. A lot more would if they could, and if they knew how.
That being said, there is also a very sizable minority of companies that do obfuscate their mobile applications.
Aside from the security issues: Apple made much of "write once, run anywhere" being a terrible idea (e.g. Java, Flash, and cross-platform iOS/Android frameworks), yet now it is a fabulous idea that Apple just thought of.
Obviously you're welcome to do what you like. But it seems how these things go with Apple is: they introduce it, and it's optional, then they build on it some (still optional), then they say "using this is a really good idea", then they say "a year from now you have to use this". And maybe not a year. So you're free to exercise your choice... for a while.
Sometimes there is a secret master plan behind these actions, sometimes they're doing what makes the most sense. I can see how bitcode might make a lot of sense from a centralized distribution standpoint, and it may work for certain types of apps. But you still may prefer to test and experiment with the actual code that will be distributed to your customers. Seems like a reasonable requirement. Apple ought to have an extraordinarily good and transparent reason to require centralized compilation in the future.
I agree with the sentiment. However, if enough people, or Taylor Swift, decide to tell Apple to GTFO, they will back down on a retarded idea, or at least modify it to a more (but not completely) acceptable extent.
A good argument against the bitcode is the huge amount of open source libraries in C used by thousands of popular apps (such as openssl), which may not, or do not, particularly enjoy the bitcode treatment at AppStore.
Not only that, but when developers' excuses for not shipping it are "Bitcode's gonna get the App Store hacked and my apps backdoored"... Apple will be all the more inclined to conclude they know better than most of their app developers.
Java-Objc bridge, requiring Xcode (people used to use Metrowerks and other tools). I don't do this for my day job so there are probably lots of others... something around ARC maybe?
OK, I thought you meant some kind of evil plan, lockdown, etc. imposed on people by Apple progressively. That's why I said we should exclude ARC and the transition to Swift. I meant, in general, that we shouldn't count "migration to newer technologies/frameworks" as such a thing.
If you meant, Apple imposing newer stuff to developers, then yes Apple does that.
Though, I'd say most of that is for the platform's good. Even stuff like deprecating Java -- it never got popular for OS X apps as Apple intended, the situation with Sun had changed, and keeping it going forward would be dead weight.
And Metrowerks might have been good for the early '00s, but not going forward with iOS, and multiple languages, and Storyboards, and all those things.
Still, the forced transition is a PITA for anyone who has had a successful app on the App Store for several years. Essentially, you are being forced to make a new app every other year, just to have an app, despite the fact that the old one(s) work perfectly well and everyone is happy with them. In my book, that is throwing money away - it has cost us half of what it costs to keep three other competing platforms afloat.
And I find it disheartening that most developers seem so accepting of unnecessary costs to their time and health. Much like the worker class before the unions - any work conditions go. I for one am pretty tired of years of iOS-update introduced incidents and overtime work to manage new products, while juggling undue maintenance on old products. I also find it disheartening developer companies don't take a good look at costs and demand improvements, instead of accepting a steady trickle of punishment for success.
If it is for the platform's good, I wish the platform ended so we can develop for platforms that deliver better working conditions and less motivated uncertainty, fear, and doubt.