Hacker News

The security implications of new compiler optimizations potentially eliminating security-essential code really interest me. As do the plain old reliability implications of hitting new compiler bugs.

It's going to be really, really Fun with a capital PH when Apple enables some new optimization that contains a subtle bug, recompiles everybody's apps, and we third parties are left holding the bag trying to figure out why our apps are crashing, with no visibility whatsoever into the process.



Here's something to ponder: the security implications of a new CPU microarchitecture even running the exact same code. From generation to generation, various x86 instructions have varied in latency. Register renaming gets better. Incorrect hazards get eliminated. There's no reason to believe the same can't happen to ARM.

There's no guarantee an ARMv99 will run even identical code at a rate directly proportional to an ARMv7.


You're absolutely right, of course. Anything that depends on machine instructions taking a certain amount of time (timings that aren't guaranteed by the spec; I believe Intel's AES instructions, for example, are guaranteed not to have data-dependent timing) is potentially screwed on newer hardware.
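A minimal C sketch of the kind of code that breaks this way (function names are mine, for illustration). The early-exit comparison's running time depends on where the inputs first differ, so it can leak a secret byte-by-byte; the constant-time version avoids that, but only so long as the underlying instructions themselves have data-independent timing, which is exactly what a new microarchitecture can change:

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit comparison: running time reveals the index of the first
   mismatching byte, which an attacker can use to guess a secret
   one byte at a time. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}

/* Constant-time comparison: always touches every byte, so timing is
   independent of where (or whether) the inputs differ -- assuming the
   hardware's XOR/OR/load timings are themselves data-independent. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Note that nothing in the C standard guarantees either property; both depend on the compiler and the chip, which is the point of the comment above.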

Bitcode does add potentially new and exciting failure modes, though, since you might start failing even in the absence of new hardware, and you might encounter problems other than timing (e.g. failing to zero out sensitive data because dead store elimination got smarter, or buggier).
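The dead-store case mentioned above looks roughly like this in C (function names are mine, for illustration). A zeroing memset on a buffer that is never read again is a dead store, so a smarter optimizer is entitled to delete it; scrubbing through a volatile pointer is one common workaround, since volatile stores count as observable side effects:

```c
#include <stddef.h>
#include <string.h>

/* The memset here is a "dead store": key is never read afterward, so
   an optimizing compiler may legally delete the zeroing entirely,
   leaving the secret sitting in memory. */
void handle_secret(void) {
    char key[32];
    memset(key, 0x42, sizeof key);   /* stand-in for real key material */
    /* ... use key ... */
    memset(key, 0, sizeof key);      /* may be optimized away */
}

/* Workaround: write through a volatile pointer so the stores are
   observable side effects the optimizer must keep. (memset_s and
   explicit_bzero are purpose-built alternatives where available.) */
void secure_zero(void *p, size_t n) {
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}
```

With bitcode, the risk is that code which survived your compiler's dead-store elimination gets recompiled later by a smarter one, with no rebuild on your end.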


Even more fun when Apple recompiles the apps so that new users get one behavior but people who had already downloaded the supposedly-same app get quite another behavior. This takes all of the existing problems with non-reproducible builds and makes them worse.


How is this significantly different from shipping java/dalvik/whatever bytecode?


The real issue is who gets to sign the code. Formerly, you could sign the code yourself, so the cryptographic "chain of custody" between you and your users would be preserved. Now, since Apple generates the code that users see, you can't check and sign it. Apple can sign it, somebody who has compromised Apple's infrastructure can sign it, but you can't (unless you're the one who compromised their infrastructure). So, basically, this is a giant MITM just waiting to be exploited.


Apple already re-signs all apps sent through the App Store. That plus Fairplay encryption makes it unnecessarily difficult for end recipients to verify the chain of custody for the code.

I don't quite get why Xcode couldn't generate both the LLVM bitcode + the fat binary for all architectures, along with any flags. Then Apple could regenerate the same binary (to verify the two match) but use the bitcode for their program analysis tools. Then, the developer's signature could be left intact on the binary and Apple would add an additional signature over the whole thing that indicated it was approved for distribution via the App Store.


Because Apple explicitly wants to be able to regenerate binaries from the bitcode, which means that any locally-generated binary is meaningless. The claimed reason for this is to take advantage of new optimizations added to LLVM (which is a perfectly sensible thing to take advantage of), but probably the more useful reason is to take advantage of new microarchitectures that Apple introduces in new devices.


Yes, this is the real thing that should be debated -- where does the developer's responsibility end and the App Store's begin?

If I created an app, both distributed internally (say, to my corporate users) and via the App Store, but the App Store version has a bug because they're using a different ARM backend than my Xcode, that will be a real pain to debug.


Translating bitcode to binary would have to be done by an open-source tool (LLVM?), and the bitcode would have to be available to the end user. Then the end user could download the bitcode, download LLVM, translate the bitcode, and compare the result to the binary they got from the App Store. But it's unlikely to happen, because this feature would be hard to sell.


That's not new, though; Apple has been the one signing apps on the store since its launch.


True, but what is now a mutable choice becomes less so when the technology involved doesn't support any other option. Their approach also precludes a second signature embedded and checked within the app itself, independent of the app-store signature.


It's not significantly different, although iOS hasn't had bytecode before.

One thing that is different is that there's no transparency into the process. If your Java code is crashing, you can look at the code the JIT is generating. If your Java code is only crashing on certain hardware or OS revisions or whatever, you can see what's different about the JIT in those circumstances and at least make an attempt to investigate. With Apple's system, you have no idea what compiler version they're running, when they make changes, what machine code your users end up running, etc.


java/dalvik/whatever bytecode is simple, has a clear formal specification, and is not vulnerable to the same scale of "undefined behavior" traps that you'll find in C/C++ and LLVM IR.
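A classic example of the gap (this specific snippet is mine, not from the thread): in Java bytecode, int overflow wraps by specification, so an overflow check behaves the same on every JVM. In C, signed overflow is undefined behavior, so a compiler may assume `x + 1 > x` always holds and fold the check away; a newer optimizer that starts exploiting the UB can silently change behavior, which is exactly the trap LLVM IR inherits:

```c
#include <limits.h>

/* UB when x == INT_MAX: the compiler may assume signed overflow never
   happens and fold this whole function to "return 0". */
int will_overflow_naive(int x) {
    return x + 1 < x;
}

/* Well-defined version: compare against the limit before adding. */
int will_overflow_safe(int x) {
    return x == INT_MAX;
}
```

For in-range inputs both agree; at `INT_MAX` only the safe version has a guaranteed answer.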



