
It’s doable for them. For one thing, even their iPhone chips are currently nipping at the heels of Intel’s low-TDP mobile CPUs, exceeding some ULV parts in performance. For another, they could just push all the difficult stuff into specialized coprocessors. Wrap that into system frameworks and nobody will ever know.



I think they'll get within 5 to 10 percent of Intel by then, but that still means an x86 emulator starts out behind, even before you account for emulation overhead.

By comparison, the 68k emulator on PPC wasn't even a JIT. It just interpreted 68k machine code, and even then it was still 2x faster than a native 68k.
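To make the interpreter-vs-JIT distinction concrete, here's a minimal sketch of interpretation in the style described above: every guest instruction is fetched, decoded, and dispatched in software, so each guest op costs many host ops (the opcodes here are hypothetical, not real 68k encodings):

```python
def interpret(program, regs):
    """Run a toy instruction stream of (opcode, dst, src) tuples."""
    pc = 0
    while pc < len(program):
        op, dst, src = program[pc]   # fetch + decode in software
        if op == "MOVE":             # dispatch on opcode
            regs[dst] = regs[src]
        elif op == "ADD":
            regs[dst] += regs[src]
        elif op == "SUB":
            regs[dst] -= regs[src]
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1                      # advance guest program counter
    return regs

regs = interpret([("MOVE", "d0", "d1"), ("ADD", "d0", "d1")],
                 {"d0": 0, "d1": 21})
print(regs["d0"])  # 42
```

A JIT avoids re-paying this fetch/decode/dispatch cost on every execution by translating guest code to native host code once, which is why the interpreted 68k emulator being competitive at all says so much about the PPC's raw speed advantage.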

Moore's law really helped them with their previous transitions, but Moore ain't cashing checks like he used to.


I’m pretty sure Rosetta wasn’t an interpreter, and Intel chips were less than 2x the speed of PowerPC.

https://en.m.wikipedia.org/wiki/Rosetta_(software)


Their x86 cores were about 5x faster when you accounted for everything. And even though Rosetta was a JIT it ended up running a little slower than the PowerPC chips it was replacing.

So, starting with chips that, let's assume, are only about 90% as fast as the ones you're replacing in native code, you're now at well under half the performance of x86 while emulating. And that means half the battery life too.
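The arithmetic behind that claim can be sketched like so (the numbers are the thread's assumptions, not measurements):

```python
# Back-of-the-envelope: combine the native-speed deficit with
# an assumed emulation slowdown.
native_ratio = 0.90        # new chip at ~90% of x86 native speed (assumed)
emulation_slowdown = 2.0   # emulated code runs ~2x slower than native (assumed)

effective = native_ratio / emulation_slowdown
print(effective)  # 0.45 -> well under half of native x86 performance
```

Even a modest 2x emulation penalty is enough to land under the halfway mark; Rosetta-era slowdowns were often worse than that.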


You’re making the unwarranted assumption that they’ll actually emulate anything. The state of tooling is much better than it was back then, and this time there’s no 32-bit mode to worry about.

According to an AnandTech report from 2006, by the way, the real-world difference was far less than 5x. Indeed, Intel chips had substantially worse FP performance back then, something Apple used quite a bit in its graphics subsystem.




