Hacker News

Something I don't get about Microsoft not porting its suite to ARM when releasing the Surface Pro:

What's so hard about it? Your code is supposed to use something like the C stdlib, which has obviously been ported to ARM. So what makes it so much harder than just recompiling everything?

Once the OS is ported, the system libraries are available, and the programming language has a compiler for the target architecture, I don't understand what's blocking.




I've helped port a few open source systems to ARM. It's easy to write code which looks like C, but ends up being "x86ish C":

* Assumptions about type sizes, endianness, that sort of thing (endianness is particularly troublesome).

* The threading model is very different, and it's very easy to write a lot of code which works on the x86 threading model and then explodes in interesting ways when put on ARM.


And doesn’t ARM react quite badly to unaligned loads as well?

> endian is particularly troublesome

Arm is LE by default though.


ARM has a much weaker memory model than x86 for starters, so threaded programs which work on x86 (either accidentally or by specifically taking advantage of the memory model) might not on ARM.

I think unaligned accesses (e.g. packed structs) are also problematic: they're either super slow or they fault (which might lead to an OS routine emulating the access in software, which is even slower; if the OS doesn't emulate it, the program just crashes).


There are a bunch of things which can differ between architectures which can't be reliably abstracted away by the standard library.

Memory coherency is one. Check out the table of what reorderings processors are allowed to do:

https://en.wikipedia.org/wiki/Memory_ordering#In_symmetric_m...

x86(-64) is very conservative, doing very little reordering. ARM is much more liberal, doing lots of reordering. The standard library may give you tools to write code which is correct across all architectures, by defining some portable memory model and then implementing that model on each architecture (in C++, std::atomic does that). But it's easy not to use those tools properly: you can write code which is incorrect according to the standard library, which works on x86 because it is conservative, but which fails on ARM because it is liberal. Detecting bugs like that statically is an open research problem, and cross-platform concurrency bugs are among the hardest there are to debug.

There's more than just reorderings, too. A local wizard tells me "On ARM, you can have cores where there is no implicit "propagation"... i.e. if you write to some memory location from core "A", core "B" may never see those changes. You have to use synchronization primitives to make changes visible. This also means that the old dirty trick of volatile instead of proper synchronization can definitely fail here.". My guess would be that cores like that won't turn up on Apple laptops, but who knows.

Another example is vector operations. If you want to make use of explicit vector operations, rather than relying on compiler autovectorisation, you need to write code around the shapes and operations supported by the processor. Those are different on ARM and x86 - although I think the instruction set on ARM is better-designed than the one on x86, so at least it might be easier to port from x86 to ARM than vice versa. Still, here's a taste of the sort of work it takes to make vector code work properly even on different ARM chips:

https://www.cnx-software.com/2017/08/07/how-arm-nerfed-neon-...


Would it be possible to somehow force the threads of an "emulated" application to run on the same ARM core, to simulate the more conservative memory model? Or would it be possible to somehow detect and guard memory that's shared across multiple threads?


You truly underestimate how many things still use raw assembly here and there. Intel provides a library called libhoudini for this very problem. Also, what would you have end users do? Somehow recompile arcane sourceless binaries to ARM?


The Office source code has been ported to ARM multiple times. We're running that code in the iOS and Android ports of Office.


Yes, that's the only explanation that makes sense, but still, it surprised me a lot that MS Office relies so heavily on assembly that recompiling to a different target arch isn't an option.


MS Office is ported, but it's a hybrid (CHPE). The executable looks to be x86 when examined in Properties, but if you look at the contents of the file on disk you'll find Arm code as well as x86.

MS did it this way so that any x86 extensions can continue to be run under emulation whilst the bulk of the application can run natively, according to: https://uk.pcmag.com/news-analysis/92340/microsoft-explains-...


Drivers on devices that get plugged in maybe.


It could be due to the fact that the Surface devices and Office are developed by two different divisions at Microsoft. The group developing Office may not have a huge incentive to create an ARM version just because another division decides to use an ARM processor in a new device.

Also, think about antitrust! If Microsoft developed an ARM version of Office in secrecy for the Surface launch, it wouldn't give Office competitors a fair chance to do the same.



