
> These increasingly parallel workloads are likely another reason that the more complex front-ends needed for Intel's instruction set, as well as its stricter memory ordering, are becoming problematic; it's getting harder to fit more cores and threads into the same area, thermal, and power envelopes. Sure, they can do it on big, power-hungry server processors, but they've been missing out on all of the growth in mobile and embedded processors, which are now starting to scale up into laptops, desktops, and server workloads.
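
To make the "stricter memory ordering" point concrete, here is a minimal C++ sketch (not from the post; names and values are illustrative) of a release/acquire handoff. On x86-64 both of the marked operations compile down to plain MOVs, because the ISA already promises total store order, whereas AArch64 uses dedicated stlr/ldar instructions and leaves ordinary accesses free to be reordered:

    // Sketch: the same C++ release/acquire pair costs different amounts
    // on x86 and ARM because x86's memory model (TSO) is stricter.
    //
    //   producer(): the release store is a plain MOV on x86-64,
    //               but a store-release (stlr) on AArch64.
    //   consumer(): the acquire load is a plain MOV on x86-64,
    //               but a load-acquire (ldar) on AArch64.
    //
    // The quoted argument is that x86 hardware must preserve that stricter
    // ordering for every store, not just the ones marked here.
    #include <atomic>

    std::atomic<int> data{0};
    std::atomic<bool> ready{false};

    void producer() {
        data.store(42, std::memory_order_relaxed);
        ready.store(true, std::memory_order_release);    // publish
    }

    int consumer() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        return data.load(std::memory_order_relaxed);     // guaranteed to see 42
    }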

Except ARM CPUs aren't any more parallel than x86 CPUs in comparable power envelopes, and x86 doesn't seem to have any issue hitting large core counts either. Most consumer software doesn't scale worth a damn, though, particularly ~every web app, which can't scale past 2 cores if it can even scale past 1.
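
For a sense of why "2 cores" is a plausible ceiling: Amdahl's law says that if only about half of a program's work can run in parallel, no number of cores gets you past 2x. A back-of-the-envelope sketch, assuming a parallel fraction of 0.5 (illustrative, not a measurement):

    // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
    // where p is the parallel fraction and n the core count.
    // With p = 0.5, even infinite cores cap out at 2x.
    #include <cstdio>

    int main() {
        const double p = 0.5;                    // assumed parallel fraction
        const int cores[] = {1, 2, 4, 8, 64};
        for (int n : cores) {
            double speedup = 1.0 / ((1.0 - p) + p / n);
            std::printf("%2d cores -> %.2fx\n", n, speedup);
        }
        // prints roughly: 1.00x, 1.33x, 1.60x, 1.78x, 1.97x
    }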




Parallelism isn't a good idea when scaling down, and often neither is concurrency. Going faster is still a good idea on phones (running the CPU at a higher speed uses less battery, because it can turn off sooner), but once you count background services there is typically less than one core free, threading and asyncing add overhead, and your program will often go faster if you take most of it out.
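
A rough sketch of that overhead argument, with hypothetical sizes and no real measurements: for a small enough piece of work, handing half of it to another thread costs more than just doing it.

    // Sketch: summing a small array serially vs. fanning half of it out
    // to std::async. For small inputs the serial loop usually wins,
    // because creating and joining a thread (plus the cache traffic
    // between cores) is not free.
    #include <future>
    #include <numeric>
    #include <vector>

    long long sum_serial(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0LL);
    }

    long long sum_parallel(const std::vector<int>& v) {
        auto mid = v.begin() + v.size() / 2;
        auto lo = std::async(std::launch::async, [&] {
            return std::accumulate(v.begin(), mid, 0LL);
        });
        long long hi = std::accumulate(mid, v.end(), 0LL);
        return lo.get() + hi;
    }

    int main() {
        std::vector<int> v(10'000, 1);   // small workload
        // On a loaded phone core, sum_serial() typically finishes before
        // sum_parallel() has even launched its worker thread.
        return sum_serial(v) == sum_parallel(v) ? 0 : 1;
    }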





