
> it must also be possible to forward instructions to GPU/GPGPU in the future with little more effort.

Highly unlikely. On most (all?) high-end systems the GPU or other compute accelerator has its own memory, and is connected to the CPU via a teeny tiny straw (PCIe). The cost of shipping the data over and shipping the results back is astronomical. It's only worth paying if you can leave the data on the device between operations, or if the computation is so intense that the speedup more than covers the transfer overhead. But a "smart JIT" is unlikely to figure that out.
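To make that concrete, here's a back-of-envelope sketch of the break-even. All the bandwidth and throughput numbers below are illustrative assumptions (roughly PCIe 3.0 x16 and generic CPU/GPU FLOP rates), not measurements of any particular system:

```python
# Back-of-envelope: when does offloading a kernel to a discrete GPU pay off?
# All constants are illustrative assumptions, not measurements.

PCIE_BPS = 16e9      # assumed effective PCIe bandwidth, bytes/sec (~PCIe 3.0 x16)
CPU_FLOPS = 100e9    # assumed CPU throughput for this kernel, FLOP/s
GPU_FLOPS = 5000e9   # assumed GPU throughput for this kernel, FLOP/s

def offload_pays_off(n_bytes, flops_per_byte):
    """Compare CPU-only time against (copy over + GPU compute + copy back)."""
    total_flops = n_bytes * flops_per_byte
    cpu_time = total_flops / CPU_FLOPS
    gpu_time = 2 * n_bytes / PCIE_BPS + total_flops / GPU_FLOPS  # round trip
    return gpu_time < cpu_time

# Element-wise op (~1 FLOP/byte) on 1 GB: the copy dominates, CPU wins.
print(offload_pays_off(1e9, 1))     # False
# Matmul-like intensity (~1000 FLOP/byte): GPU wins despite the round trip.
print(offload_pays_off(1e9, 1000))  # True
```

The crossover depends entirely on arithmetic intensity (FLOPs per byte moved), which is exactly the kind of whole-program property a JIT looking at individual operations can't see.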

If we were talking about a homogeneous shared-memory system, like most mobile devices, then maybe. But there are still non-trivial costs & setup associated with that.




