Hacker News

My armchair amateur brain immediately thought about something CUDA-like.


FPGA code takes hours to compile, yet bitstreams are product/model specific


You're not wrong, but I expect they'd make the various models similar enough (at least within a given CPU generation) that you could use mostly precompiled artifacts instead of re-routing everything from scratch.

I've always been pretty skeptical of their approach, though: to be usable, the feature would need excellent tooling behind it, and if there's one thing existing FPGA software isn't, it's "excellent".

Getting FPGAs to perform well is often more art than science ("hey guys, let's try a different seed and see if we get better timings"), so the idea that non-hardware people would routinely generate FPGA bitstreams for their projects is so implausible it's almost comical to me.
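To illustrate the seed lottery: a minimal sketch of the kind of wrapper hardware teams end up writing around place-and-route. Here `run_pnr` is a hypothetical stand-in (a real flow would shell out to a tool such as nextpnr, which does expose a `--seed` flag, and parse its timing report); the timing numbers are simulated, not real.

```python
import random

def run_pnr(design: str, seed: int) -> float:
    """Hypothetical place-and-route run; returns the maximum achievable
    clock in MHz. Simulated here with seeded jitter -- a real flow would
    invoke the vendor tool and parse its timing report."""
    rng = random.Random(hash((design, seed)))
    return 90.0 + rng.uniform(-10.0, 10.0)  # MHz varies seed to seed

def sweep_seeds(design: str, target_mhz: float, seeds: range):
    """Try every seed, keep the best result, and report whether any
    run happened to meet the timing target."""
    best = max(((s, run_pnr(design, s)) for s in seeds),
               key=lambda pair: pair[1])
    return best, best[1] >= target_mhz

(best_seed, best_mhz), met = sweep_seeds("my_design", 95.0, range(16))
print(f"best seed {best_seed}: {best_mhz:.1f} MHz, timing met: {met}")
```

The point of the sketch is the workflow, not the numbers: nothing about the design changes between runs, only the seed, which is exactly the "art not science" part being complained about.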

Maybe one day we'll have a GCC/LLVM for FPGAs and it'll be a different story.


Beyond a GCC/LLVM, you also really need a standard library, and nobody is talking about that. Today, if you want the equivalent of a std::map on an FPGA, you either pay $100k or build it yourself. That's untenable.


You would use precompiled modules, or compositions of those modules (pipelined or parallel).

That can be a relatively fast operation: seconds or less, depending on complexity.



