
I can answer this, as an electrical engineer.

FPGAs are horrendously inefficient, space- and power-wise, for many tasks. Much of the die is taken up by the programmable routing between components, and most of the components themselves (such as LUTs and block RAM) are typically not fully utilized.

Yes, the Virtex-7 (a VERY EXPENSIVE FPGA) can hold 1000 very simple 32-bit cores... but a top-end NVIDIA GPU has 2688 CUDA cores. While not entirely independent, those CUDA cores have far deeper pipelines and far superior ALUs to the one in the article. If your software fits the programming model, a GPU will handily beat an FPGA.

FPGAs are great for prototyping ASICs, and for cases where timing is of critical importance - try implementing a VGA video generator on a CPU. Basically everywhere an FPGA excels, an ASIC excels more, but FPGAs win for low-volume or specialty hardware, where an ASIC isn't economical and a GPU isn't good at accelerating the task at hand.
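To make the VGA point concrete: the timing budget is tiny and unforgiving. A quick sketch using the standard 640x480@60Hz VGA timing parameters (the usual published values; the script itself is just illustrative arithmetic):

```python
# Why hard real-time video generation suits an FPGA: the pixel clock
# leaves no slack. Standard 640x480@60Hz VGA timing values.
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33
PIXEL_CLOCK_HZ = 25_175_000  # 25.175 MHz

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK   # 800 clocks per scanline
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK   # 525 lines per frame

ns_per_pixel = 1e9 / PIXEL_CLOCK_HZ
refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)

print(f"{ns_per_pixel:.1f} ns per pixel")   # ~39.7 ns deadline, every pixel, forever
print(f"{refresh_hz:.2f} Hz refresh")       # ~59.94 Hz
```

A CPU has to hit that ~40 ns deadline on every single pixel with no jitter, which is why bit-banging VGA from software is painful while an FPGA does it trivially with a counter and a comparator.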



> the Virtex-7 (a VERY EXPENSIVE FPGA) can hold 1000 very simple 32-bit cores... but a top-end NVIDIA GPU has 2688 CUDA cores.

But the point of an FPGA is not to build CPU cores inside it; the OP article is just an exercise. Application-specific logic built in an FPGA can be much more efficient than generic CUDA cores.

Also, it is a given that an ASIC will be more power-efficient than an FPGA, but the FPGA is generic and hence more cost-efficient at low volumes.
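The cost argument is really a break-even calculation: an ASIC's huge fixed NRE (masks, tooling) is amortized over volume, while an FPGA has almost no NRE but a much higher per-unit price. A sketch with entirely hypothetical dollar figures (none of these numbers are real quotes):

```python
# Illustrative FPGA-vs-ASIC break-even. All dollar figures below are
# made-up assumptions for the sake of the arithmetic, not real prices.
ASIC_NRE = 2_000_000      # assumed one-time mask/tooling cost
ASIC_UNIT = 10            # assumed per-chip cost
FPGA_NRE = 0              # no mask set needed
FPGA_UNIT = 150           # assumed per-device cost

def total_cost(nre, unit_cost, volume):
    """Total cost = fixed NRE plus per-unit cost times volume."""
    return nre + unit_cost * volume

# Volume at which the ASIC's NRE is paid back by its cheaper units:
break_even = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)
print(f"break-even at ~{break_even:,.0f} units")  # ~14,286 units under these assumptions
```

Below that (hypothetical) volume the FPGA is cheaper overall; above it the ASIC wins, which is exactly why FPGAs dominate low-volume and specialty hardware.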


Agreed with your points that FPGAs are great for low-volume prototyping/one-offs, interfacing, and when timing is of critical importance.




