
I think FPGAs (or CGRAs really) will make a comeback once LLMs can directly generate FPGA bitstreams.





No need. I gave ChatGPT this prompt: "Write a data mover in Xilinx HLS with Vitis flow that takes in a stream of bytes, swaps pairs of bytes, then streams the bytes out"

And it did a good job. The code it made probably works fine and will run on most Xilinx FPGAs.
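For reference, a minimal sketch of roughly what that prompt asks for (assuming Vitis HLS with hls::stream, an 8-bit AXI4-Stream interface via ap_axiu, and an even number of bytes per packet; the function name and pragmas are illustrative, not the model's actual output):

    // Hypothetical HLS kernel: swap adjacent bytes on an AXI4-Stream.
    // Assumes each packet carries an even number of bytes and ends with TLAST.
    #include "ap_axi_sdata.h"
    #include "hls_stream.h"

    typedef ap_axiu<8, 0, 0, 0> byte_beat;  // 8-bit data plus TLAST sideband

    void swap_pairs(hls::stream<byte_beat> &in, hls::stream<byte_beat> &out) {
    #pragma HLS INTERFACE axis port=in
    #pragma HLS INTERFACE axis port=out
    #pragma HLS INTERFACE ap_ctrl_none port=return

        bool last = false;
        while (!last) {
    #pragma HLS PIPELINE II=2
            byte_beat a = in.read();   // first byte of the pair
            byte_beat b = in.read();   // second byte of the pair
            last = b.last;

            // Emit the pair in swapped order, moving TLAST to the final beat.
            b.last = 0;
            a.last = last;
            out.write(b);
            out.write(a);
        }
    }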


> The code it made probably works fine

Solve your silicon verification workflow with this one weird trick: "looks good to me"!


It's how I saved cost and schedule on this project.

I don't even work in hardware, and yet I've still heard of the Pentium FDIV bug, which happened despite people looking a lot more closely than "probably works fine".

What does "directly generate FPGA bitstreams" mean?

Placement and routing is an NP-complete problem.


And I certainly can't imagine how a language model would be of any use here, in a problem which doesn't involve language.

They are "okay" at generating RTL, but are likely never going to be able to generate actual bitstreams without some classical implementation flow in there.

I think in theory, given terabytes of bitstreams, you might be able to get an LLM to output valid designs. Excepting hardened IP blocks, a bitstream is literally a sequence of SRAM configuration bits that set the routing tables and LUTs. Given the right kind of positional encoding, I think you could maybe get simple designs working at a small scale.
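To make the "just SRAM configuration bits" point concrete, here's a toy C++ model of a single LUT (names and layout are hypothetical, not any vendor's actual frame format): a k-input LUT is just 2^k stored bits forming a truth table, and the inputs simply index into it.

    #include <cstdint>
    #include <cstdio>

    // Toy model: a 4-input LUT is 16 SRAM configuration bits; the inputs
    // form an index that selects one of those bits as the output.
    static bool lut4(uint16_t config, unsigned a, unsigned b, unsigned c, unsigned d) {
        unsigned index = (d << 3) | (c << 2) | (b << 1) | a;
        return (config >> index) & 1u;
    }

    int main() {
        // "Program" the LUT as a 4-input XOR by writing its truth table:
        // bit i of the config word is the parity of the index i.
        uint16_t xor4 = 0;
        for (unsigned i = 0; i < 16; ++i) {
            unsigned parity = ((i >> 3) ^ (i >> 2) ^ (i >> 1) ^ i) & 1u;
            xor4 |= parity << i;
        }
        printf("xor4(1,0,1,1) = %d\n", lut4(xor4, 1, 0, 1, 1));  // prints 1
        return 0;
    }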

I'd expect a diffusion model to outperform autoregressive LLMs dramatically.

Certainly possible! Or perhaps a block diffusion + autoregressive hybrid, or something like GPT-4o's image gen.

AI could use EDA tools

AMD's FPGAs already come with AI engines.


