Something that's not really spoken to here, but is just as important, is that FPGA development isn't software development. You're specifying hardware, and usually that hardware doesn't just scale to new platforms.

Time will tell if open source can make the transition over to that domain. I'm cautiously optimistic, but I think there's a larger cultural divide there than the article calls out.




FPGAs are sexy to many people because they are exotic. Imagine a custom chip designed to do whatever you want, reconfigurable on the fly!

The problem is that many aspiring learners want to approach it like a software problem. That can be a valid approach as long as they are willing to learn new abstractions. Many are not and then get frustrated and blame the tool ecosystem. Granted, the tool ecosystem does suck, but people (like me and others) are doing real work with these tools. Even if you scoff at FPGA tool quality, you have to remember that almost every ASIC out there was developed with similar (maybe slightly better) quality tools.

There has to be flexibility on all sides. Tool vendors have to adapt to changing times (more open ecosystems) and software developers branching out into FPGAs have to be willing to learn how hardware works.


> The problem is that many aspiring learners want to approach it like a software problem.

That is precisely why I made Blinklight - it's an educational platform for starting at the very bottom.


Maybe it would be better to use the highest-abstraction tools possible (Chisel? Maybe DSLs for hardware generation and verification)?

Because compared to, let's say, software development or embedded systems development, real chip design, and especially the tons of verification you need, is boring.


Look up Synflow's language for HLS. It's C-based and open source.


I don't want to diminish what they and similar projects do, and admittedly I'm not really familiar with FPGA development, but I have one thought: what if we don't need one more language, but rather something at a different level of abstraction?

The first neural networks that ran on GPUs were written using low-level GPU primitives [1]. This was a non-trivial process that required a lot of low-level work: it took systems-programming skills and time to implement new architectures. But a group of researchers at the University of Montreal developed Theano [2], a framework that lets you define computational graphs in Python and then compile them to CUDA code that can be executed on a GPU. Instead of spending resources on developing a new language, they put their effort into thinking out useful abstractions and implementing a compiler that works efficiently. Notably, they didn't include very high-level abstractions in Theano either; libraries like Lasagne [3] and Keras [4] introduced those (neural network layers and pluggable pre-implemented models) on top of Theano. It is safe to say that Theano boosted deep learning research by making the programming of new neural network architectures quicker and more accessible.
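
To make that concrete, the Theano workflow goes roughly like this (a from-memory sketch of the API, so details may be off):

    import numpy as np
    import theano
    import theano.tensor as T

    # Define a symbolic graph: a single logistic unit over a 3-element input.
    x = T.dvector('x')
    w = theano.shared(np.zeros(3), name='w')
    y = T.nnet.sigmoid(T.dot(x, w))

    # Compiling the graph generates optimized native (optionally CUDA) code.
    predict = theano.function([x], y)
    print(predict(np.array([1.0, 2.0, 3.0])))  # 0.5 with zero weights

No new syntax and no new toolchain: the "language" is just Python objects describing a graph, and the compiler does the hard work.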

What if we actually need just the same thing for FPGAs? Just a Python library that defines useful abstractions for building logic circuits, allows constructing arbitrary graphs with them, and then compiles these graphs to VHDL. Assuming the basic building blocks defined in Python are well designed and tested, it would be easy to implement verification and some testing in pure Python, and tooling like visualizing logic diagrams could be implemented in pure Python too.

In addition to the reduced development effort (because you don't need to design and implement a new language), it would be easier for software programmers to pick up: they would not need to learn new syntax, and could concentrate on core concepts like gates, adders, and the graphs involving them. It wouldn't be necessary to develop special-purpose editors, because it's just Python, and because tests for the resulting designs could be written and run in pure Python, it would be possible to use standard CI tools like Travis for open-source development.

Edit: It seems there is already a project, MyHDL [5], that does something very close to what I described above.
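
From a quick look at the MyHDL docs, a minimal design seems to go something like this (an untested sketch, so take the details with a grain of salt):

    from myhdl import block, always_comb, Signal, intbv

    @block
    def adder(a, b, s):
        # Combinational logic: s tracks a + b whenever an input changes.
        @always_comb
        def logic():
            s.next = a + b
        return logic

    # 8-bit inputs, 9-bit output to hold the carry.
    a, b = Signal(intbv(0)[8:]), Signal(intbv(0)[8:])
    s = Signal(intbv(0)[9:])

    inst = adder(a, b, s)
    inst.convert(hdl='VHDL')  # writes adder.vhd next to the script

The design is an ordinary Python object, so it can be simulated and unit-tested with regular Python tooling before any vendor software gets involved.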

[1] https://hal.inria.fr/inria-00112631/en/

[2] https://github.com/Theano/Theano

[3] https://github.com/Lasagne/Lasagne

[4] https://github.com/fchollet/keras

[5] http://www.myhdl.org/


I'm one of the core devs of MyHDL (https://github.com/myhdl/myhdl). It does most of what you're describing and converts to both Verilog and VHDL.

I'll be happy to answer any questions you have.



Ha! Looking at their website, I realized a toy project I just did for a hiring process was actually designed to produce simulatable inputs to Clash!


You didn't want another language, but then mention a complicated pile of languages and libraries? And people doing high-level synthesis weren't putting "effort into thinking out useful abstractions and implementing a compiler that works efficiently"? That's the opposite of true, and condescending to the HLS and HDL fields, where building useful, efficient hardware abstractions that are easy for developers has an abysmal success rate despite the brainpower and money invested. Let's look into your recommendation, though.

" Just a Python library that defines useful abstractions for logic circuits building, allows to construct arbitrary graphs using them, and then compile these graphs to VHDL. "

Because you're really describing either an enhanced HDL like MyHDL or a high-level synthesis tool. The first requires people to learn hardware to use it right. That's hard for developers, judging by their online comments. If they do learn it, though, they can make some pretty efficient designs. The second has been shown to be easier for developers, since it can be close to their way of thinking. However, turning high-level abstractions into efficient low-level code is more like automatic programming than regular compilation, given that every step is NP-hard with tons of constraint combinations. As in software, automatic programming never really happened: anything doing synthesis usually performs worse than hand-made designs on numerous attributes. This can be significant, affecting, say, the clock rate. So CompSci is investing in both directions, with numerous HDLs and HLS tools made. For HDLs, Bluespec, Chisel, and MyHDL are probably the most successful, since they help hardware people handle hardware better. For HLS, I'm not sure, since they don't disclose their numbers. ;) Here's a list of them, though. Bluespec is on the same list, so maybe its features fit multiple categories. (shrug)

https://en.wikipedia.org/wiki/High-level_synthesis

Synflow's people were doing HLS research at a university on an approach ironically more similar to what you brought up than most HLS; it had a parallel focus. It must not have gone anywhere, or it got stuck in corporate lock-in. So they went the other route and built something easy for developers, then open-sourced the compiler and IDE extensions along with some cheap IP and a cheap board. So when a developer asks about doing FPGA work, I point them to easy-to-learn tools like that, or NAND to Tetris if they want the hard route.

Note: There's also commercial support for OpenCL on FPGAs from Altera, and maybe others, plus CUDA-to-FPGA work. CompSci isn't being narrow: they've been hitting every idea they can think of, and the low adoption you see is because almost none of it works well enough. They keep trying, though.

http://cadlab.cs.ucla.edu/~cong/papers/FCUDA_SASP09_CR.pdf


The culture clash between the traditional VLSI mindset and programmers may be avoided... all we need is a REPL :)


It can and it can't. It's sort of like microcontroller code. You can define a core set of functions (like an FFT or something) that is very portable, but the peripheral mapping (which pins to output on, where resources are located, etc.) is chip-specific.


Yeah, even things like block RAM, DSP units, carry chains, routing, and other parts vary widely by chip and vendor.


I don't disagree. I'm reasonably optimistic that current and near-future FPGAs are overkill enough for the job of motherboard chipset and basic peripherals that some basic portability abstractions can be put in place (e.g. ASM -> C) without causing such a performance hit that it's no longer viable.


I think it will still be a very uphill battle to get such chips integrated anywhere for some very basic reasons: power and heat.

All those extra transistors that allow FPGAs to be reprogrammable also dissipate a lot of heat and use a lot of power (or did back when I was mounting massive heatsinks on custom networking FPGAs).

For a laptop or phone manufacturer, if the choice is between an ASIC and an FPGA that consumes 10x the power, it is an easy choice. It's not just dollar cost, but power and heating budgets.

In general I love the idea.


Things have been changing - they put FPGAs in phones now:

https://www.ifixit.com/Teardown/Samsung+Galaxy+S5+Teardown/2...

(search for "FPGA" in the page).

That's a tiny one, granted, but things are certainly getting better in that regard.


CPLDs have been pretty commonplace in complex embedded system designs for a while. They're great for consolidating a bunch of little logic bits into a single package, and they let you build some control logic in code rather than iterating hardware when you need to make small changes. There are huge advantages for a system like a phone, where you are extremely space-constrained, and also with quick-turn engineering cycles where the hardware can't go through six revisions before being completed. Throw in a CPLD or a low-power FPGA like the Lattice iCE40 series and let the HDL do the rest!
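
For a feel of the kind of glue that gets absorbed, here's a hypothetical chip-select decoder sketched in MyHDL (the names and address map are made up for illustration):

    from myhdl import block, always_comb, Signal, intbv

    @block
    def glue(addr, rd, cs_flash, cs_sram, cs_uart):
        # Address-decode glue: the sort of thing that used to take
        # a few discrete decoder/gate packages on the board.
        @always_comb
        def decode():
            page = addr[16:12]
            cs_flash.next = bool(rd and page == 0x0)
            cs_sram.next = bool(rd and page == 0x1)
            cs_uart.next = bool(rd and page == 0xF)
        return decode

Need to move a chip select? Edit three lines and reflash, instead of respinning the board.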


I've also seen CPLDs used to recover from board layout mistakes.


That's cool to see, although given the exploding Samsung Galaxy phone situation (probably completely unrelated) I'm not sure their engineering of power consumption/cooling is the best thing to cite. :)


A defective battery is about as far as you can get from chip design.


Maybe we could have a hardware interface designed to be open?


For on-chip interfaces, AMBA [1] and Wishbone [2] are open standards for connecting the various sections of a chip: bridging between the CPU, I/O blocks, etc. External interfaces are also pretty open; a lot of FPGAs have dedicated logic for talking to a PCI bus, I2C bus, etc. Unless you're doing something extremely interesting and specialized, hardware is pretty open as is. For a sense of how small a Wishbone peripheral can be, see the sketch after the links.

[1] https://en.wikipedia.org/wiki/Advanced_Microcontroller_Bus_A...

[2] https://en.wikipedia.org/wiki/Wishbone_(computer_bus)
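
A minimal sketch of a Wishbone-style read/write register, written in MyHDL for illustration (untested; port names loosely follow the Wishbone classic cycle):

    from myhdl import block, always, Signal, intbv

    @block
    def wb_reg(clk, dat_i, dat_o, we, stb, cyc, ack):
        # One 32-bit register: ack a valid strobe one clock later,
        # latch dat_i on writes, and always drive the register out.
        reg = Signal(intbv(0)[32:])

        @always(clk.posedge)
        def bus():
            ack.next = bool(cyc and stb and not ack)
            if cyc and stb and we:
                reg.next = dat_i
            dat_o.next = reg

        return bus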



