Ask HN: How to get started developing with FPGA?
247 points by meifun on Oct 28, 2018 | 84 comments
I've been doing a lot of GPU programming these last few years. Now FPGAs are being used to accelerate tasks as well (algo trading, etc.).

Recommendations on how to get started with FPGA development? What cards are good? Do they make external cards to plug into a laptop like eGPUs? Popular SDKs?




1) FPGAs are used for low-level stuff (high-speed digital logic) and high-level stuff (accelerating algorithms). It sounds like you're only interested in the latter. You can ignore any tutorials that go into Verilog/VHDL or low-level logic design.

2) The two major FPGA manufacturers are Intel and Xilinx. Each has its own tools, and the tools are very different. You should pick one (flip a coin) and stick with it, at least starting out.

3) FPGA manufacturers typically have a free tier of their tools, and a paid tier. The paid tier is usually thousands of dollars. You'll only need the paid tier if you're using a high-end FPGA (those are usually also very expensive - thousands of dollars for an evaluation board).

4) Buy a popular FPGA board in the $100-$1000 range. I recommend something from Terasic (Intel FPGA boards) or Digilent (Xilinx FPGA boards).

5) Accept that you won't have a high-speed connection to a PC. Inexpensive FPGA boards normally only have gigabit ethernet, at best. You can get a PCIe FPGA board, or connect a USB 3.0 PHY via FMC connector, but you're looking at more money invested, and you'd have to learn a lot more about the inner workings of the FPGA.

6) The tools you're looking for are the Intel HLS Compiler (part of Quartus Prime) or Xilinx Vivado HLS.

7) Look into OpenCL for Intel and Xilinx FPGAs. This is the ultimate goal you're trying to attain - getting arbitrary algorithms optimized for FPGA fabric. Good luck!


FWIW: the latency gains that usually benefit HFT or algorithm design aren't, for the most part, realized using off-the-shelf toolkits.

Unfortunately, the tooling and hardware aren't currently quite at the point where HLS languages, or OpenCL for that matter, have had a substantial impact on real-world problems, especially if you're targeting a low-tier FPGA. This is especially true when implementing interfaces to PHYs or other IP.

I think part of the problem is that successful HDL design requires a fundamentally different approach to program architecture - especially when it comes to closing timing - and there's a fairly steep learning curve to getting everything set up.

I appreciate the idea of skipping Verilog or low-level logic introductions, but it seems to me that's asking for substantial problems later on when you actually want to do something practical.

Then again, I can't really think of a better (i.e. easier) learning pathway than what you mentioned.


It might be true that latency can't be improved much by using an off-the-shelf FPGA, but performance in parallel processing tasks can. For instance, an FPGA might be cheaper than a GPU (in relative performance/$) for machine learning implementations, because the FPGA architecture can allow for higher throughput and fewer bottlenecks.


Great list. The only thing I would add is: make sure you have a Windows box. Some tools work on Linux/Mac, but in general the tools will be easier to set up and use on Windows.

A fun FPGA project which doesn't require too much skill/knowledge is to implement a VGA graphics card.
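If you go the VGA route, the heart of it is just two counters and a couple of comparators. Here's a minimal, hedged Verilog sketch of the sync generator for the standard 640x480@60Hz mode (module and port names are mine; it assumes a ~25 MHz pixel clock is already available):

    module vga_sync (
        input  wire       clk_25mhz,     // ~25.175 MHz pixel clock
        output wire       hsync, vsync,  // active-low sync pulses
        output wire       visible,       // high inside the 640x480 region
        output reg  [9:0] hcount = 0,    // 0..799 (800 clocks per line)
        output reg  [9:0] vcount = 0     // 0..524 (525 lines per frame)
    );
        always @(posedge clk_25mhz) begin
            if (hcount == 10'd799) begin
                hcount <= 0;
                vcount <= (vcount == 10'd524) ? 10'd0 : vcount + 1'b1;
            end else begin
                hcount <= hcount + 1'b1;
            end
        end
        assign hsync   = ~(hcount >= 656 && hcount < 752);  // 96-pixel pulse
        assign vsync   = ~(vcount >= 490 && vcount < 492);  // 2-line pulse
        assign visible = (hcount < 640) && (vcount < 480);
    endmodule

Drive the RGB pins with whatever you like while 'visible' is high and you have a picture.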

An Ethernet card might also be an interesting first project, albeit a bit harder.


Both the Altera & Xilinx tools seem to work well in a Linux (Ubuntu) VM on my Mac (at least at the tier where I can afford FPGA hardware).

For Lattice, the open-source Project IceStorm is a good way to go.


> ignore any tutorials that go into Verilog/VHDL or low-level logic design

I'm slightly surprised by this - are there really toolchains/kits that are turnkey enough that you can drop a compute kernel and some kind of high speed comms onto the board without having to do any HDL?


The Intel OpenCL SDK will allow you to get designs from code to running in hardware with no HDL. The problems you'll face won't be HDL related, they'll be crappy toolchain related.

Having said that, if you don't know what your design would look like in HDL then your performance will never be good enough to justify doing it on FPGA in the first place. Even if you do the performance will likely not be as good as GPU and probably an order of magnitude worse than an HDL implementation.


Hi, professional FPGA developer here. A couple of things I would advise:

Firstly, development kits are great - they provide the hardware you need and all of the IP to drive that hardware. Grab a dev kit - there are lots of other posts advising you which one. The IP that actually interacts with the real world is by far the most difficult bit.

Secondly, 99% of hardware design is simply knowing what actually maps to hardware. It's important to understand what hardware languages target: FPGAs, ASICs, simulators. What can be coherently written in VHDL/Verilog won't necessarily work if it doesn't map to the hardware platform you're writing for. So you need to actually understand the structures of an FPGA - LUTs, registers, DSPs and memories. Once you know what you're writing, you're 90% of the way there. Real FPGA devs spend months in simulators before they get to actual hardware test. The simulator will allow you to do things that the compiler will warn you about and the hardware will simply do wrong. Each tool (ModelSim, Vivado, Quartus, VCS etc.) will have a unique interpretation of the language you're using.
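To make the "maps to hardware" point concrete, here's a hedged illustration (names made up). Both snippets are legal Verilog and both run in a simulator, but only the second describes something an FPGA can be:

    // Simulation-only: '#10' means "wait 10 simulation time units".
    // Nothing in the FPGA fabric implements a bare time delay, so
    // synthesis ignores or rejects this even though simulators run it.
    //
    //   initial begin
    //       data = 0;
    //       #10 data = 1;
    //   end

    // Synthesizable: "update on the clock edge" maps directly to a
    // flip-flop, one of the primitives the fabric actually has.
    module dff_example (
        input  wire clk, rst, next_data,
        output reg  data
    );
        always @(posedge clk) begin
            if (rst) data <= 1'b0;
            else     data <= next_data;
        end
    endmodule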

Finally, in terms of tools, use whatever comes with your dev kit, but in terms of simulators Modelsim is industry standard and Intel (Altera) ships a free version.

I don't really have advice for learning material - I learnt at uni.


It’s so surreal that people in the FPGA industry refer to libraries/components as “Intellectual Property”.


I think it's because so many of them are sold/licensed that way. You buy an Arm core, an Ethernet, USB or BT interface, etc. Often you do it from your EDA (electronic design automation) vendor so that it fits into your design/test/layout flow. Even the ones you don't pay for often can't be redistributed. Lots of them are literally black box or "hard" IPs that you can't change or test except through defined interfaces... but they often have been validated much more than anything you can write and may be much smaller or lower power than you would naively write.


I would attribute the weird FPGA dev attitudes to Stockholm syndrome. I always get the most mind boggling replies in defense of the evil vendors when I criticize FPGA’s embarrassingly small amount of freedom.

> [black box components] often have been validated much more than anything you can write and may be much smaller or lower power than you would naively write.

Sure but if an entire community was validating and improving these common components, they’d be way better still. (See gcc/llvm, linux, git, etc).


Hardware validation, like hardware startups, is much harder to do. With software, all you need is a computer, time, and will. With hardware, your toolchains have a material cost, a calibration cost, shipping costs and shipping schedules, in addition to a computer, time, and will. If you screw up your equipment or prototype, that's at least another week or month of waiting for replacement parts.

Software is like music: as long as the instrument works, all you need is time and will to develop/create.


Yup. If you have an engineer who's paid $100,000 a year with three weeks vacation, that's $407.05 a day. It starts to make sense that you'd buy $4000 IP for an Ethernet core, unless by some ungodly powers they can write that in less than 10 days.


If you have an engineer who's paid $100,000 a year with three weeks vacation, that's $407.05 a day. It starts to make sense that you'd buy $4000 IP for an operating system, unless by some ungodly powers they can write that in less than 10 days.


I see what you're trying to say, but the analogy doesn't exactly work. Everyone still uses COTS operating systems because the cost of building one from scratch is so high. We're lucky that the market provides multiple free options, but if it didn't, the rational business move for most projects would be to license one.


It's also worth noting that there are clear examples of closed-source software that essentially acted like malware. I've not heard of any such examples with IP cores.


This is ignoring the major part of hardware development that is actually just non-free software tooling.


This whole thread is, frankly, so dumb. One of my best friends is a SoC designer at Apple and we talk about this stuff all the time.

First thing people need to understand: this whole world operates at scale. You don't bother making an ASIC unless you really need one -- you're either making millions/hundreds of millions of them, or the individual application is so high-value that the hassle, complexity, and expense of moving off a general-purpose CPU onto something completely custom is worth it.

Second, 4K is chump change. You need an entire team of people to do anything meaningful with FPGAs/ASICs. Less than that, just buy a $100 CPU that will do more, be immediately available, and have an entire software stack (OS, user-space programs, compilers, etc) for it. And teams that know how to build this stuff well cost MONEY. A lot of it. Not $100K/year. More like $250-300K minimum per person at Silicon Valley rates.

I get that the OP wants to do this as a sort of hobbyist undertaking; just realize he's cutting against the grain of the entire ecosystem. Everything about this stuff is made for huge companies to do massive projects that will ship at gargantuan scale, not hobbyists in a garage.

The major part of hardware development is NOT tooling. It is manufacturing the hardware.


I disagree with this. You don't necessarily need a whole team of people and massive amounts of cash to do FPGA development, and you don't necessarily need expensive tools. For my current company I created a complete FPGA-based trading system from scratch on my own with free tools (apart from Vivado, which I just used to turn my RTL into an actual design I could put onto the FPGA board). The board I used cost around £2k and the Vivado tools were £4k (although if I were going to do it again, it appears you can just pay for your usage of Vivado in the cloud - Nimbix has machines with the Vivado suite on them). The cost to the company for this is pretty much my salary + the board costs.


> this whole world operates at scale. You don't bother making an ASIC unless you really need one

You are the first person in this thread about FPGAs to mention making an ASIC.


His point still stands, as all the tooling and IPs are the same for FPGA and ASIC, and ASIC drives development of new IPs and tools. FPGAs are still the minor market here, by far.


> all the tooling and IPs are the same for FPGA and ASIC

They're really, really not.


The place and route tools are different, and IPs might use different memory and multiplier macros depending on technology. SerDes transceivers will also be different. Ideally this is all hidden from the integrator by a parameter selecting the technology.

Linting tools, simulation tools, formal verification and synthesis tools are all the same. Verification methodology is also the same.


Nice perspective taking. To the OP's defense, he was wondering about FPGAs for hobby endeavors.


Tooling is a blip on the radar of the cost of the designs I work on. Verification is usually the most expensive aspect - easily 10x tooling, even if the tools were purchased for that project alone, which would be very unusual.

However, for open source to flourish, high-quality open-source simulators are required. I think if we had that, synthesis and place & route would follow. You don't need synthesis or PAR to do design and verification, but you do need simulation.

I look into the state of open-source simulators periodically. Some are impressive, but they mostly just handle the design basics... an open-source SystemVerilog simulator, static and dynamic, would be a game changer.

It's a Herculean task though. I don't know how much Xilinx makes off tooling. I have to think they would make more from FPGA sales if tooling were free, and if it were open source, it would no doubt be improved upon by degrees I can only fantasize about.

Simulators, and by extension synthesis and place & route tools, are complementary goods to FPGA hardware. Make your complementary goods cheap, and you make a lot of money.

But there are tool companies out there such as Mentor and Synopsys that make a lot of money from such proprietary tooling. I imagine simulators have a tough patent field to navigate.


> when I criticize FPGA’s embarrassingly small amount of freedom.

It takes a critical mass of knowledgeable people who don't necessarily want to be compensated monetarily for their efforts for this to happen (think early GCC and Linux teams). It took a few decades of largely unpaid effort before GCC and Linux convincingly took over as the leading compiler and OS respectively, and people started being paid to work on them. To achieve the end you have in mind, you will need to start building that community.


> I would attribute the weird FPGA dev attitudes to Stockholm syndrome. I always get the most mind boggling replies in defense of the evil vendors when I criticize FPGA’s embarrassingly small amount of freedom.

I'm not sure you understand FPGA development. Until FPGA manufacturers do something like software vendors where they bundle actual malware with the closed-source hardware they sell, it doesn't really bother me.

Personally, I kind of prefer having detailed documentation of a black box rather than trying to read source code. The latter is all too common in software development.


I have multiple FPGA designs in use today. What does or doesn’t bother you is irrelevant.

I don’t really care to explain the obvious net win that open source designs (and tools) are for society. The only argument I’d consider reasonable is that the FPGA/hardware world is too small to justify the initial investment.


You are absolutely correct. Keep fighting the good fight :-)


Writing robust cores is usually substantially harder than writing the equivalent software library. You usually have to design the logic to satisfy several different criteria (area, performance, or power) and, especially when implementing high-speed designs, take the targeted device into account.

Unfortunately, the number of people who are competent to design, test, and implement these cores is fairly small, and usually well funded. Oftentimes, these cores represent very non-trivial time savings, especially when it comes to characterizing performance and closing timing, so there is tremendous value in the marketing and selling of these cores.

On the other hand, sites like OpenCores are great for finding useful IP that people have contributed, which may be adapted to your device.


Not quite the response you wanted, but I just wanted to mention Project Oberon (http://www.projectoberon.com) by Niklaus Wirth. In some 300 pages, starting with an FPGA, you implement a CPU, a language and a compiler targeting that CPU, and an OS which includes a GUI and a networking stack.


Thanks for mentioning this, it looks very interesting.


I'd highly recommend starting with the TinyFPGA BX and Project IceStorm:

https://tinyfpga.com/bx/guide.html

I used the Lattice iCEstick ($20 board) and IceStorm (quick to install, open-source toolchain) to get started a few years back, and the community around the toolchain has really come a long way since (new boards and great documentation). Check it out : )


This is the board I used for two courses in college and I thought it was pretty good. I wasn't limited at all, but my designs weren't anything too complicated.

https://store.digilentinc.com/nexys-4-artix-7-fpga-trainer-b...

In my reconfigurable computing course we used the Xilinx Zybo board, which has an SoC on it that includes two Arm cores and an FPGA (I think the new Xilinx boards also have a GPU). This was really cool, but I wouldn't recommend starting with it, since dynamic reconfiguration has a big learning curve in itself.

I actually don't know what a good way to learn is, since I just followed my professor's notes, which were helpful, but I'm sure there are sources online to help. What I can say is that FPGA development isn't the same as software development. You have to think differently when implementing designs, since you're literally describing the circuitry instead of writing code that runs on predetermined hardware. I used Xilinx Vivado WebPACK (it's free) throughout my FPGA usage.


The books by Pong Chu are really good. You can choose either VHDL or Verilog and then the FPGA you are getting started with. I went through 'FPGA Prototyping by Verilog Examples' as my first starting point with Verilog and it was very easy to follow. This will give you a basic level of understanding of FPGA development, from there you can study more specific topics.

There was a good article yesterday on HN that went through the process of taking the CORDIC algorithm and implementing it in Verilog; it's a good example of using FPGAs to implement a specific algorithm.

http://zipcpu.com/dsp/2017/08/30/cordic.html
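For a taste of what the article covers, here's a hedged sketch of a single CORDIC rotation stage (rotation mode, shift-and-add only - no multipliers). The module and parameter names are mine; the linked article builds the full pipelined version:

    module cordic_stage #(
        parameter WIDTH = 16,
        parameter STAGE = 0,                   // stage index i
        parameter signed [WIDTH-1:0] ATAN = 0  // precomputed atan(2^-i)
    ) (
        input  wire signed [WIDTH-1:0] x_in, y_in, z_in,
        output wire signed [WIDTH-1:0] x_out, y_out, z_out
    );
        // Rotate in whichever direction drives the residual angle to zero.
        wire ccw = ~z_in[WIDTH-1];
        assign x_out = ccw ? x_in - (y_in >>> STAGE) : x_in + (y_in >>> STAGE);
        assign y_out = ccw ? y_in + (x_in >>> STAGE) : y_in - (x_in >>> STAGE);
        assign z_out = ccw ? z_in - ATAN             : z_in + ATAN;
    endmodule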


Start by getting a simulator. Download Vivado Webpack from Xilinx: https://www.xilinx.com/products/design-tools/vivado/vivado-w... . It's free (but phones home). It comes with Vivado simulator that will let you play around with coding before buying any hardware.

Then, take a look at http://www.fpga4fun.com which has lots of tutorials to help you get started.


Great thread with lots of good answers and perfect timing too. As a HW (board-level) guy pivoting into this space just recently, here is my algorithm, so to speak, for getting into it. Please keep in mind that I have always "worked with" FPGAs, just not "worked on" FPGAs.

1. Purchased a Zynqberry FPGA board from Trenz. Why? For the price you get the best-documented FPGA dev kit on the planet. They have a great wiki with good documentation. Do your own research, but this is based on mine. If you want to develop anything with FPGAs, your options come down to the "X" and "A" companies: solid toolchains, documentation and communities (both official and SO).

2. Work your way through the Vivado tutorials on the Xilinx website. They will walk you through creating a project from scratch, all the way up to building a variety of RTL projects integrating IPs (Xilinx's or custom ones you created). Vivado includes its own flavor of simulator, where you learn to simulate your designs and perform timing analysis.

3. If you don't have the necessary digital logic background, either get a decent book or enroll in nand2tetris (someone else suggested that).


You should start by learning VHDL or Verilog and the best environment to do that in is simulation, not FPGAs. FPGAs are hard and annoying to debug, so you'll be spending a lot of your time writing test benches that run in simulation anyway.

A nice free one for Verilog is Icarus Verilog. The FPGA vendors have toolchains that include simulation, so you could also download Xilinx or Altera's free tools and start there.
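To show what that workflow looks like, here's a minimal self-checking testbench around a made-up 8-bit counter. With Icarus it runs as: iverilog -o sim tb.v && vvp sim

    `timescale 1ns/1ps

    module counter (input wire clk, rst, output reg [7:0] q);
        always @(posedge clk)
            if (rst) q <= 8'd0;
            else     q <= q + 8'd1;
    endmodule

    module tb;
        reg clk = 0, rst = 1;
        wire [7:0] q;

        counter dut (.clk(clk), .rst(rst), .q(q));

        always #5 clk = ~clk;            // 10 ns clock period

        initial begin
            repeat (2) @(posedge clk);   // hold reset for two edges
            #1 rst = 0;                  // deassert away from the edge
            repeat (10) @(posedge clk);  // count for ten edges
            #1;
            if (q !== 8'd10) $display("FAIL: q = %0d", q);
            else             $display("PASS");
            $finish;
        end
    endmodule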


Icarus is great, but I'd still recommend the free-to-use simulators from the FPGA vendors (e.g. Xilinx xsim), as they support more of the modern SystemVerilog and VHDL languages. Restricting yourself to the older subset of Verilog that Icarus supports is going to be frustrating in the long run.


Is modern Verilog not frustrating? I don't know much about it, but you're certainly right that the older stuff can be.


It's different, for sure. It has a lot of very nice features that make it harder to shoot yourself in the foot during design. It is more expressive, especially with multidimensional types and real-number modelling, plus a whole host of verification features like constrained random verification, classes, coverpoints, assertions, etc.

I have to say I like it, but the learning curve is steeper, and you can still code 'foot-shooting' solutions since it is backward compatible - so it's better to learn the new, more correct way than to start with old Verilog and then learn the new features.


I am only halfway through them, but you can get a Lattice iCEstick for less than $30 and open-source tools. I've started the hackaday.io bootcamps, which so far have been really good.

https://hackaday.io/list/160076-fpga-tutorials

They had a chat with the guy who wrote them a few weeks ago, and there are more coming. Very accessible, and you don't have to use the iCEstick very much - most of it is done with free tools.


Since nobody mentioned it yet, I think AWS's F1 instances starting at $1.65/hr are worth considering. The underlying board is fairly expensive, but given that you mentioned algorithmic trading, I assume you're intending to do this professionally (in which case production will look more like the F1 instance than a small dev kit).


Maybe if you're already familiar with FPGA development and have a specific application in mind. The size and complexity of the F1 accelerator (and, in particular, the way it's interfaced to the computer) will be a major obstacle if you're still learning.


Programming FPGAs by Simon Monk and the Mojo v3 are what I've been getting started with. It might be a little basic if you've been working in-depth with GPUs already, but the Mojo v3 is a great board to learn on - it connects directly to your laptop via USB. It takes a lot of the magic out and makes it feel more like you're just programming an Arduino.


Are these types of boards suitable for developers, writing code that eventually will run in a production financial environment?


It teaches the same first principles in a smaller format with less room for error, but this particular board won't be able to scale up to a prod environment.

Agree with another commenter that an AWS EC2 F1 instance might be closer to prod for your purposes.


I highly recommend getting the Mojo v3 and the O'Reilly book titled "Learning FPGAs". Both are great low-barrier-to-entry options for beginners, yet powerful enough, hardware-wise, to do much of what you'll need for quite a while until you need to productionize.


This is really cool. The GPU project they are currently featuring is amazing. I didn't realize that was possible.


Alchitry has the Mojo board. They're going to release their Gold and Copper boards soon, along with expansion boards.

https://alchitry.com/collections/all

There's also the Arduino MKR Vidor board which combines an Arduino programmable processor with an FPGA.

https://store.arduino.cc/usa/arduino-vidor-4000


I'd like to share my two cents about jumping the gap from writing code to writing code that infers hardware (having done it myself).

As some others mentioned, the most important distinction is knowing what the logic synthesizer and place/route tools (which could be looked at as compiler passes) can do to infer circuits from your code.

Fundamentally, you're designing digital logic circuits and subsequently describing them in code. So go get an introductory digital logic textbook and read it (e.g. Logic and Computer Design Fundamentals, Mano & Kime). You'll need to understand the basics of timing as data propagates through logic circuits. Further into the book you'll get some ideas of basic digital logic patterns.

Here's one nifty trick: if you can make sure that the function computing the next value of a bit fits into one lookup table (LUT), you can be pretty sure that your circuit will run fast. For the uninitiated, a typical LUT has about 4 to 6 inputs and feeds a single-bit memory (flip-flop) on its output.
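In Verilog terms the trick looks like this (a made-up example): the next value of the output depends on only four signals, so it fits in a single LUT feeding a single flip-flop, and the path through it is as short as the fabric allows:

    module lut_sized (
        input  wire clk,
        input  wire a, b, c, d,   // 4 inputs: fits one 4-6 input LUT
        output reg  bit_out
    );
        always @(posedge clk)
            bit_out <= (a & b) | (c ^ d);   // one LUT -> one flip-flop
    endmodule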

You can try to hack it with OpenCL, and you may have some success. But do not expect GPU-like NDRange kernel duplication to get you very far. Instead, use single work-item kernels and make a systolic array. Look up "systolic array matrix multiplier".
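For illustration, here's the structure of one processing element of such a systolic array, sketched in Verilog rather than OpenCL (names and widths are mine): operands march through a chain of these while each element accumulates its own partial product.

    module systolic_pe #(parameter W = 16) (
        input  wire                  clk, clear,
        input  wire signed [W-1:0]   a_in, b_in,
        output reg  signed [W-1:0]   a_out, b_out,  // forwarded to neighbours
        output reg  signed [2*W-1:0] acc            // this PE's partial result
    );
        always @(posedge clk) begin
            a_out <= a_in;                        // pass A along the row
            b_out <= b_in;                        // pass B down the column
            if (clear) acc <= 0;
            else       acc <= acc + a_in * b_in;  // MAC - maps to a DSP block
        end
    endmodule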

Also, FPGA tools suuuuck compared to software tools! So expect bizarre pains in the butt.

Good luck! It is doable.


Can you really benefit from this new skill? Because it is very different from GPU programming. In fact, it is mostly describing hardware in a very low-level language. You must develop everything yourself to get that eGPU-like functionality. All cards are good; you just need to pick the right one for your application. Terasic has a nice range of boards, starting with cheap ones for beginners up to advanced ones with high-end FPGAs.


I'm not sure how programming FPGAs compares to GPU programming, but it's a completely different way of thinking compared to traditional software.

The last time I really did FPGAs was over 10 years ago, but unless the tools have gotten orders of magnitude better, in addition to the other concepts mentioned you need to understand clock domains, metastability, pipelining, etc.
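Clock-domain crossing in particular has a canonical pattern worth learning early. A minimal sketch of the standard two-flip-flop synchronizer for a single-bit signal (buses need handshakes or async FIFOs instead):

    module sync2 (
        input  wire clk,       // destination clock domain
        input  wire async_in,  // signal from another clock domain
        output wire sync_out
    );
        reg [1:0] ff;
        always @(posedge clk)
            ff <= {ff[0], async_in};  // first FF may go metastable;
                                      // the second gives it a cycle to settle
        assign sync_out = ff[1];
    endmodule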

I preferred VHDL to Verilog because you shouldn't need a lint checker for your HDL. But unless it's changed, Verilog is a lot more popular.

Training is available, for example https://www.xilinx.com/training/atp.html . It probably costs more than you're interested in paying, and you should have the basics covered first.


Are these trainings worth the cost? Has anybody reading this done them / would be able to provide feedback?

Also seconding the need to understand clock domains and metastability, timing constraints, etc.


This is the core of a computer engineering degree.

You're looking for (1) probably a first course in the basics of circuits, (2, 3) first and second courses in "digital design" or "logic design", (4) an operating systems course (emphasizes how this stuff all works with "the real world"), a basic programming class if you don't have one, and some domain-specific stuff such as digital signal processing, graphics, statistics, finance, or whatever else you're trying to do with this thing.

Probably 5-6 good university classes in total. If you could do these classes a la carte, it's probably the fastest way to learn this stuff (skip the gen eds and all other non-degree requirements). Frankly, it's a lot if you have no prior experience, but I'm not sure that's true.


"When choosing a development board, consider what you get with it and what you want to use it for. FPGAs are ideal for use with high speed peripherals, and in general it is much easier to buy a board that contains the part you want, rather than trying to add one on later (and inevitably giving up and upgrading to a more capable board). Examples of things you might want, and are quite difficult to add yourself:

- Gigabit Ethernet
- HDMI/DVI
- PCI/PCI Express
- External non-serial memory (DDR/Flash etc.)

Things that are relatively easy to add, and are not so much of a big deal to wire up yourself:

- MMC/SD cards
- Character (e.g. 16x2) LCDs
- Anything I2C/SPI and relatively low speed
- VGA (with low colour depth)

I like having a board with many (at least 8) SPST switches and LEDs, and momentary buttons. Unlike a microcontroller where it's relatively easy to spit debug information out of a serial port or to an LCD with a single C function call, debugging FPGA designs is a bit harder. LEDs provide a zero fuss way to break out internal signals for visualisation - if you're tracking the progress of a complex state machine, you can light up an LED when it gets to a certain point without adding any extra logic. While these are easy enough to add yourself, I find that it's better to get a board that has them so that you don't waste valuable user IOs or waste time investigating failures caused by your terrible soldering skills."

Found here: https://joelw.id.au/FPGA/CheapFPGADevelopmentBoards
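The LED trick in that quote is essentially one comparator; a hedged sketch (the state register and encoding are made up):

    module fsm_debug (
        input  wire [3:0] state,    // your FSM's state register
        output wire       debug_led
    );
        localparam WAIT_FOR_ACK = 4'd7;             // hypothetical encoding
        assign debug_led = (state == WAIT_FOR_ACK); // lights when reached
    endmodule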


Like, 8 years ago I got a Spartan-3E board and a book on VHDL programming that didn't use the exact board I had. And that seemed to work - you'd plug in via USB. I ran the Xilinx software on an EeePC. The act of learning was more about I/O to peripherals than just implementing algorithms on an FPGA.

So that's how my experience went. I don't know whether modern resources are more or less computation-heavy.


Are there any FPGA development workflows that use all-FOSS (software) components?


Yes, for the Lattice iCE40 and ECP5 families: https://github.com/YosysHQ/nextpnr


You can try the (~$15) UPduino 2.0 [0]. It's supported out of the box by the free (OSS) IceStorm toolchain [1]. The UPduino board itself is also OSS.

[0]: http://www.gnarlygrey.com

[1]: http://www.clifford.at/icestorm/


Quick glance reveals these run at around 12MHz, which may be a bit on the slow end for algo trading (?)


I mean, Algo traders I know are working on teams of 10-20 people with deep knowledge of $5k boards over a half-decade, so I think for "getting started" those might be pretty good.


If you have the v2 board you can use the PLL to bring that 12 MHz up to a higher frequency (150 MHz maybe?). You have to instantiate a PLL block, and the icepll tool will generate the code for the frequency you want.


iCE40 ICs have both 10kHz and 48MHz oscillators built-in, as well as multiply/divide capable PLLs that work up to 250 MHz.
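On the UltraPlus parts the 48 MHz oscillator is exposed as a primitive you instantiate directly; a sketch assuming Lattice's SB_HFOSC primitive (check the iCE40 UltraPlus docs for the exact ports and divider values):

    module use_hfosc (output wire clk);
        SB_HFOSC #(
            .CLKHF_DIV("0b10")   // "0b00"=48MHz, "0b01"=24, "0b10"=12, "0b11"=6
        ) hfosc_inst (
            .CLKHFPU(1'b1),      // power up the oscillator
            .CLKHFEN(1'b1),      // enable the output
            .CLKHF(clk)          // 12 MHz out with the divider above
        );
    endmodule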


Digilent has been making affordable FPGA Development kits using chips from multiple vendors for a long time.

I am sure you will find something suitable there.

https://store.digilentinc.com/fpga-development-boards-kits-f...


For developers with a software background, Xilinx provides software-defined development flows. These include the features of Xilinx Vivado HLS, plus goodies that help a rookie dev deploy their algorithm on the machine.

You can write your algorithm in C/C++/OpenCL and let the SDx tools do their job of creating the RTL and building the system. SDx supports an x86 flow for functional verification, which saves a great deal of time over RTL verification. Then you can move to hardware emulation, which can verify timing-related issues with your algorithm. Finally, you can run the hardware flow from the same tool. SDx supports an Eclipse-based debug perspective in all flows.

As for hardware, an Amazon AWS F1 instance is the easiest thing to start with. For people with a GPU background, SDAccel will be a nice way to go.

Xilinx recently announced the Versal product, which is a GPU-like card that can be connected to a machine with PCIe support.

Happy Programming Hardware! Cheers



I found the Free Range VHDL [1] book a nice starting point. It is open source - you can download it or buy a copy - and it is especially useful if you have a strong background in software development, as may be your case. I also found VHDL easier to use at the beginning; it may be somewhat overwhelming at first, but its rigor also helps during the learning process. Perhaps the most difficult thing to understand is that when you are programming an FPGA you are really describing hardware, and you have to learn to think in parallel. It also took me some time to understand that not everything can be synthesized and downloaded into real hardware. Anyway, hope it helps and good luck!

[1] Free Range VHDL: The no-frills guide to writing powerful code for your digital implementations, by Fabrizio Tappero and Bryan Mealy, www.freerangefactory.org


I <3 TinyFPGA and also it's open source DO IT


Since you mention using FPGAs for algorithmic trading, I can only guess that you wish to employ FPGAs as some sort of signal processor. I would suggest investing in an FPGA-driven software-defined radio (SDR). You'll get the best of many worlds, as an SDR does something that is immediately useful and demonstrates how the FPGA fits into the larger picture as part of a signal processing chain. Once you have the complete signal flow from beginning to end mastered, building the signal processor you have in mind will be far easier. You will need to brace yourself for learning at least 6 different languages, including VHDL, Verilog, C, Python, Javascript, and Go.


You could start by buying an FPGA board, but it will probably sit around for a few weeks while you develop some code to run on it. Instead, you can do an awful lot without having a development board and I suggest getting familiar with the development tools first, with zero $$$ upfront.

Text Editor: There's a very nice article on choosing a text editor here: https://www.fpgarelated.com/showarticle/37.php

Simulation: ModelSim is the industry-standard simulator and (as someone has already mentioned here), there is a free version on the Intel (was Altera) web site. You will need to sign up to get the download. This version of ModelSim will let you run any design up to a certain number of lines of code, but it's plenty to get started with.

Xilinx Vivado has a built-in simulator, but I have spent most of my career using ModelSim, and I haven't had the need to switch. You also need to sign up to download Vivado; the free version is called Vivado HL WebPACK (Device Limited).

Synthesis and Place/Route: Synthesis turns your HDL code into a netlist which uses the primitive logic blocks of the FPGA that you're targeting and defines how they are logically connected to each other. Place and Route takes those blocks, finds a place for them to go in the FPGA and tries to connect them up so that the logic delay from one flip-flop (through combinatorial logic + wire delays) to the next flip-flop will meet your target clock speed.

When it comes to seeing what your HDL code will turn into once it's been put into an FPGA, I find that Xilinx have the most advanced FPGA toolchain (Vivado) with some excellent visualisation features for learners. Stay away from Vivado HLS for now; if you want to learn HDL code, the HLS tool is a distraction that won't teach you core HDL design concepts. Vivado does not support Spartan-6 parts; only 7-series and newer.

Using Vivado, take a sample piece of code and try running it through synthesis (don't place/route yet). Look at the result in the schematic viewer. Look at the hierarchy viewer, too. Try to trace a line in the source code to its synthesised result. Once you're happy that you can do this, try running place/route (Run Implementation). You can cross-probe from the schematic view to the chip view and highlight wires and blocks with your choice of bright colours. Take a look at how the schematic primitive blocks end up getting placed and routed in the chip.

Note: If you don't care about connecting up I/O pins in Vivado and you just want to see what a small bit of logic looks like, you can specify "-mode out_of_context" under "Project Manager -> Settings -> Project Settings -> Synthesis -> Options -> More options".

Advanced Stuff: Anyone can make an FPGA design work at 10MHz or 20MHz, so try something harder. Start off by setting a slow clock speed constraint (50MHz), then try increasing the speed in 50 or 100MHz increments until you hit 300MHz (400MHz or 500MHz if you're adventurous). When does your timing fail? What do you need to do to your logic to make it meet timing?
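The usual answer to that last question is pipelining: trade a cycle of latency for shorter combinational paths. A hedged sketch (module and names are mine):

    module sum4 #(parameter W = 32) (
        input  wire         clk,
        input  wire [W-1:0] a, b, c, d,
        output reg  [W-1:0] sum
    );
        // Naive version: 'sum <= a + b + c + d;' chains three adders on
        // one combinational path, which fails timing first as the clock
        // constraint tightens.
        // Pipelined version: two shorter paths, one extra cycle of latency.
        reg [W-1:0] ab, cd;
        always @(posedge clk) begin
            ab  <= a + b;
            cd  <= c + d;
            sum <= ab + cd;
        end
    endmodule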


Slightly off-topic, but is it worth learning FPGA development, for existing CS devs, when considering a future career?

I don't mean FPGA is worthless. I mean that, as a former EE student, I know an FPGA career requires pretty high expertise in areas quite far from core CS topics, such as HDL, signal processing, and low-level logic design. I have hardly ever seen job openings for FPGA devs that require less than a master's degree in that specific field.


It's probably worth dabbling in. I have only used them to understand how to do threaded programming better and how core CS concepts work in practice, as it helps to have different perspectives on how to tackle a task.

For material career help it depends. Having some experience with it will give you more flexibility, though it might be hard to get resume worthy experience with it.


No.

I have an undergraduate degree in computer engineering (basically this stuff) but do mostly software these days.

You're better off going deep in one or the other, either how to design this stuff or the algorithms/math/CS-heavy parts.

I've written a lot about this on my personal blog.

http://www.davidralbrecht.com/notes/2018/06/specialization.h...

http://www.davidralbrecht.com/notes/2018/10/python:-the-new-...


I'm just starting to learn about this field as well, so thanks for posting! This video seems like it's a pretty good introduction: https://www.youtube.com/watch?v=0zrqYy369NQ

Another specific question that comes to my mind -- does anyone have anything to say on languages? VHDL vs Verilog? Others? Which is best for beginners?


I see that as a religious war, with both factions sticking to their respective guns. It's a means to an end. VHDL is more popular across the Atlantic than it is here. In the US, more regulated industries (medical, mil-aero, automotive) prefer VHDL, while consumer companies (FAANG) prefer Verilog. Syntactically, VHDL is more verbose owing to its Ada origins, while Verilog traces its origins to C. So read through some sample code on GitHub and see which one piques your interest. If you choose Verilog, FPGA4Fun has you covered.


Hi there, first HN post. It seems to me that an FPGA can be simpler to use in conjunction with a CPU, i.e. run the main program on the CPU and complex algorithms on the FPGA. Arduino released a small board with an Arm core and an FPGA; it might be a good place to start. https://www.arduino.cc/en/Guide/MKRVidor4000


I recommend the PYNQ board from Digilent; it'll help ease your way into hardware design.


I'd actually recommend learning digital design first. Go through a course like nand2tetris. If you are writing HDL without a solid idea of what RTL logic you expect to get back, you're in for a hard time.


Not an FPGA expert by any means, but I saw that Make magazine published a booklet about FPGA development for beginners. Maybe you could use it as a very short intro.


Do you like Haskell? http://www.clash-lang.org/


Altera has the Cyclone IV & V chips, sold on boards by Terasic. I'd start with one of those boards.


Just use InAccel :)


I learn best fiddling first and theory later. Check out the DE0-Nano: a Cyclone IV with RAM, buttons, LEDs, EEPROM, accelerometer, ADC and header breakouts, under $100.

Any FPGA dev kit in that class will work. I was able to jump right into it with the basic examples and had my own hardware UART up in a couple of weeks.

The only downside is that when I was using it, the software was Windows-only and very bloated.



