Rules for new FPGA designers (zipcpu.com)
190 points by jsnell on Aug 25, 2017 | 74 comments



> Do not use an asynchronous reset within your design.

This is flat out wrong. Registers that drive output pins should always be reset asynchronously. If you're not careful with your board design, it can happen that your FPGA is powered and configured before your clock signal is available, especially if it is generated by a PLL. An asynchronous reset on the output IO registers ensures that you're not inadvertently driving external electronics with bogus signals.

For internal registers it usually doesn't matter whether they're reset synchronously or asynchronously. There are some cases where the synthesis software cannot move registers with asynchronous resets from flip-flops into built-in hard IP cores like DSP elements or block RAMs, so a sync reset can make sense there for performance reasons.

There's one caveat, though: An async. reset signal must be released synchronously to the clock, or there will be timing errors. However, this is a tiny, simple construct easily written by any non-beginner.
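
Such a reset bridge might look like this in Verilog (a minimal sketch with illustrative signal names, not code from any particular design):

    // Assert reset asynchronously, release it synchronously to clk.
    reg [1:0] rst_sync;
    wire      rst_n = rst_sync[1];       // use this as the design's reset

    always @(posedge clk or negedge rst_n_pin)
        if (!rst_n_pin)
            rst_sync <= 2'b00;                // assertion needs no running clock
        else
            rst_sync <= {rst_sync[0], 1'b1};  // release takes two clock edges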

(source: I design FPGAs for a living)


This is actually surprisingly hard to do reliably when you're not using an FPGA but a large board with TTL or CMOS logic. In one particular design I really struggled with keeping outputs 'muted' until the control system was up and running. In the end my solution was to put a single relay in the supply line to the power electronics and to trigger that relay only after the control system output a complex code on its IO lines. The chances of this happening randomly during the power-on sequence were nil, and once the system was powered up it did not matter any more (the relay held itself, and the only way to reset it was an emergency stop, after which the whole machine would be cycled anyway).

Hackish but it worked very well.

The root cause of the complexity was that the system contained way too many circuits that could be powered up and down independently, some of them under end-user control, and there was no easy way for me to reliably sequence power. So the only place where any difference could be made was the end stage, ignoring everything in between.

Connected to the end stage were some pretty powerful servos and inputs to inverters driving large AC motors, so keeping things 'off' until the time was right was rather crucial to the safe operation of the machinery.

As far as I know these systems never failed to reliably power up in production, even with thousands of systems sold and used by three shifts daily.


Most real-world hardware bugs I've come across are power-on-reset problems: bad pin states (smoke included), garbage data in/out, or timing requirements so awkward that the part couldn't be used in an integrated system without some sort of isolator.

The few FPGA projects I completed all had async reset released synchronously. This wasn't taught to me, I was just so paranoid of making a crap device that I wanted to be sure, and it was the only scheme I could come up with. It always ended up being a fairly significant part of the routing, considering what it was doing.


Eh, the design may be able to work without async reset. If the board level reset net causes the FPGA to reconfigure, then the real I/O reset is implicit anyway. In this case you do need to give initial values to registers so that the synthesis tool can figure out the initial state (it may not be able to infer it from the sync reset net alone).

I mean initial value like this in Verilog:

    reg output_reg = 1'd1;  // initial value loaded into the FF at configuration
(or at the very least check that the synthesis tool did infer correctly).


I'm sorry, but what you said doesn't make any sense in the context of hardware design.

There are two ways in which the nodes of your design can obtain new values:
- Through a clock edge
- Through an asynchronous event like a reset

Your expression for that initial value doesn't represent any of those. You need to have an initial state on your design in order to have the proper initial conditions. And that initial state is the reset.
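
In Verilog terms (generic templates with placeholder names, shown as two separate registers to avoid multiple drivers):

    // New value only through a clock edge:
    always @(posedge clk)
        q_sync <= d;

    // New value through a clock edge or an asynchronous reset event:
    always @(posedge clk or posedge rst)
        if (rst) q_async <= 1'b0;
        else     q_async <= d;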


On FPGAs there is a third way for a register to get new values: FPGA configuration. When the FPGA is (re)configured, the initial values specified (as in the previous post) are loaded into the flip-flops.

BTW, this Xilinx white paper is a good read about FPGA resets: https://www.xilinx.com/support/documentation/white_papers/wp...


What did you study / how did you learn to design FPGAs? I had an early intro to Verilog in school but it all seemed so opaque to me. Do you think it's possible for people to teach themselves? Thanks.


I studied computer science and taught myself FPGA design on my own.

The thing is, implementing non-trivial FPGA designs is much closer to software development than to analog hardware design, both in the problem-solving approaches and in the tools you use (text-based languages instead of schematics); a "low-level mindset" is necessary, though. The association of FPGAs with electrical engineering is mostly due to historical reasons.

As for learning FPGA design: Download the free Xilinx Vivado WebPACK (or the Intel/Altera equivalent); it is so feature-complete that it's even sufficient for many commercial designs. Most online tutorials (and university courses) are pretty crap though, because they teach you extremely low-level approaches, like building your own adders, which you should never do in FPGAs. This is great for understanding bit-level arithmetic, but useless for approaching and partitioning larger problems. It's the equivalent of teaching someone assembly (great for understanding the inner operations of a CPU), and then asking them to implement a webserver.
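
To make the adder point concrete, in an FPGA you just write the operator and let synthesis map it onto the dedicated carry chains. A trivial sketch (names are mine):

    // Inferred adder -- never build this out of gates by hand on an FPGA.
    always @(posedge clk)
        sum <= a + b;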

Besides, most online tutorials and university courses teach outdated coding styles, like separating sequential and combinational processes. It's good to read, e.g., the Xilinx forums to stay up-to-date.


I taught myself and now design FPGAs for a living as well. Had 1 class in college that, like your experience, was totally opaque. But it sparked my curiosity.

Not sure I can advise exactly _how_; I'm usually strong at self learning. Just letting you know it's possible. Personally I'd start with IntelFPGA (formerly Altera) products; I found them easier to use.


Personally I taught myself with the aid of an Altera DE2 board (I had a professor at university who was kind enough to lend me one of the class ones on an extended loan).

The DE series still exists and has a wide range: http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=E...

The DE0 Nano is pretty cheap: https://www.digikey.co.uk/product-detail/en/terasic-inc/P008... the more capable DE10 Nano looks interesting too: https://www.digikey.co.uk/product-detail/en/terasic-inc/P049... only slightly more expensive.

They provide a good plug and play solution. Program over USB, software available free from Altera (Ages ago Mentor used to do a free version of ModelSim in conjunction with Altera, something similar is probably still around).

You can also just grab Icarus Verilog (http://iverilog.icarus.com/); it's an open-source Verilog simulator. It will let you get to grips with Verilog without needing the hardware. The main problem is you may start building designs that simply cannot be built in hardware (there are plenty of ways to build stupid circuits in Verilog).

There are few resources on the internet (compared to what's available for learning programming). Maybe try 'Digital Design and Computer Architecture' by David Money Harris & Sarah L. Harris.


I wouldn't recommend Icarus Verilog, it's not a very good simulator. You're much better off downloading either the free Altera/IntelFPGA copy of Modelsim or using the free version of Xilinx's Vivado.

The DE10 Nano is a great board though, huge FPGA that can hold big designs and the HPS gives you a lot of extra capabilities.


> I wouldn't recommend Icarus Verilog, it's not a very good simulator.

Haven't really used it much myself, but fair enough :) I suspect many people on HN would be more comfortable with free open-source tools, which is why I mentioned it.

The majority of the EDA world is proprietary software and huge licensing fees, with free limited versions available for home/educational use if you're lucky. A bit of a culture shock for someone used to the software world!


And the proprietary stuff is better 99% of the time.


> I wouldn't recommend Icarus Verilog, it's not a very good simulator.

I'm going to make you define "good". Icarus, at one point, handled way more of the standard than even Cadence did and had far fewer bugs.

And I can tell you that at least one graphics chip used to run its validation suite through Icarus.


I seriously doubt that. Care to share more?


Well, a couple of the people with whom "The Steves" (inside joke: for a while it seemed like everybody who was committing code to Icarus Verilog was named Steve; I think there were 5 at one point) used to work were on the Verilog committee itself. Both Cadence and Synopsys used to fail the validation suite that Icarus used to run.

And I, personally, was the person who reduced our Cadence Verilog licenses by 25% once I got Icarus Verilog up and running. At the time, the EDA vendors were dragging their feet on Linux versions because you had to buy significantly more licenses if you were stuck running it on Sun equipment.

Obviously, my info is highly dated. I haven't tracked Icarus Verilog in quite a while, but code doesn't magically get worse as long as it is being actively maintained. And the original writer/maintainer (Stephen Williams) is a solid developer and is still on board.


I use Icarus (and GTKWave) all the time for small unit tests. The problem is that they cannot handle vendor-encrypted designs.


I was using free tools, so maybe that was the issue, but I always found that trying to even approach abstraction would totally explode my routing requirements. Is this the case with professional tools, or is abstraction avoided?


What abstraction do you mean?


This helps a lot actually, thank you. I know what works for one may not work for another - but it's good to know that revisiting it as a hobby may not be a complete waste of time.


As the author of the article ... I was also self-taught.

Dan


You're absolutely right, but I feel the author's statement refers to internal logic only. I/O bring-up is certainly more tricky; a reset that goes through the FPGA is not a guarantee of proper power-on behaviour anyway, since you also need to deal with the pre-configuration state.

But since we're here talking about resets I'll give my own 10 cents for beginners:

- Do not reset EVERY register. Before you apply reset to a register, ask yourself: does this piece of logic really require an initial state? This is especially true for data registers whose validity is signaled by a separate control signal. Rule of thumb: reset the control signals, not the data values; or better, design your circuits so that you only need to reset control signals (state machines included). Excessive use of resets puts a burden on the routing tools, can negatively affect your timing, and can prevent certain optimizations.
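
In code, the rule of thumb looks something like this (a sketch with my own signal names):

    always @(posedge clk) begin
        if (rst)
            valid <= 1'b0;     // control: needs a defined initial state
        else
            valid <= in_valid;
        data <= in_data;       // data: qualified by 'valid', no reset required
    end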


I used to think the same way, but I've since changed my mind and now consider it premature optimization. It's totally okay in a just-for-fun project, but otherwise the usual rules for optimization apply: 1) only do it once you know if and where it is actually necessary, and 2) measure the impact of your changes.

Especially for beginners, I would recommend resetting all your registers, because errors due to uninitialized signals are a pain to debug, and they might not be visible in simulation. For example, if r is uninitialized and has the value 'U' (or 'X'), then

    if r = '0' then ... else ... end if;
will execute the else-branch in simulation, but might take the then-branch in hardware.


Here are some more rules I like to use:

Use I/O flip flops: the external world should be separated from the internal world via one pipeline stage, and that stage should be in the I/O cells. This is not always easy to pull off (Xilinx, ugh), but when you do it you will get consistent I/O timing from run to run, and basically not have to worry about it once it's working. It's less error-prone than trying to have accurate external timing constraints. This is really for older (board-level) synchronous designs.
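
On Xilinx parts, for example, you would typically request this with the IOB attribute on the register (quoting from memory, so treat this as a sketch and check your tool's documentation):

    (* IOB = "TRUE" *)   // ask the tools to pack this FF into the I/O cell
    reg dout_ff;
    always @(posedge clk)
        dout_ff <= dout_internal;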

Often the FPGA and board designs are happening at the same time. Get a skeleton design working before the board design is complete. This design should include all I/O, clocks and resets. There can be many non-obvious constraints involving these things, so you better have them right before the board design is done or you are in for a world of hurt.

Along the same lines: take baby steps during the design (get blinking LEDs and some kind of software accessible register interface running first).

Oh a big one: use version control. Unfortunately the tools (Xilinx) do not make this easy, but it is really essential. You would be surprised at how often this is not done.


>Do not use an asynchronous reset within your design

It's not bad advice for new designers, but be aware of the consequences: you might pay a performance penalty if your FPGA's flip-flops do not have direct synchronous resets. And if your design ever has to be ported to an ASIC, you might have the same issue.

Better is to use the "asynchronous assert, synchronous release" reset and learn about recovery and removal timing closure.

In the past I've also designed with no timing requirements on reset, but start out all state machines doing nothing, and have a (synchronous) start pulse or edge to get things going. This can allow you to use the already existing slow global reset net. It mattered on older FPGAs with limited routing resources.


> Better is to use the "asynchronous assert, synchronous release" reset

Also put that synchronized async. reset onto a clock buffer if your synthesis tool isn't inferring one since it will be a high fanout net. For an FPGA target you lose most of the benefits from synchronizing it if you let the reset be delayed by running over the normal routing fabric. For an ASIC you're going to have to buffer it no matter what.

A common problem is that most FPGA dev boards don't have any provision for an external reset, and students never learn the discipline of properly initializing their circuits. Xilinx takes an official stance against global resets [1], which I feel does untold damage to impressionable developers. Thankfully they do have the ROC library component that will simulate a reset after completion of configuration. Use this or your platform's equivalent if you don't have access to an external reset (rough sketch after the link below).

[1] https://www.xilinx.com/support/documentation/white_papers/wp...
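
From memory, instantiating ROC looks roughly like the following. Treat the port name and the exact behavior as assumptions and check the Xilinx libraries guide:

    wire por;              // power-on/configuration reset
    ROC roc_i (.O(por));   // O asserts after configuration, then releases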


I would say that not using an asynchronous reset is very, very bad advice. Sometimes the clock is not even available to the system at the beginning of execution.


> Do not transition on any negative (falling) edges.

> Falling edge clocks should be considered a violation of the one clock principle, as they act like separate clocks.

I think this is a good thing to point out if you are working in a primarily rising-edge system, which is common, but if your _entire design_ uses falling edges for one reason or another, I don't see the problem.

I have had to transition a design to falling edge when I needed interop with existing external hardware that operated on the falling edge. Rather than invert my clock, using the falling edge was fine.


Resets NEED asynchronous assertion: what if the clock distribution logic is messed up, or just isn't running yet at power-up? De-assertion does need to be synchronized, though. Method is left as an exercise for the reader.

Also, aside from the latency added, dual-flop synchronizers are pretty good probabilistically for uniformly distributed single-bit random events, but they aren't guaranteed. For mesochronous signals they can actually make things worse, and for periodic signals or buses, there are much better methods.

But those issues usually come up in advanced designs with high-speed data inputs (PCIe or Ethernet) or other reasons to NEED multiple clocks. For beginners, the important thing to remember is that resets assert asynchronously and de-assert synchronously.


Great advice, and clear writing style.

Story time! During my 2nd year of university, I built a small CPU as a project. It was split across 2 breadboards, using an FPGA for the control unit.

It worked fine during debugging, but sometimes it would give really weird problems. Then when we tried to debug again, it was fixed. What was going on?

We'd forgotten to connect a common GND between the breadboards. When we attached the probes to debug it, the ground was passed via the USB port on the PC.


In low power circuits forgotten ground is annoying but that's about it. In power circuits it can cause spectacular and expensive side effects.


And with RF, well, things get really weird really fast...


As an undergraduate I worked in a lab where we made very sensitive voltage measurements on a large apparatus. The experiment was grounded to a large copper bar buried under the floor, but the measurement equipment was connected to the building ground, thereby creating a gigantic loop that could pick up all kinds of crazy noise. The solution was to break the (literal) ground loop with a buffer amplifier.


With added lottery for spontaneous creation of miniature microwave ovens, depending on the power levels.


Literally the same rules I was taught on day 1 of my engineering co-op 23 years ago.

We also added, "no external capacitors can be used for timing," because people would try to fix things that way.


Good list for starters, but obviously the reset question has nuances (not opening that can of worms).

One thing that bit me when I was a complete n00b: assigning registers from within more than a single always block. My simulator (at the time) handled it perfectly, but the synthesis tool silently ignored one of the blocks.
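
The pattern in question looks roughly like this (illustrative, not the actual code that bit me):

    // Two always blocks driving the same register -- don't do this.
    always @(posedge clk) q <= a;
    always @(posedge clk) q <= b;  // a simulator may quietly pick one
                                   // assignment; synthesis may drop a driver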

EDA tools suck. There, I said it. Coming from a software background, it's truly shocking how poorly errors and warnings are handled. My "favorite" part is that you cannot enforce a "zero warnings" discipline, because the libraries and examples from the vendors provoke thousands of warnings and the only workaround is to filter the individual instances of the messages.


"One thing that bit me when I was a complete n00b: assigning registers from within more than a single always block. On my simulator (at the time) it worked perfectly but the synthesis tool silently ignored one of the blocks."

It's tool dependent but I believe you should see a warning that two drivers are assigned to the same net.

This is probably a case where, I'm guessing, you mistakenly thought you were creating a register in Verilog with the keyword "reg". Synthesis tools don't work like that and haven't for quite a while.

Taken from https://blogs.mentor.com/verificationhorizons/blog/2013/05/0... :

"Initially, Verilog used the keyword reg to declare variables representing sequential hardware registers. Eventually, synthesis tools began to use reg to represent both sequential and combinational hardware as shown above and the Verilog documentation was changed to say that reg is just what is used to declare a variable. SystemVerilog renamed reg to logic to avoid confusion with a register – it is just a data type (specifically reg is a 1-bit, 4-state data type). However people get confused because of all the old material that refers to reg."

A lot of people here on HN seem to be self-taught and not keeping up with tool and language developments. If you use tools and techniques from the '90s, don't expect wonderful results.


They mostly seem reasonable to me. I'm not sure exactly what they mean by external wire inputs in this statement: "Synchronize all external wire inputs with two clocks before using them."

You can ruin a design by trying to synchronize data. We've had to respin an ASIC because somebody "synchronized" the data on a bus.


And to clarify further, they mean two flip flops in series, not two clocks.

It sort of becomes a short hand to refer to the delay incurred by flip flops in series as "clocks" (short for clock cycles). And to capture data in a flip flop is often referred to as "clocking" the data.
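
In Verilog, the construct being described is just this (sketch, my own names):

    reg [1:0] sync;
    always @(posedge clk)
        sync <= {sync[0], async_in};  // two flip-flops in series
    wire safe_in = sync[1];           // use only this downstream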


Thank you for that comment. I've adjusted the text so that it's hopefully clearer!

Dan


Heh, I was wondering why "use two clocks" came so soon after "use only one clock".


  >> "Synchronize all external wire inputs with two clocks before using them."
This is why you need to sync external (async) inputs:

"Metastability in electronics is the ability of a digital electronics system to persist for an unbounded time in an unstable equilibrium or metastable state.

In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation.

As a result, the circuit can act in unpredictable ways, and may lead to a system failure..."

    https://en.wikipedia.org/wiki/Metastability_in_electronics


Formatting related aside: don't put spaces in front of your URLs, as that causes them to be formatted as code and makes them unclickable.


Unfortunately, FPGA designers don't get the big bucks anymore.


There is a wrong link to http://asic-world/verilog/veritut.html . It seems Firefox automatically fixes it to http://www.asic-world.com/verilog/veritut.html . Does anybody else find this behavior surprising?


On the topic of good practices for FPGA design, does anyone have a recommendation for an online course that covers this or similar material?


Not an online course, but my go-to recommendation is "FPGA Prototyping by Verilog Examples" [1]. Or, if you want to learn VHDL, [2].

[1]: https://www.amazon.com/dp/0470185325

[2]: https://www.amazon.com/dp/0470185317


The old edition of the VHDL book had dated coding practices that are no longer necessary, like splitting out combinational and sequential logic. Is it still like that?


Not necessary, but still easier to read in most cases.


I don't know about online, and it's pricey, but I got a lot of good information from attending training at Xilinx. https://www.xilinx.com/training/atp.html


Depending on who your distributor is and your relationship with them, you may be able to get Xilinx to waive the training fees too.


Nice list. I wish I had something like this as a student. I made all those mistakes and had to learn them the hard way.


What is the best resource for understanding how to create FPGA constraints? Any good resources for floor planning?


IMHO, if you have to floor plan, you're doing it wrong.

You should try to design FPGAs so that they build like software: you should be able to run the tools as consistently as you would run software through a C compiler, without struggling with failed timing. If you had to do floor planning to close timing, you are giving up a huge advantage.


I think that really depends upon the application. If you're using an FPGA because you need custom hardware but can't justify the cost of a full ASIC, then by all means floor-plan (indeed, you may need to in order to close timing at your desired frequency).

If you're building a soft-core CPU designed for a wide range of FPGAs and you can't make it go without a custom floor-plan then yes you're probably doing it wrong.


That's good in theory, but ... it's not realistic. The compilers are too unstable. I lost a couple days of my life last year because Quartus forgot how to route its multipliers.

But usually you only have to floor plan if you're near the limits of your target FPGA, or if you're using some of its special IP (like, say, pinning some of the DRAMs or PLLs).


Yup, or your design is just large. Due to the non-deterministic nature of the compiler, guiding it at a high level makes it less likely to choose resources in weird locations. E.g., I roughly map out block RAM assignments for some of the top-level modules but still give it plenty of wiggle room.


Not only that, but you don't want to have to redo analysis and verification on blocks that have already been mapped, placed, and routed, especially as one does minor bug fixes towards the end of a design. It's like refusing to use libraries.


How does one go about learning IC design to begin with? I only have the vaguest notion of what all these terms mean to begin with. Is it analogous to teaching yourself to code, or is it fundamentally different/harder?


It's very different if you do it right, but not really harder. The first thing you learn is digital logic design at an abstract level. Something like this: https://www.csie.ntu.edu.tw/~pjcheng/course/asm2008/asm_ch2_... Then, once you can make digital designs, you learn to describe them in a hardware description language (HDL) such as Verilog. When reviewing Verilog designs it's very easy to spot the "Verilog coders" vs. the hardware designers.


No, it is not harder. But in hardware, everything mostly runs in parallel. So if you have learned how to write multithreaded programs, then you basically know the level of difficulty of designing ICs.


No falling edge clocks? Good luck writing a proper SPI peripheral.


For doing SPI within an FPGA under these rules, you would treat the input clock as a signal sampled by the FPGA clock (not used as a flop clock), and detect a falling edge synchronously with a chain of two flops.

Works very well so long as master clock speed >> SPI speed.
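
Something like this (a sketch; the names and exact widths are mine):

    // Sample sclk with the system clock, then find its falling edge.
    reg [2:0] sclk_sync;
    always @(posedge clk)
        sclk_sync <= {sclk_sync[1:0], spi_sclk};
    wire sclk_fell = sclk_sync[2] & ~sclk_sync[1];  // high for one clk cycle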


True, but many slow uCs support system clock == SPI clock. Probably not as relevant these days with 99% of stuff happening on 32-bit devices with clocks so fast you would need RF experience to route SPI at system clock.


For the most part, you shouldn't map SPI peripheral clocks directly to FPGA clocks. They are two very different things. Instead, treat the SPI clock as just another I/O signal.


Good list. I've seen use of both synchronous and asynchronous resets. People have some good arguments for both methods sort of like the tabs vs spaces arguments.


The problem is that reset logic requires a bit of design rather than hard-and-fast rules.

Designs are getting larger these days, with a lot of 3rd-party IP that you can't assume uses any particular reset method.

Tips #2 and #5 in this article, http://www.eetimes.com/document.asp?doc_id=1278998 , explain how you have to tailor your reset methods to the modules you are integrating.

If you don't, you chew up resources.

A new FPGA designer should learn the first principles so they can understand how to make decisions and where to look for potential issues when bugs occur.


A bit late to the party, but my advice: unless your goal is to learn high-performance design, save yourself a lot of effort by using a higher-level HDL. Depending on how cool your professor is and what their pedagogical goals are, they'll be fine with it. Using something like CLaSH will statically prevent you from doing a lot of hardware stuff that's non-synthesizable, flammable, or just plain dumb. You can do a lot of things in standard HDLs that just don't make any sense, and a lot of students get confused and do them. Not only will something like CLaSH or Chisel prevent you from doing dumb things, it will get you in the mindset of doing things correctly.


I don't agree with teaching students experimental high-level languages in lieu of proven industry standards just because those standards are archaic and/or unintuitive. It's a great academic endeavor, but the FPGA (and ASIC) landscape is driven by industry, not by academia.

If you're aiming for an FPGA job after school you'll need to be proficient in Verilog or VHDL (ideally both); there's no shortcut. The sooner you learn to deal with their quirks and pitfalls (I agree they have a lot of them), the better. Sprinkle some good ol' TCL in there and you're good to go. Yes, Python is better and more feature/library-rich, but the industry is still using TCL (which is not bad, just not modern).

Don't get me wrong, I'd like to see a standardized higher-level approach to hardware description, but unless the vendors agree on and support it there's very little chance it will be useful. The current trend in high-level synthesis is non-portable, vendor-specific tools. The only way I see the trend changing is when FPGAs become more mainstream (already happening in the server/deep-learning sectors) and there's a critical mass of customers asking for FPGA tools on par with software tools (i.e., high-level languages, open source, etc.).

PS. You forgot the Python-based MyHDL :)


It's the experimental part of the high level language that is the problem. I agree you shouldn't teach it to students. It just leads them down a divergent path away from what is done in industry. It isn't addressing the needs of the student, only their short term "wants".

But the language is just a small part of the design process. You have to learn to design HW. The HW engineering project tailors the tool choices around the requirements of the product. It is assumed that engineers know the fundamentals; they can adapt to any high-level synthesis tool.

Vendors' training courses for all the fancy HLS tools are done in a few days at most. They don't have a semester for newbies to learn Verilog/VHDL or C/C++ first; it's assumed you already know them.


Using high quality modern HDLs helped me to understand how e.g. Verilog should be used more than using Verilog ever did.


Meh. Anything that tries to fool beginners into thinking they're writing software rather than defining hardware is a bad idea.

Understanding why something is non-synthesizable or inefficient in hardware is 90% of your education as a chip designer. Exposure to HLS tools can, and should, wait.


Neither CLaSH nor Chisel do this. You're probably thinking of stuff like SystemC, which attempts to compile C-like imperative code to hardware.

If your language (in the case of CLaSH) or DSL (in the case of Chisel) is designed correctly, there's a clear and direct correspondence between high-level semantic constructs and low-level hardware constructs.

Take a look at Conal Elliott's lambda-ccc. The semantics of synthesizable hardware and the semantics of lazy lambda calculus are exactly the same in many fundamental ways. There's no fooling or trickery going on. If you understand why your lambda calculus program is divergent, you understand why your circuit is unstable.


Seconded. There's also Lava (and variations); Lava predates both CLaSH and Chisel.



