Disclaimer: I haven't read through the details of the language yet.
> The merits of using a functional language to describe hardware comes from the fact that combinational circuits can be directly modeled as mathematical functions and that functional languages lend themselves very well at describing and (de-)composing mathematical functions.
Whenever I read something like this, I cannot really take the language or the language designer seriously. The complexity and difficulty of hardware design is not in the combinational part, it's in the sequential (i.e., state-carrying) part of the circuit [1]. One of the major drawbacks of Verilog, SystemVerilog, and VHDL is that successive sequential statements have to be translated manually into state machines (at least for synthesizable code – simulation code does not suffer from this restriction) [2].
[1] Source: I'm an FPGA design engineer with a computer science background.
[2] There are of course languages with an improved design, but nearly all of them are research prototypes and unsuited for non-toy/example designs. The more innovative commercial products have hardly any market share, because electrical engineers seem to be extremely conservative [1].
I used to work in ASIC design and verification. I have to agree with you. I can't take any of this stuff seriously.
They are all toys aimed at newbies. They address problems that are no problem at all to a qualified engineer in the profession.
Combinatorial logic is not hard. I would argue sequential stuff isn't that hard either. It's all undergrad engineering stuff studied in the first year or two.
The design size of a modern ASIC is at a massive scale that makes the problems of whatever description language you use trivial. Even the cheapest stuff has quite a few modules: memory controllers, CPU, caches, power stuff, bus controllers, various I/O modules, debugging modules. Nowadays it's multiple cores. Even the sub-modules are just wrappers around other cores sometimes.
The teams are large, usually in the dozens because the problems are non-trivial.
This Clash stuff is like building a mailbox. Engineers in the profession are designing and verifying the equivalent of an apartment complex.
Reduceron, which I try to maintain, is written in York Lava. It has a module for writing sequential logic (called Recipe) and while it makes sequential logic way easier, I have been working on an alternative for a while.
I completely agree that the sequential parts of a design are the challenging (= bug-prone) parts, and IMO this is the area where an EDSL like Lava can really shine.
If you have worked in ASIC design, you are almost certainly used to working not with Verilog directly, but with an ad-hoc macro language. I've seen them all, but most commonly Perl is used. It is especially for circuits that we desperately need better tools and abstractions.
When designing with Lava you aren't describing a circuit, but the method to create that circuit. That makes it relatively trivial to completely parametrize it or statically check properties of the circuit.
As an example, if done right (Recipe isn't), adding a pipeline stage can be a matter of adding a single line. Compare that to what it takes in, say, Verilog.
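Roughly, as a toy sketch (plain Haskell lists standing in for signals; not the actual Lava/Recipe API):

    -- model a net as the list of its per-cycle values,
    -- and a register as a one-cycle delay
    type Sig a = [a]

    delay :: a -> Sig a -> Sig a
    delay initial xs = initial : xs

    -- the original combinational datapath
    datapath :: Sig Int -> Sig Int
    datapath = map (\x -> x * x + 1)

    -- the pipelined version: the "single line" is composing in 'delay 0'
    datapathPipelined :: Sig Int -> Sig Int
    datapathPipelined = delay 0 . datapath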
This style of ASIC development is simply incomparable to the primitive Verilog-style, but there's not enough experience in the industry to understand it [yet].
Now, Clash isn't (IIUC) an EDSL like Lava, so without knowing more about it, I fear it loses much of the point.
Working in industry it IS straight Verilog directly. No macro language in Perl or anything else.
Maybe people use emacs to help make stuff neat but that's it.
Qualcomm has their own tool that created register files for sub-modules from an MS Word table and it sucked big time.
Parametrizing is just via `defines.
On the verification side, it is System Verilog or something similar. On top of that is a pile of crap called UVM.
For an engineer, the language that describes the circuit does not matter. It does not matter how many lines it takes to add a pipeline stage. The effort is in creating the design itself, not how it is represented in text files.
Typical engineers write hundreds of lines every day when they feel like it. Think of the complexity of an SoC in a mobile device. There are millions of lines of Verilog for the design and the same or more lines of System Verilog to do the verification. Typing it in is the easiest part.
The real work is simulation of the design under test running millions of cycles. Say we are verifying the wifi, we have to simulate traffic of transactions to and from the device. Millions of transactions while the design is running at various clock speeds with other modules also creating traffic on the buses etc.
Waveforms and log files are the majority of the work.
These languages from academia don't fix anything I care about.
> For an engineer it does not matter about the language that describes the circuit. [...] Typing it in is the easiest part.
Well... once upon a time, people were writing software with assembly, and they probably didn't care about the number of lines either. After all, typing was easy, in fact the only problem was just porting your program to a different architecture :-)
You talk about the complexity of a SoC, and you are right it is complex. But software is complex, too: an application relies on hundreds of millions of lines of code, counting everything between the hardware and the application (OS, network stack, HTTP server, all libraries, database, etc). So how do software engineers manage all that? They use better languages than assembly. Because what matters is the performance/expressiveness ratio (which is why C is still relatively popular) and how hard it is to shoot yourself in the foot (which is why C is not as popular as it once was :D).
> They are all toys aimed at newbies. They address problems that are no problem at all to a qualified engineer in the profession.
I think this kind of attitude is one of the main problems of the hardware design industry (the other one is the conservative mindset). Today's beginners could be tomorrow's hardware designers; instead they switch to software because the culture is much more open and friendly. Besides, you know what they say about asking users what they want: if Henry Ford had asked his customers what they wanted, they'd have asked for a faster horse ;-)
> So how do software engineers manage all that? They use better languages than assembly.
Well, better languages than assembly, yes, but the situation is not so simple. In the past 40 years or so in the software industry, there were two, and exactly two, language-related major productivity leaps in the production of "serious" software: C and Java. Java didn't add too much in expressivity over C (in fact, it added less than C++ had), and neither did C add a lot of expressivity over assembly (more convenient syntax but not too many new abstractions) -- but both added a lot in terms of safety, modularity, portability and the like.
The problem is that we haven't been able to make yet another significant jump. Much of current PL research focuses on expressivity and proof of correctness -- two things that aren't the really painful problems for the software industry these days, and aren't what made C and Java such productivity boosters, either. In both cases, productivity was enhanced not through some clever language syntax design, but through the ecosystem the language supported (i.e. extra-linguistic features).
Yes, some other languages like Ruby added some productivity benefits, but mostly in the "toy" application department -- the same kind of applications people used to use MS Access or VB for, or other "application generators" prior to those -- and you see Ruby shops transitioning to Java as they become "serious". The thing is that with modern hardware, even toy software projects are quite usable, but this isn't the case with hardware design.
Good point about the reasons for productivity leaps. Funnily enough, I recall that one of the main selling points of Java was portability (the famous "Write Once, Run Anywhere"), even though C was already supposed to be portable; but the environment wasn't, and you still had to develop OS-specific code.
Anyway, yes I agree the ecosystem is key. This is why a lot of new languages are either running on the JVM (like Clojure and Scala) or are interoperable with C (like Rust and Nim). In other cases, languages have managed to create an entire ecosystem very rapidly (Node.js and Javascript in general, Ruby and others).
> The thing is that with modern hardware, even toy software projects are quite usable
You're right. I must admit I'm not a big fan of this "throw more at the problem" approach, and I wonder for how long this approach will work. After all, Moore's law is slowing down, as the unit cost per transistor has stopped decreasing after 28nm.
> but this isn't the case with hardware design.
Yes, for one because if you design hardware that is sub-optimal, in other words that requires more transistors, that translates into higher costs, and lower margins. That's some powerful incentive IMO. Not to mention that if the hardware is not powerful enough, the toy software projects that you mention will have trouble running ^^
Therefore I think a central question is the loss of performance compared to the gain in productivity. A naive translation of C to assembly leads to disastrous performance. For years (decades?) people had to resort to assembly for speed-critical routines, and you can still find assembly in video decoders for instance. But the productivity gain is so much higher that the loss of performance is acceptable.
I must be too old. The new guys can come and bring their new tools. I am retired now.
Back when I started, we used to care how the older engineers did stuff. The academics who trained us at uni had never worked in industry and had no real clue. Hence we had to learn the real stuff from the older guys. These days we are the old guys but just treated like yesterday's newspaper.
Not necessarily, I am a beginner in hardware design, and I am creating a new programming language and IDE for hardware design.
> The academics who trained us at uni had never worked in industry and had no real clue. Hence we had to learn the real stuff from the older guys.
I can relate: it is by talking to older designers that my co-founder and I learned, for instance, that CDC (clock domain crossing) is a big thing to address. But a lot of times we were just dismissed as too young and inexperienced. Anyway, I'm curious, in your opinion what would be the biggest pain points in digital hardware design today?
Verification and synthesis are the pain points. Projects revolve around them not the design. In HW, the design of each module itself is usually pretty conservative. We are a conservative bunch remember! But it's feature creep that crams in more and more modules to handle the variety of ways a single SoC can be applied to multiple markets.
The end result is multiple clock domains and buses going everywhere, multiple RAM modules, lots of different clock gating options to shut down parts of the system not in use. Etc.
When I look at a large design, I don't worry about the design effort. The first thing that comes to mind is the verification effort. You spend about four times or more the amount of effort verifying the design because you cannot afford any mistakes. That effort is building everything around the ports of a design that makes it thoroughly verifiable. The stimulus is constrained randomly generated transactions on every input to the design and checkers of every output. Coverage determines whether the random transactions have covered every case. And it's not just every combination of inputs. Every module has state internally: you have to cover every possible state transition.
One of the older guys at my old company joked: "These days the design is done by the verification team."
The behavior of the design is modeled by the verification engineer so they can detect when the design has a bug. Often the verification team is working with an empty stub because they get ahead of the design team. It gets to the extreme case of the verification team telling the design team how to write everything.
To whomever downvoted this: would you mind telling me why? Have I offended you in some way? Is it because of the criticism of the hardware industry? Or the sarcasm?
Thank you for this perspective from the top. Sure, when verifying the final ASIC design, the problems of the implementation language don't matter much anymore. However, those IP cores have to be written by someone, and they actually get to suffer the deficiencies of the HDL.
> I would argue sequential stuff isn't that hard either. It's all undergrad engineering stuff studied in the first year or two.
So? That's like saying that "writing software isn't that hard either. It's all undergrad comp science studied in the first year or two", which is obviously false. If your tools and languages suck, they'll slow you down and you'll produce more bugs – in software as in hardware.
I often get the impression that many hardware engineers don't even realize how much their tools suck, because they lack the perspective on the outside world, specifically the software world, to see better ways of doing things.
I am a pretty junior ASIC engineer who has relatively more exposure to software world compared to many of my peers.
In my last project I did some IP integration work and the lack of decent development tools led to significant chores. I did make use of Emacs verilog-mode like GP mentioned and even wrote some small macros in elisp, but the overall experience is far from satisfactory. Not to even mention the usability of proprietary EDA tools and flows. The software world just looks like heaven with so many awesome development tools available (VS, PyCharm, IntelliJ, to name a few I've used). And I can tell the difference because I once managed to convert a C# GUI program into a console application mainly with the help of an IDE (Visual Studio), without first learning C#.
Sadly, I think the ASIC (or maybe broader, hardware) industry has a pretty poor ecosystem in general. I'm not aware of a good hardware-focused community comparable to HN, nor of high-quality, active Q&A on sites like Stack Overflow; even the HDL languages look inelegant and not well thought out. And you have a point, I'm not even sure how many of my colleagues realize that. I want to make some difference. But I'm not sure where to start...
> I want to make some difference. But I'm not sure where to start...
1. Take a promising language, improve it where necessary.
2. Add excellent support for translation to VHDL and Verilog. The generated HDL code has to be readable, editable, and it has to reflect the structure of the original code more or less 1:1. You also need to support "inline VHDL/Verilog" (like inline assembly in software). Otherwise, your language doesn't integrate into the ecosystem of synthesis software, simulators, vendor-dependent Map/P&R, and existing IP cores, which makes it useless in the real world. This is the main reason why all innovative VHDL/Verilog replacements have failed so far. Without this feature, there's just no way your language is going to gain any significant market share.
Are you referring to a new DSL based on software languages, something like MyHDL? I think this is definitely promising for designs that are started from scratch. However, as soon as there is need to integrate with other legacy code, it falls back to the current painful way of manual integration.
What do you think about an IDE that supplements existing HDL languages? Not as drastic as making a shiny new language, but it avoids many challenges you brought up.
Since you are coming from a CS background, do you have recommendation for good IDE frameworks that can be leveraged?
I wrote and verified IP and I worked at the SoC integration level. I would say hardware engineers realize the tools suck but this stuff is targeting issues I don't care about.
The bugs are not due to the language generally. If software guys want to write a new language for HW engineers, they need to ask HW engineers what the issues are rather than going off on a tangent.
As it is they don't even know what the workflow is like and where the real pain lies. They are just writing tools for themselves. The problem is their applications are not commercial.
New commercial tools are targeted at the verification side: UVM etc. This is where most of the time is consumed. This is where the software guys could offer some actual solutions because UVM is object oriented all over the place. Us HW guys are idiots and don't understand the performance costs of making all the object hierarchies and using reflection everywhere. When simulations are so long and on such large designs, the computing resources needed becomes huge.
That's like saying that buffer overflows are not due to how strings are represented in C. Yes, a bug is the programmer's fault, but languages can make it harder to have bugs in the first place (for instance, you can't have buffer overflows in any language higher-level than C).
In the case of the hardware industry, we see a language with a weak type system (Verilog). So the "solution" is to use lint tools to check that your design is properly typed. Similarly, nothing in VHDL/Verilog prevents you from accidentally creating latches, or crossing domains badly, or many other small bugs. So again, the "solution" is to do extensive verification. That's the Haskell VS Python debate: you can either make it impossible to have a certain class of bugs (and detect them very early), or you can just write a lot of tests (which, in the case of hardware, take forever to run).
The same logic extends at a higher level. You can have a language that models common patterns (like a synchronous loop, a "ready before valid" port) so you can reduce the amount of verification needed because you don't need to always re-verify everything. One could even argue that having such higher-level mechanisms would also make static verification easier, since you might reason in terms of transactions rather than updates to registers.
Latches are caught by the synthesis tools generally. Designers usually run an unconstrained synthesis as a matter of course to determine whether they have written junk.
As for the other bugs, I look forward to this new paradigm of spotting them early without verification effort.
Reasoning about transactions is already how verification works. It is transaction based. UVM is a library of SystemVerilog classes aimed at abstracting the verification to higher levels.
Interesting, I feel like you would like what we've done with the Cx language at Synflow. Probably the exact opposite of CλaSH, the language is sequential imperative (C-like even) and focuses on making the sequential part easier (Cx still has first class support for parallel tasks and hierarchical descriptions though). You have synchronous "for", "while", "if", this kind of thing :-)
http://cx-lang.org
Enjoy!
I've actually had a brief look at Cx a couple of months ago, and it looked very promising! Unfortunately, due to private and job-related reasons I didn't have the time to look at it in depth.
There's one thing that irritated me though: reading from a port twice seems to trigger a clock cycle (did I get that right?). My intuition tells me that this is a huge source of bugs, comparable to the inferred-latch-instead-of-combinational problem in VHDL/Verilog. I might be wrong though, since I haven't actually designed anything with it.
> Reading from a port twice seems to trigger a clock cycle (did I get that right?)
You did! It is by design that reading from the same port twice will trigger a clock cycle :-) There are several reasons why we did it this way. First, having to always explicitly declare a new clock cycle (rather than having it inferred) is kind of ugly, because your code is full of "fence;" instructions. The second reason is that we thought this would actually prevent bugs ^^ (the third is for symmetry with writes I think)
The thing with Cx is that, unlike VHDL/Verilog, reading a port can mean more than accessing a single signal, and similarly writing to a port can be more than writing a single signal. For example "sync" ports have an additional "valid" signal that is set to true for one cycle when data is written to the port. This is very handy, allowing synchronization between tasks (read becomes blocking) and is useful as a control signal (the "valid" signal serves as the write enable on a RAM for example). We also have a "sync ready" that adds an additional "ready" signal computed asynchronously and again useful for back-pressure control in pipelines, FIFOs, etc.
I don't have experience with Esterel in particular, but I can make some statements about imperative synchronous languages in general:
- They're a step in the right direction (e.g., implicit state machines), but then they're doing too much of a good thing. For example, in my experience, "abort" and especially "suspend" statements are not that important in non-toy examples, but they make the compiler more complex.
- Integrating external IP cores is difficult, but is absolutely essential for any real-world design.
- Even small FPGA designs usually contain at least two clock domains. Such designs are typically impossible to express in imperative synchronous languages.
There have been a few different proposals to use functional languages to write hardware: Chisel (https://chisel.eecs.berkeley.edu/) and Bluespec (http://www.bluespec.com/) are two others. They haven't really taken off because the productivity bottleneck in hardware is not in design but in verification, and specifically sequential verification. Combinational verification is quite easy, because modern SAT solvers can prove almost all combinational properties you can throw at them.
The trouble comes in when you start dealing with complex state machines with non-obvious invariants. I don't think these functional languages can really help too much here because unlike C or C++ in the software world, there isn't unnecessary complexity introduced by a language, e.g. verification becoming harder due to aliasing. It's the properties themselves that are complex. Lots of progress is happening though: Aaron Bradley's PDR (http://theory.stanford.edu/~arbrad/) and Arie Gurfinkel and Yakir Vizel's AVY (http://arieg.bitbucket.org/avy/) have made pretty major breakthroughs in proving a lot of complex sequential properties and these algorithms have made their way into industrial formal tools as well.
Having said all that, I'm currently writing a lot of Verilog for a system design that I'm working on. I also learned to program Haskell at university, (although it's been a few years...), so this language would seem PERFECT for me. But it isn't...
Reading through the CλaSH documentation/tutorial I've found the examples are baffling. I have no idea what is being written in either Haskell or Verilog space. The examples seem to be focusing on the language aspects of the tool rather than how to express actual hardware in it.
I agree, it is an interesting idea. I suspect that just having "Haskell" got the link a lot of upvotes :-) As stated on the "Why CλaSH" page, the advantage is obvious for combinational circuits. But it doesn't seem to help much for synchronous logic (which arguably represents the majority of hardware designs). You'll still be writing everything as "how to update register X in state S".
Helpful explanation. Thanks! Looking at this I have a very specific question, from a practical getting work done point of view, is this better than what we have, or just different to what we have. If the answer is "different" I don't mind, but it will hinder my adoption ;-) I guess this is why I want to see a side-by-side comparison of real hardware constructs. Things I use regularly. It would aid greatly in understanding the (potential) benefit of expressing things these ways.
Hard to say, it depends on what you consider better. It certainly is more concise than the equivalent Verilog (just like Haskell is more concise than pretty much any language I know). This seems especially true when you want to describe repetitive structures (such as their FIR filter).
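For a flavour, here is a plain-Haskell list sketch of a FIR filter (just to show the shape; the real CλaSH tutorial version uses fixed-size vectors and Signals, but is similarly short):

    -- y[n] = sum_j coeffs[j] * x[n - j], with x treated as 0 before time 0
    fir :: Num a => [a] -> [a] -> [a]
    fir coeffs xs = [ out n | n <- [0 .. length xs - 1] ]
      where
        out n = sum [ c * x (n - j) | (j, c) <- zip [0 ..] coeffs ]
        x i | i < 0     = 0
            | otherwise = xs !! i

    -- fir [1,1,1] [1,2,3,4] == [1,3,6,9]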
CλaSH also has a much better type system than Verilog (again, thanks to Haskell), but if you wanted a good type system when describing hardware, you might as well just switch to VHDL ^^
My concern is with the description of state machines. You need to specify if you want a Mealy or a Moore machine, something that is usually implicit. And you're still describing the transfer function between states; CλaSH does not seem to allow you to describe your program in a structured way (such as loop until x becomes true, wait for 3 cycles, read z, while z > 0 decrement z, etc.)
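A hedged sketch of the transfer-function style I mean: "while z > 0 decrement z" still has to be phrased as a per-cycle current-state-to-next-state function (the final wiring line is indicative only, since the exact mealy signature differs between CλaSH versions):

    countdown :: Int -> () -> (Int, Bool)
    countdown z () | z > 0     = (z - 1, False)   -- keep counting
                   | otherwise = (z,     True)    -- assert 'done'

    -- circuit = mealy countdown initialZ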
Well, when I used VHDL back in college, I noticed that it had a really hard time with math. So for example it could do look-up tables all day (basically switch statements), but if you tried to encode that logic into the kind of math we're used to in a C-based language, where A = B (insert operator here) C, it fell down hard, and the circuit would be so unstable that it would only run a few cycles before spinning off into some exceptional state that was nothing like what we expected.
I think that's because humans have a hard time considering the ramifications of things like boundary conditions and edge cases with respect to types. So maybe we can visualize one register being added to another, but we can't intuitively extrapolate what happens when one is signed and one is unsigned, or their widths are different, or one is floating point, etc etc etc. VHDL doesn't handle all of these edge cases very well (because for one thing they are hard!); it just does exactly what it's told. That often flies in the face of intuition, once we've analyzed the circuit and seen how much we underestimated the complexity of what we were asking. In other words, elegant math doesn't always translate to simple circuits, and vice versa. So it really needs a meta language that can grapple with these subtle nuances and compile to VHDL without a lot of friction.
Probably what’s going to happen is we’ll see DSP logic (and limited subsets of it like GPU shaders/OpenCL/CUDA) and VHDL/Verilog merge into a functional concurrent language that can cover all of it. It won’t be as explicit as Rust because it will infer what the user is after but allow for overriding default assumptions. It won’t have opaque syntax either like most functional languages today. I’m thinking probably it will look more like MATLAB/Octave but have access to some of the more concise notation of Mathematica. So think Excel except having cells arranged arbitrarily in some ND space rather than 2D, and we’ll be able to specify formulas on groups of cells rather than individually, and in any language we desire that’s then compiled to Lisp and either run on distributed economy hardware or translated to a hardware description language. CλaSH probably isn’t it, but its approach and open source license are certainly a start.
Realized I didn't answer the question - different yes, but probably not different enough to be compelling for mainstream use at this point. Without having ever used it, I have concerns that circuits will still fall down or take up gratuitous chip area because handling the edge cases is one of the more complex problems to solve, and I'm not convinced that functional programming alone is enough.
This is very interesting. I have always felt that a language with immutable constructs translates very well into a hardware description language. This is because most hardware modules are immutable, except for state machines, which can be modelled using state monads (or something similar).
Considering the hardware tooling right now, I would love to see a language with more abstractions than Verilog or VHDL. I would love to see programming languages that don't just compile to SystemVerilog/VHDL, but can move directly to the synthesis step. I have worked with Bluespec and I don't see it as the successor.
I would say a language like Haskell (or a derivative) can map to hardware really well, and this is a very good attempt.
Please pursue it / turn it into a product, so that hacking on hardware (an FPGA) is much easier and more beautiful.
I tried this out briefly while working on a VHDL project some time ago. I absolutely love how quick it is to run and test your code, compared to the tooling for VHDL. Also, Haskell's terse syntax is very much a plus! :)
When I was writing the chips from nand2tetris in HDL I noticed how cleanly they could be expressed with function composition in SML. I dismissed the ideas as a bit weird, if cool, but after a few weeks I came across HardCaml http://www.ujamjar.com/hardcaml/ and some references to HML (Hardware ML). Unfortunately only the thesis survives and not any actual code, but still.
The moral of the story is to take your weird ideas more seriously I guess ;-)
I don't know the first thing about hardware design, so it's a good opportunity to ask a question.
So say it is actually useful, and after some tinkering I'm actually generating nice Verilog specifications. Can I apply it somehow if I'm not working at Intel or something? What can I do with it?
That's the beauty (and sometimes the curse) of hardware design: it doesn't need a processor to run. You essentially describe an electronic circuit, and "program" (configure would be more appropriate) an FPGA to act like the circuit you described. A circuit can be as simple as you want (like a sequential counter) or very complex (people have implemented entire H.264 and HEVC decoders in hardware).
This is so powerful that in fact you can even describe your own processor, and prototype it on an FPGA for testing.
Perhaps. Perhaps not. If you have an algorithm that is better implemented in hardware, you might well see better performance executing it on a GPU (for example, if you want to mine bitcoins).
Really, FPGAs are meant for prototyping hardware; you synthesize your ASIC onto the FPGA and check it for functionality.
Of course, there's a reason you're making that ASIC.
Or you don't do an ASIC and ship the FPGA. That's common as well (for the big manufacturers it's cheaper to do an ASIC, for the not so big, it's more common to sell the FPGA)
I've used CLaSH over most of the past year to do all my Verilog assignments (CS major doing EE electives), and also contributed a bit to the project, so I figured I'd offer some insight.
For all of you saying "but sequential is the hard part": yes, functional programming is most clearly a smash hit with combinatorial circuits, but CLaSH shines with sequential circuits too. Basically, time-varying nets are represented as infinite streams where the nth element is the value at the nth cycle. Registers are basically `cons`: they just "delay" the stream by one cycle, tacking the initial value on in front. To make complex sequential code you just "tie the knot" around the streams with `letrec`s -- which corresponds exactly to what the feedback loop looks like on the schematic. [Anyone that's done FRP should recognize this idiom.] In this way CLaSH is both more high-level and more low-level than Verilog/VHDL: clock nets are derived automatically, but feedback loops are explicit.
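To make that concrete, a rough plain-Haskell model of the idea (not literal CLaSH code; the real Signal type also carries clock information):

    type Stream a = [a]              -- nth element = value at the nth cycle

    reg :: a -> Stream a -> Stream a
    reg initial xs = initial : xs    -- a register is basically cons: delay by one cycle

    -- "tie the knot": the counter's input is its own delayed output,
    -- exactly the feedback loop you'd draw on the schematic
    counter :: Stream Int
    counter = reg 0 (map (+ 1) counter)

    -- take 5 counter == [0,1,2,3,4]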
Now if you are an electrical engineer, substituting one tedious task (routing clocks) for another (programming without "blocking assignment") might seem like no net gain. But us functional programmers are fluent at working with such fix-points, and at abstracting both what we tie together and the knot-tying itself. The Moore and Mealy combinators are the tip of the iceberg -- examples that we hope will be more accessible to electrical engineers unfamiliar with functional programming.
For all you saying that "the hard part isn't working with the HDL at all, but lower level concerns like timing, layout, etc", I have two things.
First, you are acting like HDL writing is not on the "critical path" of your development process, and thus of no concern. Well, that's just not true -- you can't have one engineer do HDL writing, one do layout, and one do testing completely independently, because there are some basic data dependencies here that linearize the development workflow. It may not be the component with the "most delay", but it's still on that critical path, and thus improving it will yield at least some speedup. Automatic layout and timing analysis is great too, but unless you have a massive amount of computing power at your disposal, AFAIK you can't get very far, so improving the HDL side of things might be the /best/ you can do.
Second, there is the development cost of finding all your bugs with low-level tools. Yes timing analysis is essential, but it's not great in diagnosing the underlying problem. If you have lots of code that, well, isn't very aesthetically pleasing, and you do all your debugging on FPGA or with timing analysis, I suppose just about all bugs look like timing issues. With CLaSH:
- You have far less code, and it's more high-level, so just reading it looking for errors is more productive.
- You can try out your code in the REPL, providing a stream of inputs and getting a stream of outputs. High-level state machine errors (do you really nail these the first time with Verilog?) are easily caught this way.
- Because you have more opportunities to modularize your code, you have more opportunities to test components in isolation. Unit tests vs. System tests--y'all know the deal. The former is no panacea, but obviously it makes complete code/path coverage way more tractable computationally.
- QuickCheck. I generate programs, run my single-cycle and pipelined processors for n cycles, see if they both halted and compare register/mem state, otherwise throw out the test (see the sketch after this list). I /suppose/ you could do this with C-augmented testbenches, but it would be way, way, way more code and effort. QuickCheck worked so well that I never wrote a test bench.
- EVENTUALLY, with Idris or [faking it with] dependent Haskell, prove your circuit correct up to the synchronous model CLaSH is built around.
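Regarding the QuickCheck point, the property is roughly this shape (a sketch with hypothetical simulator stand-ins; the real ones are my CLaSH descriptions run for n cycles):

    import Test.QuickCheck

    -- 'ref' and 'dut' are hypothetical stand-ins for simulating the
    -- single-cycle and pipelined processors; Nothing means "did not halt".
    prop_sameArchState
      :: Eq s
      => (prog -> Int -> Maybe s)   -- reference (single-cycle) simulator
      -> (prog -> Int -> Maybe s)   -- pipelined simulator
      -> prog -> Int -> Property
    prop_sameArchState ref dut prog n =
      case (ref prog n, dut prog n) of
        (Just s1, Just s2) -> property (s1 == s2)  -- both halted: compare register/mem state
        _                  -> discard              -- otherwise throw out the test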
In practice I can say I honestly wrote and debugged programs all from GHCi (the Haskell REPL, so very much in software land), and saw them work the first time on the FPGA. Where this didn't happen, it was usually due to black boxes, like megafunctions and other components on the dev board. Obviously my Haskell testing is of no use if I model them wrong in CLaSH.
Finally, it would be dishonest and misleading not to mention CLaSH's downsides. CLaSH is designed assuming your circuit is totally synchronous (or purely combinatorial, but that's the trivial case). I don't know how often this comes up in the real world, but in interfacing with the components on the dev board, I often had to do things that violated rigidly synchronous circuit design --- inverting my clock to get a second 180-degree-off clock domain, asynchronous communication with SRAM. [CLaSH supports multiple clock domains, but only knows about their relative frequency, not phase.] You can often still describe these circuits in CLaSH, but because they violate its synchronous model, it won't understand them and neither will your Haskell-land testing infrastructure. Basically you lose the benefits that made CLaSH great in the first place. Fundamentally, I think truly fixing these cases means designing a lower-level "asynchronous CLaSH" that both normal CLaSH and these cases can elaborate to. Trying to tack them on as special cases to CLaSH and its synchronous model won't fly.
But all is not lost: if you can contain the model violation to one bit of code and give it a kosher synchronous interface, you are all good. Write some Haskell to simulate what it does (it need not even be in the subset CLaSH can understand), and make a Verilog/VHDL black box. CLaSH doesn't help you with that module, but that module doesn't pollute the rest of your program either. Most real-world designs are by and large synchronous, unless the world has been lying to me. So the quarantined modules would never form a significant part of your program.
That about wraps it up, ...hope somebody's still reading this thread after writing all that.
1. So I've actually never used multiple clock domains with CLaSH. [The inverted clock I mentioned went to the RAM megafunction, which was instantiated in Verilog. For testing purposes my RAM (in CLaSH) had zero-cycle-delay reads, which is what the RAM w/ phase-shifted clock was supposed to simulate. Also the circuit topology is the same either way (but for the inverter on the clock); just the circuit "works" for different reasons, and thus the timing is different.]
I ask about clock domains because you stated real world designs are by and large synchronous. The issue is when we have data crossing clock domains we have a potential area for bugs dependent on the possible combinations of clock speeds.
It becomes effectively asynchronous because we need to determine when data from one clock domain arrives relative to the edge of the other clock.
I can't understand the Haskell stuff you are linking to. I don't know whether it is capable of finding the issues I am talking about.
The second question was just trying to figure whether CLaSH can be used with Verilog/VHDL in some way. I was hoping against hope that there was a usable aspect to it in industry.
I can't figure out whether there is though.
The Haskell aspect is not a sweetener. We don't usually study that and it doesn't look good. You think VHDL looks bad, but I think Haskell looks bad. It's like saying you've been real keen on a new beer made from Brussels sprouts.
10 years ago, people were going on about SystemC. It didn't really catch on and it was a lot more normal looking.
Ok, yeah sorry the docs other than the tutorial assume some familiarity with Haskell.
2. What do you mean by "verification IP"? It was that phrase that made me mention testbenches.
Basically, while CLaSH is hard-coded to understand certain types such as the Signal type, almost all primitive functions are just defined with Verilog/VHDL templates which it instantiates. One can write their own templates that work just the same way. So for any piece of CLaSH-compilable Haskell, you get VHDL/Verilog for free, and for any piece of VHDL/Verilog, you can use it in CLaSH by writing some Haskell (with the same ports, and that hopefully does the same thing) and then telling CLaSH the Haskell is to be replaced with your Verilog/VHDL.
This is about as good bidirectional compatibility as one can get. Automatic Verilog/VHDL -> CLaSH compilation would be an improvement, but I don't think it is possible: I'm not sure to what degree the semantics of Verilog/VHDL are formalized, and even if they are, there's no way the implementations all respect those semantics.
The testbench functions are just templated like any other primitive function.
1.
UnsafeSynchronizer "casts" one signal to another -- it's compiled to a plain net in Verilog/VHDL. At each output cycle n, it looks at the round(n*fin/fout)-th input cycle and gives it that value.
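In pure-Haskell terms, that rule is roughly this (just an illustrative model, not the actual primitive's implementation):

    resample :: Int -> Int -> [a] -> [a]
    resample fin fout xs = [ xs !! inIdx n | n <- [0 ..] ]
      where
        inIdx n = round (fromIntegral (n * fin) / fromIntegral fout :: Double)

    -- take 6 (resample 2 3 [0 ..]) == [0,1,1,2,3,3]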
Obviously this is unsafe because, as you say, in the real world the problem is asynchronous. You don't know the exact frequency ratios and phase differences, nor are they constant, and even if you did you'd get subtle timing errors with an incoming value that doesn't change on the clock edge.
The trick is that it is a pretty basic "black box" to augment CLaSH with, so proper synchronizers can be written in pure CLaSH. If that's not enough for some super-asynchronous synchronizer design, one can always fall back on writing their own black box as described above.
I don't think anyone imagines that CLaSH will be immediately understandable to someone who has never used Haskell. So no way does anyone expect the benefits will be immediately clear. So are you saying the restrictions I mention sound too onerous, or are you saying "I dunno, it looks weird"?
If the former, that's perfectly acceptable, thank you for reading.
If the latter, I'm sorry, but this is a pet peeve of mine -- we get this a lot in the functional programming community. Understand that we are claiming the benefits are worth the non-trivial learning curve. If it were so damn obvious, it couldn't offer much benefit over the status quo -- people would have already switched en masse and it would be the status quo.
While C-esque cuteness looks nice, I agree such things are doomed to failure. The C model is easy enough to understand, but its linearity, implicit state, and notion of control flow have nothing to do with the hardware -- you can understand both models, but the compilation process is necessarily non-trivial and sufficiently "far from surjective" that many designs cannot be expressed at all, and many more must be expressed through very roundabout means.
Functional HDLs like CLaSH have a dead-simple structural compilation model, so while they may not understand every circuit, they can express it -- the compiler is nearly surjective, though not a semantics-preserving homomorphism, counting cases like this SR flip-flop:
    \r s -> let q  = nor r q'
                q' = nor s q
            in (q, q')
This compiles to exactly what it looks like, but diverges (i.e. infinite loops) under Haskell's semantics.
Verification IP is reusable code created by verification engineers. E.g. Say the designers are developing a networking module. The verification engineers would build the verification IP to generate the network packets. They also build the monitors to check the network protocols. For any design under verification, there is a corresponding amount of verification providing stimulus and checking.
The reason I bring this up is: verification is the hard part of the HW workflow. The other similarly tough part is synthesis. Every single project I have been in, verification and synthesis are the toughest tasks that consume the most team effort. Not design. When we plan projects, it all revolves around the verification task.
For every bus, every module and at various stages of SoC integration, we are writing verification code using System Verilog.
If you want to improve our tools, I would look at the verification/simulation or the synthesis side.