The Evolution of Bitcoin Hardware [pdf] (ucsd.edu)
116 points by Katydid on Feb 2, 2018 | 39 comments



Nice history of money-printing machines. It was a very nice and very easy FPGA application back then: just two SHA-256 pipes.
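
A minimal sketch of what such a pipe computes (Python, names of my own invention; the real thing is pipelined hardware, of course):

  import hashlib, struct

  def check_nonce(header76, nonce, target):
      # Bitcoin's proof of work: double SHA-256 over the 80-byte block header
      header = header76 + struct.pack("<I", nonce)
      digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
      # the digest is compared to the target as a little-endian 256-bit integer
      return int.from_bytes(digest, "little") < target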

Is there something worth pursuing nowadays for FPGAs? Machine learning?


FPGAs are worth it for anything that's too fast for software and too niche for an ASIC. In my experience they are very popular in "pro tools": products too specialized, and with too little volume, to justify creating an ASIC, and expensive enough that the price of the BoM is not really a concern. For instance, I work on products that do a lot of video processing on FPGAs (way too high bandwidth for software, too specialized for an ASIC).

Now if you look at consumer electronics you obviously won't find many FPGAs. Too expensive and too power hungry; at large scales ASICs are a better match.


Bernie Meyers of IBM once told me that he believed FPGAs would be the future of Moore's law. He felt strongly that they provided a performance and flexibility tradeoff that made them ideal for use in virtually all servers as coprocessors for certain functions. The fact that FPGA code can be audited was a plus as well.


FPGAs start from far, far behind on the performance end though (by a factor of several dozen). FPGA vendors compete to shift as much as they can out of configurable logic and into lithographically-defined IP blocks because of this severe disadvantage. A modern FPGA will be strewn with hardwired serial interfaces, memory controllers, CPU cores, and the like. Taking advantage of these is almost always a gigantic performance win, and that's assuming the functionality they provide is even possible in configurable logic (e.g. try implementing a 32Gbps serial port using only configurable 500MHz logic).
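
To make the gap concrete (my arithmetic, not the parent's):

  # a 32 Gbps lane against 500 MHz fabric logic
  print(32e9 / 500e6)  # 64.0 bits to serialize per fabric cycle

You simply can't toggle a fabric-driven pin at 32 GHz, which is why the serializers are hardened blocks.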

FPGAs don't trend towards openness, and you have to get a 100x win from their flexibility just to break even on performance.


I'm curious about what Intel will do now that they own Altera. Maybe they'll be able to push FPGA technology into mainstream CPUs.


Not sure about the economics, but a separate PCIe card with an FPGA and its own storage is not that hard to build. Such a setup could accelerate big-data operations for sure.


You can audit the FPGA code, but good luck auditing the bitstream; those tools are all closed. Even for a small FPGA that would be quite a task.


I've never actually priced out an application where FPGAs were competitive on raw compute power -- GPUs have always smoked them for the tasks I was interested in, and not by a small margin. We only ever used them because they were "hardware engineer duct tape" -- they let us talk to chips, networks, and system buses at high bandwidth / pin count. Real-time pipelines were a bonus, but memory and compute were so much cheaper on computers that buffers + overprovisioning were a much more attractive real-time solution wherever the logistics weren't prohibitive. Often they were, though, forcing us to use FPGAs for compute even though they weren't particularly good at it.

I have a sneaking suspicion that this is a more general truth and that in many applications where FPGAs are touted as compute accelerators they were actually chosen to minimize the length of the critical real-time data path rather than to do more computation for fewer dollars.


It's surprising how many places FPGAs show up. I picked up a used Blackmagic PCI capture card that had a Xilinx part on it. It's not a mass consumer product, but I'm guessing they still sell a good number of them.

People seem to forget that much of the allure of FPGAs is the "field-programmable" part - you can fix bugs and upgrade hardware after it's shipped.


We have been developing and selling FPGA-based QAM modulators for DOCSIS and video for the last two decades, but currently all the major companies are slowly moving from specialized hardware to software solutions with a bare minimum of custom hardware. Some devices still retain FPGAs, and the legacy devices sold in the tens of thousands may also require support from an FPGA expert. Multiple solutions use specialized Broadcom chips that are halfway between an ASIC and an FPGA.

FPGAs are expensive by themselves, expensive to design into your solution, very hot, relatively slow, and they require rare expertise from developers. But they have one feature that beats everything - they can be patched in production.


I've heard Microsoft was deploying them as highly programmable, dynamic network switches/adapters for Azure, and also for some Bing applications.

https://arstechnica.com/information-technology/2016/09/progr...


High-frequency trading. They have been on the rise there for the last couple of years.


FPGAs are used in all of the mid- to high-end test equipment: things like signal generators, digital multimeters, spectrum analysers, oscilloscopes and such.


Amazing what innovations naturally spring up from nothing once the incentives are in place.


There's absolutely zero innovation in creating new hardware for Bitcoin mining: the only thing that stops us from having it is economies of scale. Please stop mistaking one for the other.


I think bias is making you more unfair than you need to be.


I think he's being fair. The real difficulty in making an ASIC is justifying the huge price of starting production. The good old "the first chip costs $10 million, the second costs $5", or something like that.

Designing a Bitcoin-mining IP block is not exactly difficult; it's basically two rounds of SHA-256. It's still some work of course, but as far as ASICs are concerned it's very low on the difficulty scale.


The "innovation" here obviously isn't in the "cutting edge technology that moves the state of the art forward" at all.

The innovation at the time was overturning the conventional wisdom that it was "effectively impossible" to do small custom ASIC runs relatively cheaply. People were laughed at back then when this topic was brought up for other use cases.

While I'm sure none of that was super exciting to someone who works on custom ASIC design for some enterprise, it was pretty neat watching what was effectively a bunch of hackers figure the process out and do it for a tenth of the predicted "first chip" costs. That has a lot of value in and of itself: simply proving something is possible for $a_lot_of_money + skill vs. $epic_truckloads_of_money.

Nowadays it's not super interesting since you're back to needing to be a "big player" to get into the game - but for a year or so it was a real fun time to be a bystander and watch the rapid pace of development.


It took much less money than that to develop the first Bitcoin ASICs.

~150k USD for 130nm, 200-300k USD for 110nm, and ~500k USD for 65nm, as of 2013 http://blog.zorinaq.com/asic-development-costs-are-lower-tha...


I doubt that 65nm would be able to beat a GPU (which are all 16nm-class or better) at the task, however. If you make an ASIC but the $3000 Titan V is more power-efficient anyway, then you've wasted your time and money.

65nm and other "old node" designs are primarily about mass-manufacturing. They probably can beat an FPGA on cost and margins once mass-produced. But for performance and power efficiency you've got to be way better: maybe 28nm- or 22nm-class or better to beat the GPUs (or even standard CPUs like EPYC, with the SHA-256 accelerators built into it).

Also, you gotta beat your competition. If someone else makes a 10nm-class BTC ASIC (https://techcrunch.com/2018/01/31/samsung-confirms-asic-chip...), then your 28nm or 22nm design is obsolete.


A 65nm or even 130nm Bitcoin ASIC handily beats a 16nm GPU at mining, in terms of both perf per watt and perf per dollar. And by handily I mean one or two orders of magnitude.


> it's basically two rounds of SHA-256.

There's a bit more to it; there are clever optimisations you can do (based on the Merkle–Damgård structure of SHA-256 and the format/semantics of the data being hashed): http://www.mit.edu/~jlrubin/public/pdfs/Asicboost.pdf

https://arxiv.org/pdf/1604.00575.pdf
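
Asicboost itself is subtler, but a simpler member of the same family is midstate caching: since SHA-256 is Merkle–Damgård, the compression of the header's first 64 bytes (which don't change while grinding nonces) can be done once and reused. A rough Python sketch (function names are mine):

  import hashlib, struct

  def scan_nonces(header76, target, limit):
      # the first 64 bytes of the 80-byte header are fixed: hash them once
      midstate = hashlib.sha256(header76[:64])
      tail = header76[64:]  # remaining 12 bytes, before the 4-byte nonce
      for nonce in range(limit):
          h = midstate.copy()  # cheap copy of the internal compression state
          h.update(tail + struct.pack("<I", nonce))
          digest = hashlib.sha256(h.digest()).digest()
          if int.from_bytes(digest, "little") < target:
              return nonce
      return None

In hardware the same observation shaves one of the three compression rounds off every attempt.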


Yeah, I conveniently avoided talking about that, but that is more like exploiting a weakness in the Bitcoin protocol; it wasn't really meant to be feasible (and it made many people angry when it was patented). That is innovation, however, I grant you that.


You could support that by pointing out what you feel the actual innovations involved in Bitcoin hardware are.


Faith routing, amongst those who are underserved or disenchanted with incumbent faith routing providers.


Could you elaborate on that? I've never heard the term and Googling isn't being helpful.

https://www.google.com/search?q="faith+routing"+bitcoin

https://www.google.com/search?q="faith+routing"+cryptocurren...


If you replace "innovations" with "behaviors" you're on to something.

OP is catching flak for the comment, but Filecoin and others are doing cool things exploring the possibilities of using incentives to drive massive resource re-allocations.


Innovations?


How long will it take till I can buy GPUs again?


GPUs are no longer used for Bitcoin mining. The recent GPU shortage was caused by the rise of thousands of little altcoins. It's obviously unsustainable; it's going to end in a big crash, and all these GPUs will flood the market. It's hard to predict when that will happen (I'm guessing this year), but if you're patient you will be able to buy GPUs very cheaply.


I wouldn't consider Ethereum to be "a little altcoin". While there may be a correction in the market, I don't think it will be "Bitcoin vs. everything else"; at this point it would probably be all cryptocurrencies, or something more specific (individual coins, or classes, but probably not "scrypt-based coins as a class").


Michael Bedford Taylor misrepresents the most important aspect of these "ASIC clouds" and the Bitcoin algorithm: the puzzle at the heart of the algorithm is incredibly simple. Every implementation of a Bitcoin miner simply generates a bunch of random guesses at the hash puzzle.

On page 60, in graph (a), Professor Taylor uses distorted, manipulative log charts for financial data.

The chart on page 61 is the most insanely manipulative chart I could ever imagine to represent the history and progression of Bitcoin mining in relation to hardware speed.

Try charting:

  watts per block over time

  ROI in BTC for a Core i5 over time

  newly minted coins per user over time

Michael also either doesn't understand, or chooses to misinform the reader and the journal about, "computational demand scaling with the number of users".

The Bitcoin network grows increasingly inefficient with any additional hash power, and as more users use it, it clogs due to limited bandwidth and the inefficiencies of the algorithm. Additional hardware speed does nothing to scale with network growth; actually, the complete opposite occurs, as it takes more (energy, computing resources) to do the same thing (transactions per second) for less (rewards per user). All of these aspects point towards a system of rules that exploits new, uninformed users to benefit "early adopters".

Sad but not surprising to realize this guy is a professor at the University of Washington.

Is Michael Bedford Taylor promoting something here through intentional misdirection and omission of basic facts to manipulate readers?

Would Professor Michael Bedford Taylor be under a conflict of interest if he sells Bitcoins?


Using log charts is appropriate for data that spans multiple orders of magnitude. I don't know why you complain about distortion, unless you think the data itself is wrong?


The log chart is used to hide volatility.


I guess it depends on how you're used to eyeballing volatility, but for relative swings (percentages) log charts are actually best, since constant vertical distances on the chart correspond to constant percentage changes.
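
A quick numeric illustration (my example, not the parent's):

  import math

  # equal vertical distances on a log axis are equal percentage moves
  a = math.log10(200) - math.log10(100)    # a +100% swing starting at 100
  b = math.log10(8000) - math.log10(4000)  # a +100% swing starting at 4000
  print(a, b)  # both ~0.30103, so the two swings plot at the same height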


The built-in difficulty adjustment, which rises in proportion to how fast new blocks are found, is well known.
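
For reference, a rough sketch of the retargeting rule (a simplification of Bitcoin's actual consensus code, from memory):

  def retarget(old_target, actual_seconds):
      # every 2016 blocks, rescale the target so blocks average ~10 minutes
      expected = 2016 * 600
      # the adjustment is clamped to a factor of 4 in either direction
      span = min(max(actual_seconds, expected // 4), expected * 4)
      return old_target * span // expected  # more hash power -> lower target (harder)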


To whom?

This is an academic article; making that assumption would be negligent.


Pages 60 & 61? I count only 9 pages.


The pages have embedded page numbers, implying it's an excerpt from a magazine.



