This makes me curious – remember back in the aughts when SETI@Home and Folding@Home were popular? Were those adapted for GPUs, and did they see a huge acceleration in calculations?
If distributed computing had a fraction of time on the millions of GPUs mining cryptocurrencies, I can't imagine how close we'd be to curing disease and finding ET.
Many things are pointless if you start evaluating them by one person's standards. Let the market decide what to focus on. Otherwise, you can always rely on government funding for less market-focused initiatives like fundamental research.
Gridcoin is the biggest I'm aware of. Most people who mine it (that I've spoken to) think of it as a way of subsidizing their hardware and electricity, not as a get-rich-quick scheme. Since it lets you choose what project to work on, you can choose one that you approve of and that is well-suited to your hardware.
I was considering buying some old servers to mine it. I'd have pretty much broken even back when it was $0.10/grc, but it would have taken me ages to make back my investment.
The problem with replacing the general-purpose proof of work in cryptocurrencies (finding partial SHA256 collisions, in Bitcoin's case) with something more useful is that the attributes of a good PoW computation are hard to find in real-world distributed problems.
In particular, you want a problem with the following attributes:
- must derive somehow from the block data you're trying to mine; otherwise you could reuse your work for a different block and make double-spend attacks trivial. It's very important that once a new block is mined everybody else must start from scratch on the next one; otherwise you could "premine" an arbitrary number of blocks and later append them at an arbitrary position in the block "tree", potentially rewriting history.
- difficulty should be easily adjustable to account for the current "hashrate"; otherwise your block rate will go out of whack as the amount of available computing power changes. It also means that you should be able to estimate ahead of time how difficult a problem is and the average amount of processing power required to solve it.
- easy to validate: the nodes of the network should be able to check that the proof of work is valid using a tiny fraction of the computing power necessary to actually produce the proof (finding hash collisions is hard, verifying them is comparatively trivial).
- doesn't require access to a centralized resource. If you need to connect to some central repository to fetch the work set, then not everybody is on an equal footing: you have a single point of failure, and some miners could have privileged access to the work data.
It's very difficult to find real-world problems that have all these attributes (the toy sketch below shows how Bitcoin's artificial hash puzzle satisfies each of them).
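For concreteness, here's a minimal sketch in Python of a hash-based PoW in the Bitcoin style. It's a toy, not Bitcoin's actual protocol: the header encoding, field names, and retarget rule are simplified assumptions (real Bitcoin double-SHA256es an 80-byte binary header and retargets every 2016 blocks), but it shows how the artificial puzzle gets all four attributes almost for free.

    import hashlib
    import json

    def block_hash(prev_hash: str, tx_root: str, nonce: int) -> int:
        # The hash is bound to the previous block and this block's transactions,
        # so the work can't be reused for a different block (attribute 1).
        header = json.dumps([prev_hash, tx_root, nonce]).encode()
        return int.from_bytes(hashlib.sha256(header).digest(), "big")

    def mine(prev_hash: str, tx_root: str, target: int) -> int:
        # Expensive part: try nonces until the hash falls below the target.
        # Expected work is roughly 2**256 / target, so difficulty is tunable (attribute 2).
        nonce = 0
        while block_hash(prev_hash, tx_root, nonce) >= target:
            nonce += 1
        return nonce

    def verify(prev_hash: str, tx_root: str, nonce: int, target: int) -> bool:
        # Cheap part: a single hash checks the proof (attribute 3), and no central
        # work server is involved; everything comes from the block itself (attribute 4).
        return block_hash(prev_hash, tx_root, nonce) < target

    def retarget(old_target: int, expected_secs: float, actual_secs: float) -> int:
        # Toy difficulty adjustment: scale the target so the block rate tracks
        # the available hashrate.
        return int(old_target * actual_secs / expected_secs)

    if __name__ == "__main__":
        target = 2 ** 244  # very low difficulty so the demo finishes instantly
        nonce = mine("prev_block_hash", "merkle_root_of_txs", target)
        assert verify("prev_block_hash", "merkle_root_of_txs", nonce, target)
        print("found nonce", nonce)

The hard part, as the list above says, is finding a computation that is also useful while keeping that shape: tied to the block, smoothly tunable, cheap to verify, and available to anyone without a central work server.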
Oh, I took it very seriously because it's something I've given quite a lot of thought. I'm not a huge believer in cryptocurrencies, but I would be a lot more optimistic about them if they weren't wasting so much energy. Harnessing all that processing power to do something useful would be amazing. Unfortunately, so far the most useful PoW people have managed to implement is something like "compute very large prime numbers", which I suppose is mildly more useful than finding SHA256 collisions, but not by a very large margin.
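To make that concrete, here's a toy prime-hunting PoW in Python, loosely inspired by Primecoin. The linking rule (candidates of the form k * H(block) + 1) and the min_bits difficulty knob are simplified assumptions, not Primecoin's actual chain rules; the point is just to show why attributes 2 and 3 get harder once the work is "useful".

    import hashlib
    import random

    def is_probable_prime(n: int, rounds: int = 20) -> bool:
        # Miller-Rabin probabilistic primality test.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def _seed(block_header: bytes, min_bits: int) -> int:
        # Derive the search seed from the block, so the work is bound to it (attribute 1).
        h = int.from_bytes(hashlib.sha256(block_header).digest(), "big")
        return h >> max(0, h.bit_length() - min_bits)  # keep the top min_bits bits

    def mine_prime(block_header: bytes, min_bits: int = 80) -> int:
        # Find k such that k * seed + 1 is a probable prime of at least min_bits bits.
        seed, k = _seed(block_header, min_bits), 1
        while not is_probable_prime(k * seed + 1):
            k += 1
        return k

    def verify_prime(block_header: bytes, k: int, min_bits: int = 80) -> bool:
        # Verification is one primality test: cheaper than mining, but nowhere near
        # the one-hash-vs-billions asymmetry of SHA256 (attribute 3 weakens), and the
        # only difficulty knob here is the coarse min_bits size (attribute 2 weakens too).
        return is_probable_prime(k * _seed(block_header, min_bits) + 1)

Primecoin's real scheme hunts for chains of primes (Cunningham and bi-twin chains) whose origin is tied to the block hash, which gives it a finer-grained difficulty measure, but the useful output is still just "large primes" – roughly what I mean by "not by a very large margin".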
I was on the Curecoin team as of last year. It's a neat project, but they weren't able to manage a working relationship with Stanford.
There is a unique opportunity here to use digital currencies to fund scientific research through the use of a reward mechanism and distributed ledgers.
Our project is hoping to take a similar idea to scale (research/project-based "work") by utilizing open datasets posted on decentralized technology. We hope to build relationships with institutions, non-profits, and the public sector to track the economic and social value of campaigns similar to the Folding@Home and SETI@Home projects, but with the capacity to on-board projects as they appear.
I suggest sending the leaders of aforementioned institutions/non-profits/public sector establishments a healthy dosage of LSD if you hope to persuade them to work with you.
That doesn't jibe with my memory - the first codes running on GPUs at the supercomputer centers that had privileged early access were running MD, without any GPU-accelerated LA libraries.
My recollection (which is admittedly fuzzy) is in line with yours. MD was the first application of GPU-accelerated computing that I recall (partly because NVidia seemed to push that). BLAS, LAPACK, etc. got GPU-enabled later.