So this is basically a blockchain, but with transactions instead of blocks, and a diverging-then-converging graph instead of a linear sequence of blocks (like a real family graph instead of the one-parent-one-child families common in blockchains). Looks nice; what are the problems with the approach? (The paper only lists the benefits.)
There is no way to have a controlled release of new coins into the system. For that you need a blockchain that establishes a consensus on the time transpired and on the total economic resources being contributed, which allows the share of newly generated coins each participant receives per unit of time to be proportional to the share of the total economic resources they contribute.
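To make that proportionality concrete, here is a minimal sketch with made-up numbers; the participants, the per-block issuance, and the use of hashpower as the "economic resource" are all my assumptions, not anything from the paper:

```python
# Hypothetical illustration of proportional issuance: each participant's share
# of newly minted coins per unit of time equals their share of the total
# contributed resources (here, hashpower). All numbers are made up.

coins_minted_per_block = 12.5  # assumed issuance per block, not from the paper

contributed_hashpower = {      # hypothetical participants
    "alice": 40.0,
    "bob":   35.0,
    "carol": 25.0,
}

total = sum(contributed_hashpower.values())
for name, power in contributed_hashpower.items():
    expected_reward = coins_minted_per_block * power / total
    print(f"{name}: {power / total:.0%} of resources -> "
          f"{expected_reward:.3f} coins per block in expectation")
```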
There is no mechanism to link the cost of the proof of work generated to the value being transacted. With a blockchain, scarcity of space per block leads to a fee market forming, and fees paid increasing as the value contained per transaction increases. This leads to security (proof of work) increasing in proportion to the value that needs to be protected.
I'd like to direct interested readers to a thread containing a link to a draft paper I wrote about addressing these issues in a blockless, DAG-based cryptocurrency:
>With a blockchain, scarcity of space per block leads to a fee market forming, and fees paid increasing as the value contained per transaction increases.
Note that this is not currently the case in Bitcoin: transaction fees have gone up with the current scarcity caused by the arbitrary block size limit, but they are still a pittance compared with the block reward, which is the real incentive for mining (though that will not always be the case as the block reward diminishes in the future).
This is something of a sore point for the Bitcoin community, as a large (probably not a majority, but large) portion of the user base / miners / nodes does not think scarcity of block space is a good idea at current transaction levels.
Also, Ethereum (which at the moment uses a blockchain and mining mechanism very similar to Bitcoin's) does not impose a block size limit, but rather leaves it up to the miners to decide on the "gas" limit (they have a computational limit rather than a block size, but it can be viewed as a parallel).
0.9 is not insignificant, but compared with the block reward it is small; the block reward is the true (current) incentive providing block security, and that is what was being discussed.
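For a rough sense of scale, here is the arithmetic, assuming the 0.9 figure above is average fees per block in BTC and assuming a 12.5 BTC block subsidy (the subsidy value is my assumption, not stated in the thread):

```python
# Back-of-the-envelope comparison of fee revenue to the block subsidy.
# 0.9 is taken as average fees per block (in BTC); the 12.5 BTC subsidy
# is my assumption about the current reward era.

fees_per_block = 0.9
block_subsidy = 12.5

total_revenue = fees_per_block + block_subsidy
print(f"fees are {fees_per_block / block_subsidy:.1%} of the subsidy")
print(f"fees are {fees_per_block / total_revenue:.1%} of total miner revenue per block")
```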
How is that a useless figure? You'd expect the ratio to be the same per block or per day, so they're comparable when talking about the ratio of fees to rewards.
Because we are talking about the security of blocks and block scarcity. There is no limit on space "per day"; it's all per block. Similarly, people don't make race attacks on a day, they make them on a block.
> There is no mechanism to link the cost of the proof of work generated to the value being transacted. With a blockchain, scarcity of space per block leads to a fee market forming, and fees paid increasing as the value contained per transaction increases. This leads to security (proof of work) increasing in proportion to the value that needs to be protected.
My understanding of proof of work is that it's used to limit the number of new blocks that get propagated through the network. Bitcoin automatically adjusts the difficulty such that it takes approximately 10 minutes to find a new block. If block creation intervals were lower it would compromise the security of the system and enable attacks with much less than 50% of the hash power.
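For reference, here is a minimal sketch of the retargeting rule being described. It is simplified: the real consensus code works on the compact target encoding rather than a floating-point difficulty value.

```python
# Rough sketch of Bitcoin-style difficulty retargeting (simplified; the real
# rule adjusts the compact "bits" target, this works on difficulty directly).

RETARGET_INTERVAL = 2016        # blocks between adjustments
TARGET_SPACING = 10 * 60        # seconds per block aimed for

def retarget(old_difficulty: float, actual_timespan: float) -> float:
    """Return the new difficulty after one 2016-block window.

    actual_timespan: seconds the last 2016 blocks actually took.
    """
    expected = RETARGET_INTERVAL * TARGET_SPACING
    # Bitcoin clamps the adjustment to a factor of 4 in either direction.
    actual = min(max(actual_timespan, expected / 4), expected * 4)
    return old_difficulty * expected / actual

# Example: blocks came in 20% too fast, so difficulty rises by 25%.
print(retarget(1_000_000.0, 0.8 * 2016 * 600))  # -> 1250000.0
```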
> If block creation intervals were lower it would compromise the security of the system and enable attacks with much less than 50% of the hash power.
Not really. The odds of an attacker successfully generating a double-spending block remain the same with a lower block interval. Many alternative cryptocurrencies have far shorter blocktimes: Litecoin has 2.5-minute blocktimes, Ethereum's is less than 30 seconds IIRC, and they don't have problems with rampant double spends.
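To illustrate why the per-block race odds depend on the attacker's hashpower share rather than the interval, here is a small sketch of the catch-up calculation from the Bitcoin whitepaper (my own illustration, not something from the linked material); note that the block interval never appears in it:

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability an attacker with hashpower share q ever catches up from
    z blocks behind (Satoshi's calculation in the Bitcoin whitepaper).
    The block interval does not appear anywhere in the formula."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for q in (0.1, 0.3):
    print(q, [round(attacker_success_probability(q, z), 4) for z in (0, 2, 6)])
```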
The problem with shorter blocktimes is that latency has a greater impact on mining profitability. A miner with a 600ms ping will lose ~0.1% of their revenue with a 10 minute blocktime, but will lose 2% of their revenue with a 30s blocktime.
This gives miners an incentive to centralize geographically to reduce their latency. No bueno!
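A crude back-of-the-envelope model of those revenue numbers, assuming the fraction of revenue lost is roughly the propagation delay divided by the block interval (my simplification, ignoring orphan/uncle rewards and other second-order effects):

```python
# If a miner hears about new blocks `delay` seconds late, roughly
# delay / block_interval of their hashing time is wasted on stale work,
# so revenue drops by about that fraction.

def revenue_loss(delay_s: float, block_interval_s: float) -> float:
    return delay_s / block_interval_s

for interval in (600, 30):
    print(f"{interval:>4}s blocktime: ~{revenue_loss(0.6, interval):.1%} revenue lost "
          f"with a 600ms propagation delay")
```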
> Not really. The odds of an attacker successfully generating a double-spending block remain the same with a lower block interval. Many alternative cryptocurrencies have far shorter blocktimes: Litecoin has 2.5-minute blocktimes, Ethereum's is less than 30 seconds IIRC, and they don't have problems with rampant double spends.
I based my statement on the following paper: Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections: https://eprint.iacr.org/2016/1159.pdf
> Unfortunately, recent research has shown that the Nakamoto consensus has severe scalability limitations [6], [25], [11], [18]. Increasing the system's throughput (either via an increase in block size or block creation rate) comes at the expense of security: under high throughput, Nakamoto's original guarantee no longer holds, and attackers with less than 50% of the computational power are able to disrupt the system. To avoid this, Bitcoin was set to operate at extremely low rates. The protocol enforces a slow block creation rate, and small block sizes, extending the blockchain only once every 10 minutes (in expectation) with a block containing up to 1 MB (roughly 2,000 transactions). Users must thus wait a long while to receive approval for their transfers.
Regarding Litecoin: it does have a lower block creation time of 2.5 minutes. However, if you look at the average Litecoin block size, it is around 15 kB, compared to ~950 kB for Bitcoin (which is basically exhausting its 1 MB limit): https://bitinfocharts.com/comparison/size-btc-ltc.html Considering the Litecoin network operates way below its maximum capacity, a double-spending attack is indeed unlikely. However, whether that security would hold up under full load remains to be seen.
Oh, I see what you mean now! The block propagation delay that larger/faster blocks would cause could allow attackers to double spend with less than 50% of the network, because the blocks of honest miners will occasionally be orphaned while the attacker's secret chain won't. The longer the delay, the more orphans, and the bigger the advantage. Is that the effect you are describing?
If you're interested in how big the delay is, you can check this out; it cites the paper you linked me, and I found it helpful.
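For intuition on the effect described above, here is a crude back-of-the-envelope model (mine, not the paper's analysis): if honest miners lose a fraction r of their work to delay-induced orphans while the attacker's private chain loses none, the break-even attacker share drops below 50% as blocks get faster relative to propagation.

```python
# Break-even attacker share under a crude orphan model: honest miners waste
# a fraction r of their work on orphans, the attacker wastes none, so the
# attack succeeds once q > (1 - q) * (1 - r), i.e. q = (1 - r) / (2 - r).
# Back-of-the-envelope illustration only, not the analysis from the paper.

def breakeven_attacker_share(delay_s: float, block_interval_s: float) -> float:
    r = min(delay_s / block_interval_s, 1.0)  # crude orphan-rate estimate
    return (1.0 - r) / (2.0 - r)

for interval in (600, 60, 30):
    q = breakeven_attacker_share(delay_s=10.0, block_interval_s=interval)
    print(f"{interval:>4}s blocks, 10s propagation delay -> "
          f"attacker needs ~{q:.1%} of total hashpower")
```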
The last point is contentious IMO. Why wouldn't there be a fee market with an unlimited block size? The miners would still want to make a profit, and would set fees to a level they deem profitable.
Seems to be about someone creating massive amounts of privkeys and messing with verification (the fact that 51% attacks no longer exist in this approach makes that irrelevant, however). I'm not an expert on cryptocurrencies, but I don't see how that issue could be any more prevalent here than in "standard" (Bitcoin-style) blockchains.
To expand on that: the only way I can imagine a Sybil attack here is if someone created a massive number of tiny transactions, and the fees required to get peers to validate them would make that approach infeasible.
Reminds me of a cryptocurrency for the gift economy that I'm working on. Each coin is unique and valued subjectively by each person. https://github.com/jchris/document-coin
I think it's a common idea. I thought of it before Bitcoin was a thing (public signed transactions) and have heard similar ideas many times. Still waiting for a working system, because at this point I'm pretty sure I'll never actually create one myself.
Note that this proposal is from two authors who are at the center of a huge controversy right now and who, depending on who you listen to, are likely to be removed from their current positions as Bitcoin contributors, along with several others, in the next few weeks. Given this, it seems likely that this proposal will go nowhere.
The problem comes down to this: at the current stage, an attacker could very easily amass 33% of the network's hashpower, because hashing only happens at the instants when new transactions are being added to the tree, and it is completed in a second on a normal laptop.
I was unable to find any information on how IOTA resolves this seemingly disturbing security issue on their website or in their whitepaper, but I did find the following information in two non-affiliated blogs (1, 2) after a lot of searching:
> Milestones: Milestone is a special transaction issued by a special node called Coordinator. The Coordinator is run by Iota Foundation, its main purpose is to protect the network until it grows strong enough to sustain against a large scale attack from those who own GPUs. Milestones set general direction for the tangle growth and do some kind of checkpointing. Transactions (in)directly referenced by milestones are considered as confirmed.
This means that IOTA in its current form does not provide any censorship resistance, since the path of the tree is centrally directed through a Coordinator node run by the IOTA Foundation. As such, IOTA is no more decentralized than an Apache Kafka cluster, or Ripple and their Unique Node List.
I would argue that this is crucial information a user needs to know, yet I have no idea how the average person is intended to learn about this, since it’s nowhere to be found in the IOTA whitepaper or on their website. (EDIT: Since this article was written, IOTA published a post regarding this matter https://blog.iota.org/the-transparency-compendium-26aa5bb8e2.... I responded to their post https://medium.com/@ercwl/hello-david-b77bbc62c457 )
They seem to have developed their own hash function, of the sponge family, called Curl, and are actually using the Winternitz (a more space-efficient variant of Lamport) signature scheme, which is a method of constructing a digital signature from hash functions alone. Cool.
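For the curious, here is a toy sketch of a Winternitz-style one-time signature built purely from a hash function. It is a simplification: real WOTS adds a checksum over the message chunks (omitted here, which makes this toy forgeable), and IOTA's actual scheme uses its own hash and parameters.

```python
# Toy Winternitz-style one-time signature: signing walks each hash chain
# forward by the chunk value, verification walks it the rest of the way and
# compares against the public key. Checksum omitted, so this is insecure.

import hashlib, os

W = 4                      # bits per chunk ("Winternitz parameter")
CHUNKS = 256 // W          # chunks needed to cover a 256-bit message digest

def H(data: bytes, times: int) -> bytes:
    for _ in range(times):
        data = hashlib.sha256(data).digest()
    return data

def keygen():
    sk = [os.urandom(32) for _ in range(CHUNKS)]
    pk = [H(s, 2 ** W - 1) for s in sk]      # each secret hashed 2^W - 1 times
    return sk, pk

def chunks(msg: bytes):
    digest = hashlib.sha256(msg).digest()
    value = int.from_bytes(digest, "big")
    return [(value >> (W * i)) & (2 ** W - 1) for i in range(CHUNKS)]

def sign(sk, msg: bytes):
    return [H(s, c) for s, c in zip(sk, chunks(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s, 2 ** W - 1 - c) == p
               for s, c, p in zip(sig, chunks(msg), pk))

sk, pk = keygen()
sig = sign(sk, b"hello tangle")
print(verify(pk, b"hello tangle", sig))   # True
print(verify(pk, b"tampered", sig))       # almost certainly False
```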
Not to argue semantics, but it isn't a chain. It may have blocks (each containing one transfer), but it is by no means linear. This lets the tangle achieve verification parallelization, in contrast to a blockchain's strictly sequential, synchronous ledger.
Stellar has nice properties, like a decentralised exchange built into the platform and native integration with existing financial institutions/other cryptocurrencies through anchors.
But I would not say it's blockchain-free. They close a "block" every 5 seconds. Depends on what you consider a block. They use boring stuff like PostgreSQL to store the data instead of reinventing everything.
The innovation of blockchains is not on the how-to-store-things side, but in how to keep a state of things that every node agrees on.
Stellar can store data on Postgres because it is just a small database of how much money each account has at each ledger. Past ledgers can be erased from the database (which makes it not a blockchain in any sense anymore).
Bitcoin has a history of all transactions organized in blocks not because it's a fancy new database technology, but because that turned out to be the better way to keep a synced state between nodes, kind of like an append-only log.
You could, if you wanted that, read the Bitcoin database and translate it into a set of rows of a who-has-how-much Postgres table.
Or, better, you could implement a Bitcoin client that stored its blocks as rows in a Postgres table, but I think it would be much more resource-intensive than the databases the Bitcoin clients are using today.
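As a sketch of that who-has-how-much reduction, with a hypothetical toy transaction history (a real version would walk Bitcoin's UTXO set rather than account balances, but the idea of collapsing history into current state is the same):

```python
# Fold a list of (sender, receiver, amount) transfers down to a single
# balance table. The rows at the end are what you would INSERT into a
# who-has-how-much Postgres table.

from collections import defaultdict

transactions = [                # hypothetical history, oldest first
    (None, "alice", 50.0),      # None = coinbase / newly minted coins
    ("alice", "bob", 20.0),
    ("bob", "carol", 5.0),
]

balances = defaultdict(float)
for sender, receiver, amount in transactions:
    if sender is not None:
        balances[sender] -= amount
    balances[receiver] += amount

for account, balance in sorted(balances.items()):
    print(account, balance)
```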
The Bitcoin blockchain is a compact serialization format used as the input to the proof-of-work function. The proof-of-work is the important part, and for that we need a canonical, agreed-upon serialization format where later blocks inherit PoW from earlier blocks.
A Bitcoin node can store blocks/transactions in whatever way it wishes -- Postgres, Dropbox, SQLite DB on a floppy disk -- as long as it's able to deliver them to other nodes in the canonical serialization format, because that format is needed to verify proof-of-work (and the signatures on transactions).
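A minimal sketch of why the canonical bytes matter: the proof-of-work check is just "hash the serialized header and compare against the target", so every node has to hash exactly the same bytes. The header and target below are toys, not the real 80-byte format.

```python
# Proof-of-work check over a serialized header: double SHA-256 the bytes and
# interpret the digest as a little-endian integer, as Bitcoin does. A real
# header packs version, prev-hash, merkle root, time, bits and nonce.

import hashlib

def pow_is_valid(serialized_header: bytes, target: int) -> bool:
    digest = hashlib.sha256(hashlib.sha256(serialized_header).digest()).digest()
    return int.from_bytes(digest, "little") <= target

# Toy "mining" loop over a fake header, just to show the check in action.
target = 2 ** 240            # absurdly easy target for demonstration only
nonce = 0
while not pow_is_valid(b"fake-header|" + nonce.to_bytes(8, "little"), target):
    nonce += 1
print("found nonce", nonce)
```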
They have "ledgers". Each ledger is based on the previous ledger, but the previous ones are discarded as soon as the next reaches a state of consensus.
Yeah, I'm done with this "scientific" tone, this PDF thing and so on. What is in this paper that couldn't be posted as HTML in a web page? How is PDF better compared to that?
He's right that those generally don't actually need PDF for non-print purposes, though: simple text and a few images are way more web-friendly as HTML. Sadly, only very few repositories make HTML versions available as well.
Try reading the HTML version of Fielding's paper and tell me it looks better than the PDF. HTML might have been invented for that purpose, but that doesn't mean it's still suitable for it. When you are writing a paper, you want to control the presentation, not just the content.
No, you don't: HTML has massive variation between devices and the software used to display it. Try setting your screen resolution to 640x480 and opening a web page, or, even worse, modifying the DPI. PDFs, on the other hand, specify exactly where to place each glyph (admittedly there is still variation between viewers, but it's much more consistent).
You sure do: the variation between devices is the manifestation of the ability to adapt the layout to the recipient's display resolution and size.
If anything, a PDF designed for A4/letter is going to be cumbersome to read on a (probably rather small) 640x480 display.
The sad state of PDF rendering on (most) e-book readers should be evidence enough.
More control over how it looks on the user's end with HTML than I have with LaTeX? Yeah, I don't think so. Why would everyone be using JS if you could do whatever you needed in HTML?
Do papers submitted to electronic-only repositories like ePrint really still have a "distribution model based around print"? They might if they are also submitted to traditional journals, and having a PDF version is desirable for various purposes, but IMHO it's not so clear-cut.
I don't know why you're getting downvoted. I've always wondered the same. Why do these bitcoin people always do this "white paper" thing as PDF? Is it because it's trendy?
They also depend less on browser quirks. Except in very extreme cases, PDFs look the same and work just as well (aside from form-filling, unfortunately) regardless of which viewer you're using.