
Need more time to think this through. A few comments:

1. The problem space being solved by proof-of-history needs special mention: it assumes Byzantine faults. (My article does not.)

2. If I understood it correctly, the idea behind proof-of-history is that you perceive the flow of time in "windows". Each window is linked to the state of the previous window (à la blockchain) and has enough randomness built into it that predicting a window's attributes is vanishingly unlikely. When a new event is generated, you declare that event to be part of a certain window. You cannot fraudulently backdate your event, because later windows have already incorporated the state of that window as it was before your event; and you cannot future-date your event, because you cannot predict the randomness. (A toy sketch of how I picture this is at the end of this comment.)

3. On the face of it, this is a clever idea. But you mentioned "scalable", and I wonder how you would deal with the ordering of events that are rightfully binned to the same window. Wouldn't you end up with "concurrent events" and find yourself back at square one?

4. At some level, this design can be classified as a causally consistent system. In a non-Byzantine world, causally consistent systems can tolerate network partitions. But here, you assume the systems all have access to the same windowing of the flow of time.

Apologies if I grossly misinterpreted the article.
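
To make 2 concrete, here's a toy sketch of how I picture it (Python; my own toy, not Solana's actual scheme): a chain of hashes stands in for the flow of time, and mixing an event's hash into the chain pins it to a point you can neither backdate nor future-date.

    import hashlib

    def tick(prev_hash: bytes) -> bytes:
        # advance "time" by one step; has to be computed sequentially
        return hashlib.sha256(prev_hash).digest()

    def record_event(prev_hash: bytes, event: bytes) -> bytes:
        # mix the event into the chain; every later hash now depends on it
        return hashlib.sha256(prev_hash + hashlib.sha256(event).digest()).digest()

    state = b"\x00" * 32                # genesis
    for _ in range(1000):               # 1000 sequential ticks of elapsed "time"
        state = tick(state)

    state = record_event(state, b"pay alice 5")   # event pinned after tick 1000

    for _ in range(1000):               # later ticks commit to the event
        state = tick(state)

    # backdating fails: earlier ticks were already published without the event;
    # future-dating fails: you can't predict what the later hashes will be.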



Yes, 1 seems to be the most important difference, with 4 as one of its most important consequences. It's definitely intended to be a solution to Byzantine faults (or double payments as they would show up in practice if the transactions are financial!).

I think your summary in 2 is accurate. The answer to the scalability question in 3 seems to be that verifiable delay functions are always costly to compute the first time, but can be checked in parallel -- like a maze that takes time to solve the first time, but can be checked instantly simply by looking to see whether the proposed solution crosses any walls.
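
Here's a rough sketch of that property (toy Python with a SHA-256 hash chain standing in for the VDF; not Solana's actual code): producing the chain is inherently sequential, but a verifier can cut the recorded chain into segments and re-check each one independently, so verification fans out across cores.

    import hashlib

    def tick(h: bytes) -> bytes:
        return hashlib.sha256(h).digest()

    # producing the chain is strictly sequential: each hash needs the previous one
    chain = [b"\x00" * 32]
    for _ in range(100_000):
        chain.append(tick(chain[-1]))

    # verifying is embarrassingly parallel: each segment is checked independently
    # (plain loop here; each segment could go to its own core or GPU lane)
    def verify_segment(start: bytes, expected: list) -> bool:
        h = start
        for e in expected:
            h = tick(h)
            if h != e:
                return False
        return True

    workers = 8
    n = len(chain) - 1
    step = n // workers
    segments = [(chain[i], chain[i + 1:i + 1 + step]) for i in range(0, n, step)]
    assert all(verify_segment(start, exp) for start, exp in segments)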

It seems to me like a reasonable technological solution to time-ordering transactions in a decentralized ledger subject to Byzantine faults, but I'm not an expert here. It's at the core of the Solana layer-1 blockchain. Solana took it on the chin in the FTX collapse, because FTX had backed Solana heavily and made a large investment in its ecosystem. I wonder whether that was an exogenous shock that makes Solana a relatively good investment opportunity at the moment. The ability to do verification in parallel is what's behind the claimed 100k transactions per second, which is on par with what the credit card companies can do.


net TPS for a distributed, BFT system is going to be bottlenecked by the cost of information exchange between participating nodes, which is a function of protocol design and, ultimately, network bandwidth/latency

any basic server-class hardware can saturate normal network capabilities with no special effort: a single core and a few hundred megabytes of system memory are all you need to push 1Gbps, probably even 10Gbps
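
back-of-envelope, with numbers i'm just assuming (link speed, per-transaction size, protocol overhead -- none of these are from the solana docs):

    link_bps = 1_000_000_000   # assume a 1 Gbps link
    tx_bytes = 250             # assume ~250 bytes per transaction
    overhead = 3               # assume 3x for gossip/retransmission/consensus chatter
    tps = link_bps / 8 / tx_bytes / overhead
    print(f"~{tps:,.0f} tps")  # ~166,667 tps of raw transaction traffic on one commodity link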

but solana defines hardware requirements for validator nodes https://docs.solana.com/running-validator/validator-reqs that stipulate, at minimum, 12 physical cores, 128GB of RAM, and independent PCIe NVME SSDs for specific types of persisted state

even accounting for the additional work you need to do in a trustless/open/BFT/etc. protocol, these requirements are orders of magnitude beyond what should be necessary, and pretty clearly indicate that solana is very much at the "naive prototype" stage, technically

also, fwiw, solana is not doing the claimed theoretical maximum ~700k TPS, or even 100K TPS, but just 3k TPS, as reported by https://explorer.solana.com



