So point them out, please. One that is efficient on all levels.


The reader will find dozens of suggestions in this thread about how coin X is more "efficient".

Note that efficiency would mean producing the same properties with fewer resources, not merely using fewer resources. While experimentation is certainly desirable, it's a fallacy that any coin has actually reproduced bitcoin in a more efficient way.

To name a couple,

- Bitcoin had what some call an "immaculate conception": there was no expectation that such a system would work in the first place, and the founder was anonymous. This creates a very different history in comparison with the founder-led, pre-mined, frothy market launches of today.

- Bitcoin has stood the test of time. With each year in production that passes, there is more confidence that the system and its specific implementation are sound.

Once these other coins have stood in the market for 10 years straight and continued to retain confidence in their protocols, the comparisons can start.


> have stood in the market for 10 years straight

At which point, Bitcoin proponents will up the requirements again. This is an excuse people use to avoid talking about the fact that Bitcoin has spent the last 2-3 years consistently proving that it doesn't scale and that it is unusable as a currency.

> had what some call an "immaculate conception"

Or, it was a good first try that needs iteration now that we understand that hyper-deflation is a bad thing for a currency, that pseudonymous addresses don't provide any real privacy guarantees at all, and that transaction speed and cost have to be optimized for early in a coin's life -- those issues can't be fixed by pasting an ad hoc, unproven non-blockchain solution on top of the coin 10 years later.

The weird, religious loyalty people have to reinterpreting Bitcoin's flaws as strengths is setting the entire crypto ecosystem back. If you want to appeal to 'stability', then banks have been around for centuries. But if you're trying to build something new, you're going to have to occasionally innovate and iterate on the thing you're building.


> The weird, religious loyalty people have to reinterpreting Bitcoin's flaws as strengths is setting the entire crypto ecosystem back.

I’d say it’s not weird or religious - it’s a predictable outcome of the incentive structure of the currency.

A key criterion for a better alternative is that it doesn’t incentivize hagiographic thinking.


> Or, it was a good first try that needs iteration now that we understand that hyper-deflation is a bad thing for a currency

In what sense is bitcoin "hyper-deflation"ary? I think you are thinking of the word "deflation", but I know that doesn't sound as smart.

>The weird, religious loyalty people have to reinterpreting Bitcoin's flaws as strengths is setting the entire crypto ecosystem back. If you want to appeal to 'stability', then banks have been around for centuries. But if you're trying to build something new, you're going to have occasionally innovate and iterate on the thing you're building.

There is nothing stopping anybody from making something better, so go on.


> there is nothing stopping anybody from making something better

They already did. Half the crypto ecosystem is already better than Bitcoin if you're trying to build a functional currency. We're talking about a coin that doesn't have any privacy guarantees; it's not exactly hard to improve on that.

When I talk about religious loyalty, what I'm getting at is that it turns out the Bitcoin ecosystem doesn't seem to care that much about stuff like privacy, and whenever anyone brings up Bitcoin's flaws, or the fact that it has been outclassed by other coins, suddenly the conversation is all about "stability" and the trust of having an old coin that hasn't fixed its problems in 10 years.

It turns out that having a fixed number of coins that fluctuate wildly in price is very bad for building a currency, but we have people on here arguing that it's a good thing. It turns out that PoW is unscalable and prohibitively expensive, which is a barrier to real decentralization, but we have people on here arguing that's a good thing. It's understandable that Satoshi didn't know in advance every problem, but it's not understandable that in all of the years since then, the Bitcoin community has largely failed to evolve or learn anything about the initial flaws that the coin had.

And this is honestly a tricky conversation to have because we have to keep dancing around the fact that Bitcoin is a speculative asset and that very little of the community cares about making a currency at all. But the community won't acknowledge that, so instead we get a lot of weird deflection around "stability" and we all have to pretend that the community is in good faith actually trying to create an alternative to cash, and that people genuinely believe that Bitcoin's tech is the best way to do that.

But obviously that's not what's happening. Nobody would really look at the state of Bitcoin and argue that it's best-positioned to replace traditional currency. It's worse than cash for privacy, it has higher transaction fees than almost any other payment method, it doesn't scale environmentally, and it takes 10 minutes to add a new block. But... it is for the moment an excellent speculative investment, because the speculative market is irrational and doesn't care about technology. And the speculative market benefits from pretending that it exists for some kind of grand purpose rather than just as an end in itself, even if doing so makes it harder to accomplish any of the original goals behind the coin.


> It turns out that having a fixed number of coins that fluctuate wildly in price is very bad for building a currency, but we have people on here arguing that it's a good thing.

Sorry, is that just because you say so? The success of bitcoin is the only thing I need to point to contra every point you make. If it's so bad, why does it work? Maybe that won't always be the case, but so far the market has deemed it sufficient. Nobody is making anyone use bitcoin; it's just the best Schelling point.

BTW, w.r.t. privacy -- chaumian coinjoin has made great strides in popularity in the last year. It may not be Zcash, but it is still quite possible to use bitcoin in a private manner.


> if it's so bad, why does it work?

As a currency? It doesn't. Do people actually think that Bitcoin is succeeding as a currency right now? If you want to talk about the market deeming something sufficient, the market has broadly, almost universally decided that credit cards are preferable to cryptocurrency.

As a speculative investment? Because of what I said in my last paragraph.

> chaumian coinjoin

I've seen multiple vulnerabilities pop up with coinjoin wallets, plus the process costs extra money to do, plus it's not network-wide so the coins involved can still be blocked from future transactions. I don't understand why a community would go down that route when solutions like zero-knowledge proofs already exist and have a better track record. I mean, that's supposedly the whole thing with Bitcoin, right? Stability and security?

The single advantage that coinjoins have is that they don't require iterating on Bitcoin itself, which the community seems to be desperate to avoid. This is something that pops up a lot with Bitcoin-centered "solutions". We have coinjoin, because having a coin with real privacy guarantees would require iterating on Bitcoin and we can't have that. We have the lightning network (which also has serious privacy problems) because building a completely separate layer that operates off of the blockchain is preferable to the community over fixing the problems with transaction delays and fees in the coin itself.


> I don't understand why a community would go down that route when solutions like zero-knowledge proofs already exist and have a better track record

You don't understand why a community at a Nash equilibrium wouldn't jump at the opportunity to break consensus? Bitcoin is favorable because it is, relatively speaking, the minimum viable collection of traits needed to function as a cryptocurrency; reduced complexity leaves less attack surface. If ZK proofs become obviously necessary or superior to coinjoin because of coin blacklisting, people will switch to Zcash or whatever else uses them, but in a world where regulators or businesses block coinjoined coins, I don't know why you think ZK coins would be any better off. In any case these alternatives still exist and people use them, so why does it even matter? If you're right, everyone will see, and you will probably be rich.

> the market has broadly, almost universally decided that credit cards are preferable to cryptocurrency.

man in 1910: the market has universally declared horses superior to automobiles


> reduced complexity leaves less attack surface

Reduced attack surface only matters if the remaining surface is secure. Privacy-wise, the comparison here is that you've got a door with no locks, and you're arguing against putting a lock on it because "that'll make it more complicated, and then people will get in and steal my stuff."

There is no particular reason to be confident that coinjoins will protect your privacy in the long run. They've broken in the past, and they're easier to regulate than something built into the core design of the coin. This is a situation where you need a more fundamental guarantee, because Bitcoin at its current complexity level is not capable of protecting your privacy well, and I don't see strong evidence that coinjoins will solve that. You've got to put a lock on your door, even if it makes your door more complicated.

Even disregarding the fact that coinjoins are unproven, paying coordinator fees every time you want to make a private wallet is a bad solution anyway. Even if you could be confident that coinjoin wallets weren't going to leak information, you still wouldn't want a solution that had those fees, because we don't want a world where only rich people have privacy.

> at a nash equilibrium

I think the whole point of what I've been saying is that the Nash equilibrium of Bitcoin is terrible. If the crypto market stays in the state it's in, it will not become a viable currency. So yeah, I don't understand why a community in a Nash equilibrium where >50% of the network is subject to interference by the PRC, where transaction fees are spiraling out of control, and in which tracking is already a serious problem -- I don't understand why a community would be happy to stay in that state, complexity or equilibrium be damned.

That's what I've been saying. Bitcoin as it exists today is terrible, and making it better is going to require actual iteration, not just sitting around and trying to pump the market so somebody can buy a Porsche.

> why does it even matter?

Because some of us would like a functional, private, inexpensive online payment system at some point; the entire crypto market is intertwined in the public eye; and silliness like speculative investing and NFTs distorts that purpose and gets in the way of other coins doing anything useful.

There's a reason why cryptocurrencies are now classified by the IRS as an investment. If you're trying to build a currency, then that's a bad outcome, but none of that matters if the only thing you care about is "becoming rich" instead of building something that's actually useful for the world.

> man in 1910: the market has universally declared horses superior to automobiles

Well OK, then don't act like a 5-10 year temporary consolidation around Bitcoin means that the crypto market has decided that Bitcoin is the technologically superior option for a currency, rather than a temporarily valuable speculative asset. In particular, don't act like Bitcoin prices prove its superiority during the same week that heckin Dogecoin prices spike.

You can't have it both ways here: either the market is always rational or it's not.

If the market is always rational, then credit cards are better than cryptocurrency in its current state. If the market isn't always rational, then speculative investment based on FOMO does not indicate anything about Bitcoin's quality as a tool for transactions.


The "market" sometimes drops incredibly cancerous waste chemicals in the water supply.


People often say BTC has stood the test of time. What was the test, exactly? A test to see how many people are greedy? To see how much energy can be wasted? How much more we can pollute the environment? How it cannot be used as a currency? Yes, I see, it is still here. But in my view, it shouldn't be. There needs to be another way to solve ledger consensus other than PoW.


Why didn’t Peercoin and Novacoin stand the test of time? Both were PoS and had high market caps.


> it's a fallacy that any coin has actually reproduced bitcoin in a more efficient way

It's a fallacy to consider bitcoin an a priori better system than either traditional currencies or other coins when it has failed to achieve the hoped-for results.


I like Tezos. It has a very interesting and well-developed smart contract technology stack (https://tezos.com/developer-portal) and is proof of stake. You can run a node on a cheap laptop.


First of all, I want to preface my comment with the disclaimer that proof of stake is nonsense in many ways (it's vulnerable to coordinated bribery attacks -- we don't know how to coordinate well enough to execute them yet, but the math checks out -- and there's also the plutocracy issue...).

However, it is a pretty expedient way to get something out the door fast (it's literally the simplest form of anti-sybil you can make, and a cryptocurrency is anti-sybil + consensus).

So, given that we accept proof of stake as a good way to prototype different approaches, a cryptocurrency that meets your criteria is the Avalanche cryptocurrency (AVAX).

The way it works is via "stochastic gossip": whenever there is a conflict (a double spend, or some kind of decision problem where a node is unsure what reality is), a consensus "game" is started. The way the game works is that you sample some reasonable number of peers in the network (say 30-40 peers) and ask them "is A or B the legal state of the world?"; an honest participant will naively believe whatever the majority decision is.

Due to how the stochastics work out, the network will converge if you repeat this game sufficiently often. The idea is that if initially 60% think A is the correct transaction, then after a few rounds (they do 17 rounds, which is overkill -- you get something like nine nines of confidence) the whole network will have come to that conclusion: each time you run a round of the game, the majority will _probably_ grow, and eventually only the byzantine nodes are still saying B; if they are not the majority, then all conflicts are resolved.
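For concreteness, here is a toy Python simulation of that repeated-sampling dynamic. The parameters (network size, sample size, round count) are illustrative, not Avalanche's actual values:

    import random

    # Every node polls k random peers each round and adopts the sample
    # majority; repeated rounds amplify the initial majority.
    def repeated_sampling(n=1000, frac_a=0.6, k=30, rounds=17, seed=1):
        random.seed(seed)
        prefs = [random.random() < frac_a for _ in range(n)]  # True = "A"
        for r in range(1, rounds + 1):
            prefs = [sum(prefs[j] for j in random.sample(range(n), k)) * 2 > k
                     for _ in range(n)]
            share = sum(prefs) / n
            print(f"round {r:2d}: {share:.3f} of nodes prefer A")
            if share in (0.0, 1.0):
                break

    repeated_sampling()

Run it and the 60/40 split typically collapses to unanimity within a handful of rounds.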

Now bear in mind, there are some drawbacks: if someone is stubborn or does the opposite of what they should, then maybe they can have some outsized influence. But in general this works pretty well, and you can have a _lot_ of nodes participate in the network (much, much more than in Nakamoto consensus, due to the different nature of the consensus mechanism). Furthermore, you get finality in 3 seconds (even with the 17 rounds), which is also quite nice if you want to reach VISA-level transaction volumes.

A validator in the Avalanche network doesn't need crazy hardware either; a Raspberry Pi is fine. The limitation is that you need the machine to be online 80% of the time it stakes for.

That is one solution. There are others. For example Amoveo, which is proof of work BUT solves the second-order byzantine fault tolerance problem (also known as the "oracle problem"), which means you can account for externalities (like energy usage) and throttle the mining rewards or adjust parameters in response.

I am working on a very raw but next-gen project myself, called datalisp. (I wrote a very bad "whitepaper" in one sitting a couple of weeks ago; it's at datalisp.is, and there is a telegram channel. We're trying to figure out a lot of hard problems, and we'd rather show than tell, so if the whitepaper is somewhat up your alley, there is a lot more elaborate stuff happening in the channel.)

The point of what I am saying is: if we want to kill bitcoin, the simplest way is to just make a better solution and change the narrative around the whole scene. I hate all this proof-of-stake Ponzi nonsense; if a cryptocurrency is well designed, there is no advantage to being an early adopter (i.e. we solve byzantine fault tolerance _AND_ the tragedy of the commons).

Please don't shit all over those of us who want to solve hard problems in a good way just because there's a bunch of assholes trying to get rich quick.


Bribery can be solved with slashing, by making it economically unviable to give in to bribery.

Regarding plutocracy, it depends on proof-of-stake vs delegated proof-of-stake. In one, everyone can become a staker and earnings are spread across all of them; in the other, some stakers are elected and earnings are concentrated among them.

Proof-of-work has inequality issues:

- economies of scale for those that have the capital to rent a power plant and a large datacenter in a location with cheap electricity and/or cheap cooling.

- access to the latest ASICs / GPUs at scale.


Slashing doesn't solve anything; I don't think you've studied how the attack would work.

Proof of work that doesn't solve second order byzantine fault tolerance cannot account for externalities and suffers from the issues you mention.

I agree that neither "solution" is all that great, my project is neither proof of work nor proof of stake.


> I don't think you've studied how the attack would work.

It's my day job to implement proof-of-stake https://github.com/status-im/nimbus-eth2/pull/865


How is that pull request relevant to our chat here?

By the way, I don't think it's a good argument to say that your job depends on proof of stake being secure and therefore it must be secure (or "therefore you would know if it wasn't").

The coordinated bribery attack can be done by having people stake on a different chain until a majority is reached and then softfork and slash the minority.

If you don't believe the attack will work, why not collect the bribe (since the defection can be done anonymously / out of band)? And if you believe it will work, then you'd rather be on the winning side.

^ This is a rough sketch of the argument. As I said, it's hard to implement, since you need an oracle to decide whether to pay the bribe, so it can only be done once you have second-order byzantine fault tolerance and a few other constructs. I believe we will see such attacks this decade, though (but of course it only works while the PoS network is actually decentralized; it stops working once the network becomes centralized).
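To make the shape of the incentive concrete, here is a back-of-the-envelope version of the argument. Every number is invented; only the payoff structure matters:

    # A staker compares expected payoffs under the bribery attack sketched
    # above. All values are made up for illustration.
    stake = 100.0   # value of the staker's deposit
    bribe = 1.0     # tiny out-of-band bribe
    p = 0.5         # staker's belief that the attack reaches a majority
    q = 1.0         # chance a failed defection is detected and slashed

    honest = (1 - p) * stake                                # slashed if the attack wins
    defect = bribe + p * stake + (1 - p) * (1 - q) * stake  # on the winning side if it does

    print(f"expected value if honest: {honest:.1f}, if defecting: {defect:.1f}")
    # Once p >= 0.5, defecting dominates for any positive bribe -- and the
    # more stakers who reason this way, the higher p becomes.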


It's a relevant answer to your personal attack "I don't think you've studied how the attack would work."

My job is literally implementing proof-of-stake securely. Also, when I implement an algorithm, I provide references and sources, as that PR shows.

> The coordinated bribery attack can be done by having people stake on a different chain until a majority is reached and then softfork and slash the minority.

Changing a vote is a slashable offense in the Gasper paper (cited in my PR): https://arxiv.org/abs/2003.03052. A change of vote will be ignored, and the validator will be deemed malicious and ejected.

See p. 17, section 4.7, "Slashing conditions", with proofs.
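For readers following along, the two slashing conditions from that section can be sketched roughly as follows (a paraphrase of the Casper FFG rules, not the paper's exact formalism):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vote:
        source: int  # source (justified) checkpoint epoch
        target: int  # target checkpoint epoch

    # Signing both v1 and v2 is slashable if they form a double vote
    # (two distinct votes for the same target) or a surround vote
    # (one vote strictly inside the span of the other).
    def slashable(v1: Vote, v2: Vote) -> bool:
        if v1 == v2:
            return False
        double = v1.target == v2.target
        surround = (v1.source < v2.source < v2.target < v1.target or
                    v2.source < v1.source < v1.target < v2.target)
        return double or surround

    # "Changing the vote" for the same target epoch is exactly a double vote:
    print(slashable(Vote(source=3, target=4), Vote(source=2, target=4)))  # True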

Furthermore, this has been formally verified https://runtimeverification.com/blog/formally-verifying-fina...


I'm sorry you took offense. My intention was not to slight you -- how can anyone have time to study everything in the world?

W.r.t. the proofs in the paper: they hold within certain assumptions, and those assumptions do not hold when the attack is as I described, because the majority will be byzantine!

The key idea in the argument is not computer science but economics. If you can earn more, we have to assume you will do so (charitable behavior exists, but it is not a good thing to rely on for security). The issue is that PoS is vulnerable to the tragedy of the commons in a way similar to democracy: the amount of influence each person has is small, but the time investment of being a rational voter is big, hence it is rational to be ignorant. In the PoS setting, the idea is that your defection alone is unlikely to change things, so the bribe does not need to be big. Once you've obtained a majority you can censor the minority. How much you can do with your soft fork depends on the chain in question, but in general the very fact that you can stop byzantine behavior works against you when the majority is malicious.

The reason we cannot do the same in proof of work is because of how expensive the bribes are.

Anti-sybil based on racing is hard to censor, but once you start voting, you can do bullying.

Regarding formal verification: you can formally verify the algorithm and implement it safely, and still it will not work in the real world due to the game-theoretic considerations. However, bittorrent worked, and that was largely charity. Similarly, the data availability problem has not actually been a problem (e.g. the whole bitcoin blockchain is accessible even though there is no monetary incentive to make the archive accessible). So long as there is no way to exploit known vulnerabilities, they may as well not exist.

In our exchange you have been appealing to authority, but I suggest working through the math and looking at the tragedy of the commons. Since you are intimately familiar with how proof of stake works, you will be able to convince yourself one way or the other.


The "stochastic probability stuff" falls apart once ISPs discover they can profitably influence the consensus outcome by messing with packet arrival times between nodes.


Yeah that's an interesting angle. I believe the problem in the first place is forcing the network to come to global consensus. In my project the idea is to form "lazy consensus" (i.e. just enough consensus for everyone to have a consistent outlook on the world).

Datalisp is a data interchange format rather than a cryptocurrency, though, so it behaves a bit differently.


"Just enough consensus" in BFT settings is necessarily 2/3 vote from all votes on agreement. That's a tall order.


Why is that necessary?

Edit: To be a bit less standoffish and make my position clear.

You only need to form a consensus with the part of the network you are aware of. Datalisp leans on the properties of propagator networks (fully CP systems) to avoid needing to synchronize everyone eagerly.


Try googling "Byzantine Fault Tolerance" and read the original papers by Leslie Lamport. 2/3 is required to ensure liveness.
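The arithmetic behind that 2/3 figure, for reference (the classic bound from the BFT literature):

    # With n nodes of which at most f are Byzantine, safety and liveness
    # require n >= 3f + 1, and agreement needs quorums of 2f + 1 votes --
    # i.e. strictly more than two thirds of the network.
    def max_faults(n: int) -> int:
        return (n - 1) // 3

    def quorum(n: int) -> int:
        return 2 * max_faults(n) + 1

    for n in (4, 10, 100):
        print(f"n={n}: tolerates f={max_faults(n)}, quorum={quorum(n)}")
    # n=4: f=1, quorum=3;  n=10: f=3, quorum=7;  n=100: f=33, quorum=67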


Yeah, so if you look at some commentary around that paper, there is a variant of the problem with "approximate agreement" that can solve, for example, the 3-person variant. This fits with the theory (really just a hypothesis at this point, but actively worked on) that I have w.r.t. datalisp.

Edit: see for example slide 30 of this slideshow: https://www.cs.cmu.edu/~15712/lectures/15-bft.pdf

I'm not (yet) very good at probability theory, so it's a bit slow going filling in the model I have in mind (that is, deciding the message format), but propnets are the correct framework to fit it into.


If you read the Avalanche paper, you'll see early on that all of its claims to being BFT stem from an assumption about the distribution of message arrival rates in the system. That is a very generous assumption that can easily be wrong. Also the original BFT literature makes no assumptions about the arrival times or even reliability of the network -- BFT constraints apply even if the network is 100% reliable. It only gets worse from there; 2/3 honest votes is the best you can do under ideal circumstances.


I am in no way trying to defend avalanche. I have been speaking from the standpoint of datalisp.

Again, the Byzantine generals problem of the paper is not an accurate description of what is happening in the datalisp network and does not apply except locally (in a 'neighbourhood' of the dispute), so it's not 2/3 of the whole network but rather 2/3 of the echo chamber that needs to agree...

Anyway you seem to be eager to dismiss off-hand so I am losing interest in debating with you as-is.


So is global agreement on state a criterion for datalisp's correct operation, or not?

If not, then why compare it to Avalanche at all?


Could you please read our exchange from your first comment and tell me how you parse it?

From what I can tell; I agree with your dismissal of avalanche and offer an intuition for what may be the problem by contrasting it to my work-in-progress idea, then we have a series of back-and-forths where I consistently mention datalisp but never once talk about avalanche..

Regarding global agreement of state, yes it is not a criterion for datalisp. Datalisp approaches the network like a sheaf (locally consistent but globally not necessarily so) and uses the properties of propagator networks (which are fully CP, therefore they converge on a fixpoint) to reason about convergent consensus that executes lazily.

The casual dismissal of datalisp (as it stands) is that the message format for communicating your beliefs w.r.t. probability of signal-to-noise is not fully worked out yet and all we have is a series of heuristical arguments.

However, datalisp is a data-interchange format first and foremost, optimized for building authenticated datastructures. It is not specialized for cryptocurrencies per se (since the focus is on second-order byzantine fault tolerance).


This would be going a lot more smoothly if you could articulate which distributed-systems properties, exactly, you intend datalisp to provide. So far, you've contradicted yourself at least once here:

> I believe the problem in the first place is forcing the network to come to global consensus. In my project the idea is to form "lazy consensus" (i.e. just enough consensus for everyone to have a consistent outlook on the world).

So, which is it? Not coming to global consensus and "just enough consensus for everyone to have a consistent outlook on the world" seem mutually exclusive.

Don't be afraid to lay down a bold system description, and don't hold back on the specifics. Not trying to brag, but I have a PhD in distributed computing, I work on a blockchain at my day-job (https://github.com/blockstack/stacks-blockchain), and I chair its governance process. I'm pretty sure I'll be able to understand a rigorous system design. But, I don't have very much patience for weasel-wording, pussyfooting, or hand-waving.


I had already looked at your profile so I am aware of your credentials.

In the original comment I mention my project and point to resources for learning more. I don't know why you think I am "weasel-wording, pussyfooting or hand-waving" when what is in fact happening is that we are having this discussion in a place that is not exactly optimal for writing pages and pages of explanation, but rather for high-level discussion (which is necessarily brief on details but may link to resources).

I will be the first to admit that what I am doing is not fully worked out yet. Then again you are familiar with research work so you know things take a while to become polished enough to stand up to the "surface level scrutiny" even if the ideas may have merit from the beginning. The fact that I am working in the open and engaging people like you in discussion should not be considered "weasel-wording" or whatever slur you come up with.

You claim I am contradicting myself, yet you do not elaborate on how. I have already given you the argument for why it is not a contradiction: global consensus happens eventually due to convergence, and "lazy" in computer science is another way to say "demand driven" -- i.e. we converge as much as we need to in each "neighbourhood" of the network; the sheaf reference should further enhance your mental image of what is going on.

If you want to go into details with me and discuss things more thoroughly then I suggest you join the datalisp channel on telegram. I would be happy to hear your criticism and try to understand better in which ways I may be lacking. After all, what we all want is better systems of governance and if you are intrigued by this approach (or rather; high level description of a possible approach) then from my point of view our interests are aligned.


> But, I don't have very much patience for weasel-wording, pussyfooting, or hand-waving.

This was meant in a general sense; it wasn't an accusation. I apologize if it was interpreted otherwise.

I was hoping to see more about which distributed-systems properties datalisp will provide. It's fine if you're not ready, but if so, then maybe you could speak more to the specific problems you're trying to solve?


Sure, datalisp is really a very simple idea.

First, we take canonical S-expressions (made by Ron Rivest for cryptographic signing) as the syntax of the "language" (again, it's primarily a data-interchange format meant to be used to build distributed systems).

We enforce the restriction that the csexps must have versioned operators, that is (2:op1:0...) would be a the operation `op` and version `0`, the version is a natural number.

That's it for the data interchange format, now for the semantics of the language.

As I have been saying, the primary idea is to use propagator networks (propnets). The name datalisp comes from datafun, which is an extension of datalog. Datalog is something you are probably familiar with: it's not turing-complete, and it is a constructive logic programming language that uses "relations" and postulated "facts" to infer new "facts" -- i.e. it builds out a transitive closure of sorts.
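(The classic example of that, as a naive fixpoint in Python -- datalog's path/edge rule, just to ground the "transitive closure" remark:)

    # path(X, Y) :- edge(X, Y).
    # path(X, Z) :- path(X, Y), edge(Y, Z).
    def closure(edges):
        facts = set(edges)
        while True:
            # new facts only ever get added (monotone), so this terminates
            new = {(x, z) for (x, y) in facts for (y2, z) in edges if y == y2} - facts
            if not new:
                return facts
            facts |= new

    print(sorted(closure({(1, 2), (2, 3), (3, 4)})))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]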

Datafun takes this and tries to extend it as much as possible without becoming turing-complete, so rather than being limited to the boolean lattice of normal logic, it uses any lattice and monotone functions, e.g. a propnet. However, it tries to do type inference and be a "programming language".

One nice property of datalog is that the evaluation semantics coincide with the shortest vector path algorithm used to do routing on the internet, except datalog is declarative rather than imperative, so it's an interesting way to do routing. The idea of datalisp is to make the "XPath of canonical S-expressions" be a datafun-ish language that can then also do the routing.

Now, the reason this has anything to do with cryptocurrencies is based on the claim that the internet is a propnet where each place is a "max-signal lattice". The idea is that each user tries to put their computer into the state that most aligns with their intent (`max` over a total order -- in this case ordered by "signal" -- is a lattice), therefore they will not choose to persist messages that are spam; i.e. messaging in the network is monotone (at the limit you have security and only choose to replace information if it rewards you with higher signal).

The goal of the datalisp network is to make this intuition concrete. There is a lot of details but this is the general gist.

So the anti-sybil mechanism in the network is called "proof of trust": we make participants in the network sign their messages and include a prediction of the signal-vs-noise ratio of their message. Peers in the network have a prior for how much to trust the self-evaluation of this key, which they then use to order the message.
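(One toy reading of that ordering -- the names, formula, and update rule below are my own guesses at the intent, not datalisp's actual mechanism:)

    # Weight a sender's self-reported signal estimate by a prior on how
    # well that key's past self-evaluations held up.
    trust = {}  # key -> prior in [0, 1]

    def priority(key: str, claimed_signal: float) -> float:
        return trust.get(key, 0.5) * claimed_signal  # used to order messages

    def observe(key: str, claimed: float, actual: float, rate: float = 0.1):
        # Nudge the prior toward how honest the self-evaluation turned out.
        prior = trust.get(key, 0.5)
        trust[key] = (1 - rate) * prior + rate * (1 - abs(claimed - actual))

    observe("alice", claimed=0.9, actual=0.8)
    print(priority("alice", 0.9))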

Second-order byzantine fault tolerance is a class of problems, and the representative problem we try to solve (i.e. a "second-order decentralization-complete problem") is deciding whether to merge a patch and rebuild your system in a byzantine setting. If you can do that, then you can of course also do a ledger that simulates some formal language. This is a fancy way to say we want a way to do updates in a distributed system with no single source of truth.

The consensus mechanism is based in lattice theory. We have "join consensus", which is a race of sorts -- you can always merge a patch ahead of the network if you like (stay on the "bleeding edge", so to speak) -- and then there is "meet consensus", which is based on the avalanche algorithm and is used to move up the bottom and collapse all patches (or, more generally, metadata) into its state.
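(For intuition, here is the standard lattice-merge property that "join consensus" seems to lean on -- a CRDT-style illustration with set union as the join, not datalisp's actual mechanism:)

    from functools import reduce

    # Join (here: set union) is associative, commutative, and idempotent,
    # so replicas that eventually see the same updates -- in any order,
    # even with duplicates -- converge to the same state.
    updates = [{"a"}, {"b"}, {"a", "c"}]

    replica1 = reduce(set.union, updates, set())
    replica2 = reduce(set.union, list(reversed(updates)) + updates, set())
    assert replica1 == replica2 == {"a", "b", "c"}
    print(replica1)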

So the semantics of datalisp end up being something like "probabilistic datafun". If you are familiar with semi-naive datalog evaluation semantics, it's a bit like that, except we also have a cutoff w.r.t. how much confidence we are willing to tolerate before using a fact / inference. Instead of programming datalisp in a text editor, it is more like a graphql of sorts; it /is/ the internet, or rather it reifies the propnet datastructure. The reason we care so much about canonicality and consensus is so that we can take advantage of content addressing.

I have some ideas about building mixnets and simulations of different political systems on top of this framework; the idea is then to do composition via constant-function market makers, since they are convex and therefore solve the tragedy of the commons (local optimization is global optimization). Since the network is decentralized, you can be byzantine; that is, you can make your own identifiers in datalisp (each operator has an infinite namespace due to versioning; we use the probabilistic logic programming and metadata gossiping in the network to find out what schema the data is supposed to fit, and if there is a namespace clash, we form a local consensus to re-version one of the clashing names to the first unused version).

The properties of lattices allow you to "zoom out" of such echo-chamber bickering... this is how anti-sybil works...

There's a lot of ideas basically, I hope you can see the general picture of what I am planning.


> However it tries to do type inference and be a "programming language".

"tries to do"? It either does or it doesn't. If you're not sure which, then it doesn't. Let's not use weasel words and misrepresent what the system actually does.

> One nice property of datalog is that the evaluation semantics coincide with the shortest vector path algorithm used to do routing on the internet,

Routing on the Internet is famously not shortest-path, so I don't know how to interpret the rest of this.

> The idea being that each user tries to put the computer into a state that most aligns with their intent (`max` over a total order - in this case ordered by "signal" - is a lattice), therefore they will not choose to persist messages that are spam i.e. messaging in the network is monotone (at the limit you have security and only choose to replace information if it rewards you with higher signal).

You seem to be describing a system where anyone can submit as many messages as they want, but it's up to each node to decide which ones are worthy of consideration ("not-spam") and which ones are not ("spam"). Identifying messages as spam is a notoriously hard problem. It's not even truly solved with email, for example; Cory Doctorow's humorous checklist applies here: https://craphound.com/spamsolutions.txt

Which brings me to:

> So the anti-sybil mechanism in the network is called "proof of trust" we make participants in the network sign their messages and include a prediction of the signal-vs-noise of their message. Peers in the network will have a prior how much to trust the self-evaluation of this key that they then use to order the message.

This is not an anti-sybil mechanism in any sense of the word. If anyone can send arbitrary amounts of messages (of which an arbitrary amount can be spam), and run arbitrarily many nodes, then any attempt to predict the ratio of not-spam to spam messages is going to fail. Someone who wanted to break this system can choose how many nodes to run and how many spam/not-spam messages to send in order to trick other nodes into mis-classifying messages 100% of the time.

If you want a better understanding of sybils and how to deal with them (specifically, what not to do), you should read the original paper: https://www.microsoft.com/en-us/research/wp-content/uploads/...

But, know that sybil-resistance is not the same as BFT.

> Second order byzantine fault tolerance is a class of problems and the representative problem we try to solve (i.e. a "second order decentralization complete problem") is deciding whether to merge a patch and rebuild your system in a byzantine setting.

Making up new words is a great way to confuse and annoy your readers. If you want nodes to decide whether to make forward-progress when other nodes can fail arbitrarily, you need Byzantine fault tolerance. There's a plethora of papers and real-world implementations of high-performance BFT agreement protocols. The language and data format has absolutely nothing to do with the agreement problem.

> The consensus mechanism is based in lattice theory, we have "join consensus" which is a race of sorts, that is you can always merge a patch ahead of the network if you like (stay on the "bleeding edge" so to speak) and then there is a "meet consensus" which is based on the avalanche algorithm and is used to move up the bottom and collapse all patches (or more generally; metadata) into its state.

It sounds like your nodes execute some kind of speculative execution for state-transitions that they somehow believe will be accepted by the network, even though they have no confirmation that they have or will. This is all well and good -- speculative execution of a happy path is a well-known design tactic. But that's not the hard part of distributed system design; the hard part is how to handle the case where the happy path is unavailable. What's the unhappy path look like?

> So the semantics of datalisp end up being something similar to "probabilistic datafun" if you are familiar with semi-naive datalog evaluation semantics then it's a bit like that except we also have a cutoff w.r.t. how much confidence we are willing to tolerate to use a fact / inference. Instead of programming datalisp in a text editor it is more like a graphql of sorts, it /is/ the internet, or rather it reifies the propnet datastructure. The reason we care so much about canonicality and consensus is so that we can take advantage of content addressing.

It sounds like your nodes have no way to actually agree with one another with any certainty, so I take this with a huge bucket of salt.


First, thanks for taking the time.

Alright, I realize that what I wrote out on the spot was quite hand-wavy. The devil is in the details. Also, I am honestly a bit perplexed that you are so insistent on not looking at the (terrible) paper while being willing to invest in the discussion at all.

For example, the spam comments: in the paper I mention a construction from a project called Vac that deals with this. Bandwidth can be throttled per node per epoch by leaking secret shares of a key that put up collateral.

Regarding shortest path: interesting, I thought it was. Guess I was wrong. Still, it is a valid way to do routing and is how I'd intended to do it in the prototype network. The big idea of versioning the operators is to be able to converge on the desired semantics.

The reason I focus on the proof of trust part is because that is how a bunch of decisions are made (such as amount of bandwidth to allocate). You still need a bunch of other constructions to use this data.

The anti-sybil is not just anti-spam it is also about finding correlated intent and packaging that whole echo chamber up as an individual so that you can rate limit / adjust visibility. Major news from an uninteresting group may be more interesting than minor news from an interesting group.

Regarding "making up words" - there are precise definitions that I didn't reproduce because I figured you would want to hear a different narration than the one presented in the paper / the channel.

The way nodes participate in the network is by pinning content addresses and gossiping metadata about pins. At any given time the "commons" is made explicit because it is pinned by a bunch of nodes. The nodes build metadata on top of the bottom (commons), when a consensus is reached, i.e. there is some metadata that everyone has join-ed into their state, then we have a meet consensus and the pin can be moved.

You are still thinking in terms of first-order BFT, which is why your criticism is a bit off base. The network is not trying to start off automatic, but rather manual; discovered patterns will then be automated once proven correct.

If you read (and understand) how the amoveo oracles work and look into the properties of prediction markets then you can see that this is actually a social network built on a data-interchange format that is optimized for building authenticated datastructures (i.e. amenable to automation of any first order problems that can be solved with MPC).

Edit: w.r.t. datafun "trying to" do type inference: datafun is a research project; he has found most of the type-inference rules, but some are still missing. The language is not really usable as-is, but it "tries to" be; carrying around two typing contexts (one for lattice types and another for algebraic types) and having modal subtyping and such things is complicated and hard to do well. Datalisp is less interested in such things and is closer to being a probabilistic datalog. Again, the versioning of operators is so that we can converge on desired semantics; the MVP is going to use bayesian factors for inferences and odds of signal:noise as priors attached to facts. This is just for pragmatic reasons (easy to code / reason about); it is probably insufficient for the economic model and composition of prediction markets.


I haven’t evaluated and run the code for these 8 cryptocurrencies, but the most energy-efficient cryptocurrencies with the highest market caps are:

Stellar Lumens & Ripple: more centralized, but that centralization means no wasteful proof of work is needed.

Tezos: like a cheaper version of Ethereum for NFTs. Already powered by proof of stake.

Polkadot: many new coins are beginning to be released on Polkadot’s proof-of-stake network, just as they are on Ethereum’s.

Binance Coin: centralized and proof of stake. The whole Binance Smart Chain network is powered by just 21 nodes (validators) that stake about 3.5M of Binance Coin.

Cardano: has a cool name and is powered by proof of stake. Has lots of PhD research papers, most of which are over my head.

Algorand: a fast and cheap new cryptocurrency powered by two different kinds of validator nodes, one centralized and the other decentralized.

Ethereum 2.0: will probably take 2-3 years to migrate to proof of stake, but once it does, it’s possible that many of the above cryptocurrencies will be rendered obsolete unless they’ve captured large amounts of the market before then.

All 8 of these coins/tokens should meet your standard of being efficient on all levels. When evaluating cryptocurrencies, you’ll notice this pattern: the cryptocurrencies with the cheapest transaction fees tend to use the least energy.


XRP and Stellar were the pioneers in this regard, since about 2013 and 2014 respectively (or thereabouts). Such systems have negligible energy usage.

In general, anything using some flavour of Byzantine Fault Tolerance - of which there are a few - as a method of consensus consumes very little energy for the payback.

If appropriately deployed (a similar issue to PoW: decentralisation needs to be maintained), BFT-type consensus mechanisms also exceed the consensus security offered by PoW or PoS.

BTC and other PoW models are typically constrained by a 51% corruption of the network. BFT systems typically do better than that, as they often require numbers well in excess of 50% to reach agreement (I believe I've seen around 80%, but it's been a while). The way such a system usually works is that it constantly reassesses malicious nodes, so it becomes difficult for a minority to gain a foothold and cause disruption. These systems can also fail less destructively if breached (a temporary network halt rather than a network takeover).

Finally, in such systems, modifications to the network protocols are voted upon democratically by participants on the network itself, and so far - at least in XRP's case - this appears to have worked well. Ripple has proposed a few changes that were rejected by the network through this voting process, while others, proposed by them or by other participants, have been approved and become part of the protocol.


XRP is not an alternative to Bitcoin; it's a proposed alternative to SWIFT.


XRP is whatever any holder wants it to be. If users hold it as a store of value instead of Bitcoin, your strange opinions have absolutely no dominion over that.


The Ethereum PoS chain is live, and the migration of the main chain is planned for the end of the year.


Where can I learn more about this assuming I have an 85-90 IQ?




Nano. It is also very fast: a few seconds to transfer funds.


I'll second Algorand. It was founded by Silvio Micali, a Turing Award and Gödel Prize winner and MIT professor, who got a bunch of MIT people together and has now probably solved the problems of speed, safety, and energy consumption of the old blockchains. Check out their white papers if you're interested. I would also suggest checking out Hedera Hashgraph for a newer approach to distributed ledgers.


Nano. It's also very fast and has no fees.


Algorand.



