> Hence, instead of forcing clients to go below a static target like in Bitcoin to be successful, we ask clients to "bid" using their PoW effort. Effectively, a client gets higher priority the higher effort they put into their proof-of-work. This is similar to how proof-of-stake works but instead of staking coins, you stake work.
Probably by having enough computers that they can overload the server even if each individual computer sends relatively few requests – only a small multiple of what a normal user would send, and low enough that each computer has the CPU time to solve its challenges.
> The service would give the request a priority value based on the "difficulty" of the puzzle solution.
Seems like single clients could increase the difficulty higher than what the botnet uses (so they get priority), and hence get access. Operators of the botnet would probably hard-code one value as the difficulty, and it would be lower than what you could typically set on consumer hardware.
Maybe user agents could even do this increase automatically?
> Operators of the botnet would probably hard-code one value as the difficulty
Bad assumption.
Assumptions like these never last. People who say “I don’t have any money” are still valuable to hackers as phishing senders, legitimate social media accounts, residential + non-cloud + regionally convenient IP space, etc. If consuming connection / server resources becomes valuable then botnet controllers will find a way to pay the cost. It’s easy because someone else is paying for the hardware, bandwidth, and power costs.
But the effect of a market of PoW is the same — there is game theory involved in bidding (just like a silent auction). Even if a botnet uses a dynamic priority bid system, the cost increases as the botnet tries to starve the server of resources. The server's resources are always zero-sum, and the bidding will get progressively more expensive until the opportunity cost changes the botnet operator's behavior.
Would it really be lower on the botnet in the majority of cases? I'd imagine that real users probably wouldn't want to have their entire CPU spent on this.
Not only that, real users actually want to use the service, not overload it. A real user might only make one request a second. A botnet device is trying to make a thousand requests per second to overload the server. Even if each has the same CPU as a normal user, now each node in the botnet can only make about as many requests per second as a user, or else the user can outbid them.
> The large botnet is a serious operation with many thousands of computers organized to do this attack. Assuming 100k medium-range computers, we are talking about an attacker with total access to 200 THz of CPU and 200 TB of RAM. The upfront cost for this attacker is about $36k.
They appear to define it by compute capacity, so I'd expect the attacker can solve harder puzzles than legitimate users would attempt.
I just wish that the PoW defence actually involved some sort of transfer of value from user to provider. (as opposed to just spending resources on the user side)
This version is good, don't get me wrong, but adding value transfer would be better imo.
Value transfer would be worse because it would create bad incentives. Now I'd have a reason to flood the network with garbage fake (/proxy) onion sites to skim the payments. The user would have to go obtain that value too, creating logistical challenges.
There might be legal issues for the users too, e.g. upgrading copyright infringement into criminally prosecutable commercial copyright infringement.
Burning funds, instead of transferring them, would solve two of your three issues.
Obtaining the funds creates extra friction, yes, but it’s possible the benefit outweighs the cost — e.g. because an attacker can no longer utilize the idle CPUs of botnet devices.
Ultimately, any form of anti-DoS protection also punishes legitimate users. The question is (1) how much it punishes attackers relative to this, and (2) whether this punishment for legitimate users deters them, too.
So now you have the drawbacks of both as well, in that the guy who has the most compute to use as a toaster can DoS everyone else.
Plus, PoW is nothing but wasted, needless computation. Computing is not free. Every watt spent doing anything PoW is just that much more intensification of our current climate crisis.
As someone with temps of 109 with heat index of 120 coming in the next few days, with all due respect, fuck anyone who proposes PoW is a good idea for anything.
It isn't interesting. It's the most egregious example of conspicuous consumption on the planet.
The proof of work only engages when there is a Denial of Service attack. So if you're going to be mad at anyone for useless consumption, be mad at the attacker.
IDK, a mechanism that helps Tor continue to run does not seem like a 'wasted', 'needless' computation to me. People literally use Tor to protect themselves in situations that might result in heavy penalties for exposing the truth (maybe even torture or death.) If hidden nodes can be easily DDoSed it just makes it easier to censor the facts (and that can be dangerous.) Tor really does represent the best of us.
This proposal applies a temporary cost to DoS and DDoS — which itself is already a big waste of power. This proposal has the ability (if it works as planned) to destroy DoS + DDoS as an effective tool against this kind of server. Likely a net positive in terms of power usage.
Remember that this PoW proposal is a market. If the server has unconsumed cycles (e.g. is not saturated), the PoW spot price can remain 0. The server only needs to set a PoW price as its resources near saturation. For the same reason auctions end because nobody is willing to pay infinity dollars, clients can forgo PoW and opt to check back in on the server later when costs subside.
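For intuition, here is a minimal sketch (in Python) of how such a spot-price controller might behave. This is hypothetical logic, not the actual Tor mechanism; the thresholds and doubling/halving policy are made up:

```python
# Hypothetical sketch of a PoW "spot price" controller: suggested effort
# rises only when the request queue nears saturation and decays back to
# zero when load subsides.

def update_suggested_effort(effort: int, queue_len: int, capacity: int) -> int:
    """Return the suggested PoW effort for the next period."""
    if queue_len >= capacity:       # saturated: raise the price
        return max(effort * 2, 1)
    if queue_len < capacity // 2:   # plenty of headroom: decay toward free
        return effort // 2          # integer halving eventually reaches 0
    return effort                   # near capacity: hold steady

effort = 0
for queue_len in [10, 120, 300, 90, 30, 5]:  # simulated load; capacity = 100
    effort = update_suggested_effort(effort, queue_len, 100)
    print(f"queue={queue_len:3d} -> suggested effort {effort}")
```

The point is just that the price is reactive: it is non-zero only while the server is actually saturated.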
I think the presence of this PoW system might mean that this form of abuse is discouraged enough that in practice it means that PoW will not be required. Who is going to compete with a huge amount of compute to DoS others? They'd need the same compute as all legitimate access put together to get to just 50% effectiveness.
If this is the case and in practice PoW is never required, then your rant is moot, and instead it is interesting that this effect occurs.
My thought exactly. It would be interesting if they shared metrics of pre- and post-deployment levels of DDoS attacks, so there was some proof this scheme actually has the desired effect.
Maybe we can ask the DoSers to stop, nicely? Everything else being equal, I bet the Tor/Onion-folk are pretty smart people, and this is what they felt was necessary to keep the service running.
It's an unconvincing argument, but not an attempt to shut down anything. By comparison, the grandparent comment was emotional and reactionary beyond the point of being expressly uncivil.
A better response might have been to point out that the level of PoW is reactive. If there are no attacks ongoing it will use little to no resources. If it's effective, the attacks will largely stop (no point in attempting an attack that won't work) and so paradoxically this can potentially provide its benefit without actually having much usage.
If it works out that way the benefit vs costs are very good for pretty much any way of evaluating the costs. This is the kind of nuanced thinking that you'd expect from smart people, as the prior poster suggested (and, in fact, is pointed out in the design document).
"Conflating" energy consumption with energy generation, which here in the real world is still predominantly carbon intensive? I wouldn't say there's much conflating going on, rather, recognition of the reality we live in.
It's addressing the problem from the wrong end. If you replace your generating capacity with non-carbon sources then energy consumption is no problem. If you don't, you have a problem even at the current level of consumption, and that problem continues to have the same solution.
It's not even impossible for increased consumption to lower carbon emissions, because to meet the higher peak demand you may need to add more generating capacity. When the new capacity is renewables or nuclear then it adds no carbon emissions during peak usage times and allows for a reduction in carbon emissions whenever the grid is at less than full capacity by assigning the remaining load to the new plants and spinning down the legacy fossil fuel ones that would otherwise have been used.
Wrong. Currently most electricity is generated from sources that release CO2. Also, all the energy used in computation is ultimately released as heat anyway!
Everything has drawbacks. It's always a tradeoff in software. You want a really simple interface? Now you can't do complex things. And so on.
Instead of complaining loudly, why not be the change you want to see? Proof of CPU makes you hot? How about proof of RAM? How about something else, which you thought of yourself, which is a great idea, which you shared with their team, which they would eagerly accept as a superior solution to proof of work?
Without this proof of work, users can be arbitrarily denied access due to overload.
With the proof of work, I think the assumption is that a legitimate user will be willing to accept a sufficient delay to make botnet DoS attack impossible.
The counter-argument might be that the botnet can generate arbitrarily hard proof of work, but this isn't true. Assume the botnet has some fixed capacity, and that it needs to send a certain number of requests per unit time for the attack to be effective (e.g., if it only sends one proof-of-work request per minute, then its "bid" for the rest of the time is quite low). Then there's some maximum effective "price", set by the capacity of the botnet, past which a user is guaranteed access.
For example, say a botnet has access to a million compute nodes, so it has 1 million proof-of-work seconds per second.
If there's a service which can serve 1000 requests per second, then the average proof-of-work budget the botnet has per request is 1000 seconds, so as long as the user is able to provide 17 minutes of proof of work, they can get access. In reality, the user's hardware is likely more capable than the attacker's botnet (which is probably IoT devices, etc.), so the ratio is more favorable.
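The same back-of-the-envelope calculation in code, with the numbers from the example above (purely illustrative):

```python
# Illustrative numbers from the example above.
botnet_nodes = 1_000_000   # attacker's compute nodes
service_rps = 1_000        # requests per second the service can serve

# The botnet can produce at most one PoW-second per node per second,
# so its total budget is:
pow_seconds_per_second = botnet_nodes  # 1,000,000 PoW-seconds/second

# Spread over every slot the service can serve, the attacker's average
# bid per winning request is capped at:
max_avg_bid = pow_seconds_per_second / service_rps
print(max_avg_bid / 60, "minutes of PoW per request")  # ~16.7 minutes

# A legitimate user willing to spend slightly more than this outbids
# the attacker's average request and is guaranteed access.
```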
I find it amusing that this approach is pretty on brand for onion services, as an invisible hand type market solution from what I'd consider to be a fairly libertarian leaning group.
What if instead of spending energy on compute, we just spend money instead? On the one hand, some people may be turned off by the idea of spending money, but on the other hand, the two are usually interchangeable unless you're stealing energy. Someone with a lot of money and no hardware or energy can purchase hardware and energy; someone with a lot of hardware and energy can sell the hardware and sell energy back to the grid to make money.
The goal with Tor is to preserve privacy. Payment systems come with significant legal and regulatory overhead, KYC and AML, etc. That introduces significant privacy risk. Meanwhile PoW just requires owning a computing device.
How do you suggest the people who are using Tor for anonymity pay money to use Tor? That might make sense for cryptocurrencies, but for Tor I think it's unusable.
I believe this could be done using zero-knowledge proofs, à la Tornado Cash (which I'm not familiar with in practice, but I've read the algorithms behind it). You'd need some service that produces zero-knowledge proofs that someone sent some funds to the service and got their slot in return. Put down your pitchfork, but I think this would essentially be an NFT backed by a zk-SNARK.
PoW (minus bitcoin, which is bad anyway for many other reasons) contributes an incalculably insignificant amount to energy consumption. It's much, MUCH more important to protect the free internet.
Sorry it's hot there but this is absurd virtue signaling and should under no circumstance come into view as a reason to not do PoW.
This stinks of Hollywood accounting, like a lot of "negative carbon" plans for things that would otherwise be nonsensical.
The premise here is that if Bitcoin mining uses an energy source that inherently captures carbon or methane, then it's "carbon negative?" This ignores the fact that the energy budget is shared. We could just as well use that same energy for something else currently on carbon-based sources. So, energy is still being wasted on crypto speculation - you've just fudged the total energy budget calculations by pretending it doesn't apply here.
Burning methane has a positive effect on the climate, since methane is way, way worse than CO2 as a greenhouse gas. Taking care of landfills that are largely untouched, just spewing methane, is a good thing. If Bitcoin ends up having a net-negative effect on global warming, how is that a bad thing?
You're assuming that all energy sources are well-connected to a grid that can handle the electricity that is generated, that's not the case at all. You can't "use that same energy for something else currently on carbon-based sources" without major infrastructure costs.
To the sibling, while I largely agree, I think there's some argument that building out additional manufacturing capacity for renewables even if it goes towards crypto mining has beneficial knock on effects for reducing the cost of the equipment provided that the learning curve effect and economies of scale outweigh the competition for resources, which to be fair isn't guaranteed.
It's mainly the fact that Bitcoin mining sets a price floor on electricity prices. It's a buyer of last resort. Without it you'll get negative electricity prices and less investment in renewables.
> Every watt spent doing anything PoW is just that much more intensification of our current climate crisis.
> It isn't interesting. It's the most egregious example of conspicuous consumption on the planet.
This cannot be overstated. PoW needs to die. It is a lazy implementation that just sounds clever but is nothing of the sort. It kills our habitat. It has to go.
I'm surprised something like this wasn't done sooner, and also haven't read the proposal [0] in enough detail to tell if this will lead to more data affecting the anonymity of users. Should be fine though, since it's tied user-to-service and not stored anywhere.
I'm wondering how much this will decrease load on the service being proxied vs the nodes themselves though, I assume it'll have more benefit to services since access is spread out between multiple nodes.
FWIW, we talked about using things like hashcash to prevent abuse and similar attacks for anonymous remailers back in the early 90s. Talking about it is one thing, actually making the commitment to do it is something else entirely. Given interest in cryptocurrencies there has also been a lot more effort devoted to considering the cost/feature tradeoffs of various PoW mechanisms (and a broader familiarity with the general concept among the target population), so it is possible that we are seeing a happy side-effect of years of cryptocurrency hype.
Someone is shrieking about it in the thread above.
>salawat
>PoW is nothing but wasted, needless computation. Computing is not free. Every watt spent doing anything PoW is just that much more intensification of our current climate crisis.
>As someone with temps of 109 with heat index of 120 coming in the next few days, with all due respect, fuck anyone who proposes PoW is a good idea for anything.
>It isn't interesting. It's the most egregious example of conspicuous consumption on the planet.
Cloudflare could do this, too. Every time you access a busy site, seconds to minutes of useless crunching. The overall effect would be to drain batteries worldwide.
The original invention of PoW, as well as the idea of using it for email, came years earlier; see Dwork and Naor's "Pricing via Processing, Or, Combatting Junk Mail" in CRYPTO '92.
I was actually quite curious about that and I found this.
>With a JS challenge, Cloudflare presents challenge page that requires no interaction from a visitor, but rather JavaScript processing by their browser.
>The visitor will have to wait until their browser finishes processing the JavaScript, which should be less than five seconds.
WebGL, an important part of browser fingerprinting, takes a long time. I'm sure there are other APIs being 'abused' for this purpose that take a while. This doesn't quite prove the PoW.
Right, it's not per-client; if it were per-client you could just ban the abusive clients instead of asking them for PoW. Anonymity (or freely available new identities) means that attackers can use a sybil attack to deny service via capacity overload.
I'm wondering if there is a more elegant way to solve sybil attacks here. For example: many CPUs are provisioned with key pairs that are unique to the processor and can be verified with the CA root cert of the issuer (Intel, AMD, etc.). You could tie PoW to successive signing and allow it to be verified in parallel. Then the work couldn't be farmed out to a botnet, as every PoW would be unique to one CPU.
It seems that they're targeting memory as a way to make it more costly for botnets. I think that there are many other ways to help minimize this attack scenario, too. The same logic could also be applied to mobile phones using ESIM. Later authentication with the mobile network uses public key crypto so I feel like you could also do unique proofs there, too.
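To make the "successive signing" idea concrete, here is a rough sketch, with an ordinary software-generated key standing in for the CPU-fused provisioning key (a real scheme would verify against the vendor's certificate chain; everything here is a hypothetical illustration, not a vetted protocol):

```python
# Sketch of sequential-signing PoW: each signature depends on the previous
# one, so one key (one device) can't parallelize the solving, while the
# verifier can check every link of the chain independently, in parallel.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # stand-in for a
public_key = private_key.public_key()                  # provisioning key

def solve(challenge: bytes, steps: int) -> list[bytes]:
    sigs, data = [], challenge
    for _ in range(steps):           # inherently sequential work
        sig = private_key.sign(data, ec.ECDSA(hashes.SHA256()))
        sigs.append(sig)
        data = sig                   # next input depends on this output
    return sigs

def verify(challenge: bytes, sigs: list[bytes]) -> bool:
    data = challenge
    for sig in sigs:                 # each link is independently checkable
        try:
            public_key.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False
        data = sig
    return True

chain = solve(b"server challenge", steps=1000)  # steps sets the difficulty
assert verify(b"server challenge", chain)
```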
This is just a throwaway comment though. I am probably missing obvious problems with this scheme.
If you are suggesting solutions based on immutable hardware keys and a certified chain of custody from the manufacturer, I have to ask if you understand what TOR is.
That method seems closer to PoW (as is being done by TOR) than attestation.
The strongest privacy guarantee I've heard behind attestation is it would require two parties to collaborate to break it. If Google attests to a Cloudflare protected site, they can determine who you are by cooperating.
There are ways to do the verification with cryptography that would preserve anonymity and wouldn't allow messages to be tied to public keys. I find condescending ignorant responses like yours highly annoying. One way to respond to people in the future is to start with the assumption that the person isn't a fucking idiot.
Attackers can still outsource the PoWs. The sybil assumption is that one PoW == one PC. But you can at least enforce this assumption with provisioning keys.
Proof-of-work uses resources like memory, CPU, hard drive space, and so on for their challenges which just means that the person with the most resources has a disproportionate impact within the system. A botnet owner has more total resources than anyone else so any PoW challenges that a server issues can be easily outsourced to the system.
Overall, they will get more leverage from these resources than from the raw number of systems they have access to. But you could at least restrict this to the number of systems with provisioning keys. The idea behind memory-bound hash functions is that you're trying to make it hard to parallelize the challenge across a farm. But many systems in the farm are still going to have multiple cores and gigabytes of RAM (so they can be used to work on multiple challenges simultaneously). The underlying problem to solve here is an identity problem: allowing an individual machine to act as a single identity, which various proof-of-work schemes have tried to achieve.
The ideal solution would also limit connections made by the same actors but that is probably not something you can achieve with something like TOR. This is a sybil problem, by the way.
You're trying to solve a straightforward engineering problem with an unfit solution to an ill-defined problem. Solving the sybil problem would not handle the case of a coordinated attack by multiple nefarious agents (you could call this a meat botnet owned by a master coordinator). The solution would distinguish this from a normal botnet, but in the end your service goes down in the very same manner, and clients gave up most of their privacy for nothing.
Imagine instead the following trivial scheme: instead of burning resources, the client pays, and requests are served in decreasing order of payment value. Let's say a client is willing to pay 1 cent to be served within the next 10 seconds. The attacker would have to pay more, since he has to occupy the whole head of this queue all the time to be successful. Let's say the server can process 100 rps: now it's making over a dollar per second, which it can use to scale its serving capacity.
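The queue mechanics of that scheme, sketched (bids and requests purely illustrative):

```python
import heapq

# Pay-to-be-served queue: requests are served highest bid first, so an
# attacker must continuously outbid the entire head of the queue (and
# pay for every slot) to starve other clients.
queue: list[tuple[int, int, str]] = []  # (-bid, seq, request): max-heap on bid
seq = 0

def submit(bid_cents: int, request: str) -> None:
    global seq
    heapq.heappush(queue, (-bid_cents, seq, request))
    seq += 1  # tie-breaker: equal bids are served in arrival order

def serve_one() -> str:
    neg_bid, _, request = heapq.heappop(queue)
    return request

submit(1, "legit user, happy to wait ~10s")
submit(5, "attacker request")
print(serve_one())  # attacker paid more and is served first...
print(serve_one())  # ...but the legit user is served next regardless
```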
Introducing the requirement to spend money to use the service would drastically reduce its value. It wouldn't be Tor anymore. Payments would make it easier to link identities and filter access to it. It would also mean not everyone could afford to pay for the service.
>and clients gave up most of their privacy for nothing.
Also not really sure how giving up privacy comes into this? Depending on how the scheme is implemented you can still preserve all the same privacy of using Tor with provisioning keys. E.g. you might use enclaves and keep verification hidden inside enclaves (so hosts cannot see the challenge protocol) or use zero-knowledge proofs to hide everything.
There may even be simpler algorithms, since the certificate chain would be using something like RSA with SHA-256 (which has some neat math tricks for manipulating signatures, compared to other algorithms).
"Waiting for pair client connection". That'd be something. Interesting thought but I can imagine a range of issues.
In the same vein, how about the server holds a pool of IPs against which the client has to return a proof of port knocking? E.g.: here is a token; send it to this IP:port and wait for a unique response I can verify. Call this proof of latency. It would be low-CPU and would spread the load across various machines and ports. On the downside, of course, you need multiple IPs and potentially multiple servers. It could be implemented on the same machine, but that would shift the CPU load to port connections.
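A toy sketch of what that knock exchange might look like (the addresses, token format, and response derivation are all made up for illustration):

```python
import hashlib
import socket

SECRET = b"verifier-side secret"  # known only to the service

def knock_listener(host: str, port: int) -> None:
    """One of the service's knock endpoints: reply to a token with a
    unique response the main server can later verify by recomputing it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((host, port))
        token, addr = s.recvfrom(64)
        s.sendto(hashlib.sha256(SECRET + token).digest(), addr)

def client_knock(token: bytes, host: str, port: int) -> bytes:
    """Client: send the issued token to the designated IP:port and relay
    the response back to the main server as proof of the round trip."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(token, (host, port))
        response, _ = s.recvfrom(64)
        return response
```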
What does this prove about the client? Just that they have a reasonably fast connection (which in TOR-world can be painful to achieve), not that they aren't part of a botnet.
It allows you to scale your workload, but "just pay for more servers and outscale the attacker" isn't generally an acceptable way to deal with DDOS.
I have an idea to minimize traffic on the Tor network or make it faster: it should be possible to use the network as a CDN. If I want to make a file available, it should be possible for me to send pieces of the file to nodes which gave me permission to do so. When the file is requested, I could then point to these nodes. Of course, some care should be taken not to turn the Tor network into an "anonymous torrent replacement", to avoid defeating its purpose.
The current proposal discussed in the post talks about "prioritize verified network traffic". It would be interesting if sharing "file pieces" could prioritize your traffic since you're actually helping the network. Instead of "proof-of-work" it would be "proof-of-bandwidth-contribution".
I think given the aims and limitations they've set for this it's a reasonable proposal and it will likely meet those objectives. As mentioned it will work for smaller botnets but larger ones will still overwhelm individual clients in terms of resource availability.
Personally I dislike proof of work, it's basically bloat as a defense mechanism in this case and can obsolete older hardware fast while consuming in aggregate probably a lot of power across the devices it affects. At scale it would be quite a large environmental burden.
I also think a lot of attackers will consider it a success to get it to that high difficulty, as users who have to wait a minute with their device 100% pegged will probably choose to disengage a lot of the time.
That said it's quite a good way of mitigating DOS attacks while not doing anything to compromise user anonymity, so from that point of view it's a good solution despite the drawbacks. As it exists on tor, I don't have too much of a problem with it but I'd consider it a total disaster if applied to the regular web.
The article says that there is only a factor of 6 in solution time between a high-end server and a low-end phone. How is that possible? The server likely has much more than 6x the RAM and CPU count (and faster CPUs) than the phone.
Also, since the attacker is DDoSing, its work is embarrassingly parallel (many independent puzzles), while a single client's work isn't necessarily parallelizable at all.
Even if it is only a factor of 6 (or one) they are talking about 1 minute solve times once a DDOS is detected.
At that point the service is basically down anyway, right?
> The article says that there is only a factor of 6 in solution time between a high-end server and a low-end phone. How is that possible? The server likely has much more than 6x the RAM and CPU count (and faster CPUs) than the phone.
The limiting factor of Equihash is allegedly memory bandwidth, which maybe doesn't vary that much between servers and phones.
> they are talking about 1 minute solve times once a DDOS is detected.
The point of this is to turn an existing easy DoS attack (introduction flooding) into only a partial outage/slowdown. It's an incremental improvement on a hard problem.
How will the service operator know when their site is under "stress"? Will this effectively prevent someone from having a "high traffic" hidden service free from Tor-imposed puzzles? If the hidden service operator is aware that the site is receiving high traffic, could the operator run several sites as mirrors, so that users had options if, e.g., one site was not responding fast enough? Is there guidance published anywhere on what is the "normal" traffic for a hidden service?
>If the hidden service operator is aware that the site is receiving high traffic, could the operator run several sites operating as mirrors, so that users had options if, e.g., one site was not responding fast enough? Is there guidance published anywhere on what is the "normal" traffic for a service?
> When the subsystem is enabled, suggested effort is continuously adjusted and the computational puzzle can be bypassed entirely when the effort reaches zero.
Surely the DDoSers will just use some of their botnets for generating the PoWs? I don't think I fully understand the scheme. Is the idea that as the attack progresses this would consume more and more of their resources, making an attack impractical? Surely in that scenario more and more of the real traffic's resources would be consumed by them having to solve puzzles also, so Tor would in effect be cooperating with the attackers and DDoSing all of the valid clients?
No, I think it makes the experience strictly better for normal ("real") users. On phones it might burn too much battery, but on desktops or plugged-in laptops dedicated users can configure their user agent to send in difficult proof-of-work submissions, getting closer to the front of the line. The actual problem starts when each individual request coming from the botnet starts submitting more proof of work than the real users can tolerate (due to taking too long), and that's where it's pretty much the same as before the DDoS protection system existed.
Except now the botnets need a lot more computation and thus electrical power to generate the same amount of traffic as before, making DDoS attacks more expensive for the attacker, and less expensive for the service.
I think there is a second cost aspect to this as well; right now, hacking low-powered IoT devices and making them part of your botnet is (relatively) easy and valuable, but as their computing power is quite limited, a PoW defense should make them less viable for DDoS attacks, decreasing the amount of free attack power.
Botnets usually consist of networks of captured low spec IoT devices (mostly routers, sometimes exposed IP cams etc.). They might not have the hardware required to outbid real users in PoW.
I do wonder if Tor would allow servers to make the difficulty "tunable", so that service A with a small server but a solid fan base could have a high difficulty scaling that everyone knows about, and service B has a lower difficulty scaling but more servers behind it.
A bit late to the party, but... is that not leaking information about the server? It broadcasts how much stress the hardware is under. It is probably not enough to find the exact server by itself, but still...
I wonder if it would be possible to securely offload some of the AES processing to the proof-of-work CPUs. Or make DDoS requestors perform a fourth layer of encryption/routing, possibly of data, possibly noise.
I thought that it's the last hop, which operators control and where they can intercept the traffic, that's the problem. But this only saves them money and costs the users more when energy is expensive.
With regular Tor > Web traffic, yes the exit relay (last hop) is able to tell the destination and can gather other metadata or intercept/modify unencrypted traffic.
With Onion Services however, there is no exit relay. Services are encrypted end to end between the client and the hidden service.
Computers are still getting faster. With time, the PoW threshold will steadily increase. So will this discriminate against those with old, sluggish hardware?
And is that the kind of hardware that the users in regimes that TOR is supposed to help have?
It's a start, but eventually it'd be great to add some sort of payment layer, much like Bitcoin Lightning. If running a node paid, many more people would be willing to do so.
Adding payments would just make sybil attacking tor profitable. And the same parties that install software to collect payments would probably be willing to run versions that double their income by reporting all their traffic to a third party attacker in exchange for payment.
Sure, but it seems easier to defeat than captchas, especially for a big opponent like a state, which can possibly allocate enough computing power to make the difficulty target too high for regular users
Whereas with captchas, the government needs to pay real people to fill them, which doesn't scale.
There are PoW algorithms specifically developed to resist GPU and ASIC. Typically they do this by being memory intensive instead of (or in addition to) being compute intensive.
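For a concrete flavor of memory-hardness, here's a sketch of a PoW loop built on scrypt, whose memory footprint is set by its n and r parameters (the challenge format and parameters are illustrative, not any deployed scheme):

```python
import hashlib
from itertools import count

# Each scrypt evaluation with these parameters needs ~16 MiB of RAM
# (128 * r * n bytes), so GPUs/ASICs with little memory per core lose
# most of their advantage over an ordinary CPU.
def solve(challenge: bytes, difficulty: int) -> int:
    for nonce in count():
        digest = hashlib.scrypt(str(nonce).encode(), salt=challenge,
                                n=2**14, r=8, p=1,
                                maxmem=32 * 1024 * 1024)
        if digest.startswith(b"\x00" * difficulty):
            return nonce  # ~256**difficulty attempts expected

nonce = solve(b"per-client challenge", difficulty=1)
print(nonce)
```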
Regardless, you need a device that’s more powerful than whatever the attacker is using.
The article says they target 1 minute solve times under load. If that’s 1 minute on a 5GHz, 64 core machine with 512GB ram, an A100 and an FPGA, then it’s going to be at least 5-15 minutes on your phone.
Also, the server farm can parallelize work across an arbitrary number of challenges, but legitimate users cannot.
Requiring regular users to compute PoW is a terrible idea. Actually it has the exact opposite effect. It will keep the attackers in, and the regular users out.
The problem is that we can't know what counts as a cheap computation without first relying on a marketplace of computation and discovering the price. That marketplace of computation does exist, and it's called blockchain.
I think this just crowdsources the server's load. Servers will certainly have to handle fewer requests thanks to PoW, at the expense of clients' CPU time.
The upside is that the server does not go down, so at least some users will be able to access the website, compared to zero users
>The upside is that the server does not go down, so at least some users will be able to access the website, compared to zero users
Yes but the price is very important. Imagine you visit a country, and paid car rides, (i.e. taxis) cost one thousand dollars per hour. It might be the best ride you have ever taken, but it excludes 99.999% of the users due to price.
The problem is, it is impossible to figure out how much computation counts as cheap without first relying on a marketplace of computation and discovering the price that way. Blockchain technology serves exactly that purpose. The producers of PoW, the miners, sell their PoW to consumers. Consumers bargain over the price by using it less when it's expensive, and more when it's cheap.
The blockchain logic states that:
"Requiring users to give proof of burnt energy -> good idea"
"Requiring users to burn energy themselves and then give proof of burnt energy -> terrible idea"
The article itself lacks details on what the proof of work actually is - based on your answer I'm assuming compute rather than captcha. Are there any example algorithms that are easy to understand? I'm curious to see an algo that's expensive for the client but cheap for the server to verify; I assume it involves reversing an equation or similar?
One of the best-known compute-based PoW algorithms is the one used by Bitcoin.
Basically it goes like this:
You are challenged to use some piece of data given to you, and to add some data to it, which will produce a hash with a given number of leading zeroes.
For example, let's say I challenge you to find a sha3 hash of ("response to codetrotter for comment 37255449 on HN" + any data of your choosing), with difficulty set to 3, meaning that in order for me to accept the hash, it has to have at least three leading zeroes.
The higher the difficulty, the higher number of leading zeroes I ask for from you. Which in turn means it will take you more time to find. Because the only way to find a fitting hash is to try a bunch of different data until you find a fitting hash.
The neat thing is that while it takes a lot of time for you to find a matching hash, it is trivially simple for me to validate your claim when you’ve found a matching hash.
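A minimal sketch of exactly that challenge (assuming "leading zeroes" counts hex digits of the digest):

```python
import hashlib
from itertools import count

challenge = b"response to codetrotter for comment 37255449 on HN"
difficulty = 3  # required number of leading zero hex digits

def solve(challenge: bytes, difficulty: int) -> int:
    """Expensive: brute-force nonces (~16**difficulty tries on average)."""
    for nonce in count():
        digest = hashlib.sha3_256(challenge + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Trivial: a single hash confirms the claim."""
    digest = hashlib.sha3_256(challenge + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve(challenge, difficulty)
assert verify(challenge, nonce, difficulty)
```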
For this kind of PoW, people have developed software that runs on GPU faster than most CPU can do. And then they developed specialised hardware to be even faster - ASICs.
That in turn is where memory-based algorithms come into play. To make the people with GPUs and ASICs not have an advantage over others.
Edit: these are examples of CPU-bound PoW. But the general idea with PoW is that you have some hash-like function H() with no known inverse, such that the only feasible way to determine the output is actually running the function. The client runs H(x) with a different input x every time. If the output is a high enough number, the server lets the client through.
The server runs H() once per submission to verify, which is cheap and easy to parallelize across clients, but in order to get past the server the client must run H() many times on average.
Also, the server provides a salt to prevent the client from reusing their old hashes. And the server usually indicates how high the output of H() must be (this is called the difficulty).
> But the general idea with PoW is that you have some hash-like function H()
No; that's a particular PoW algorithm called Hashcash [1]. There are other, asymmetric ones, where PoW verification is different from a solution attempt, including the Equi-X PoW that Tor is implementing.
As others have commented, it's a shame there isn't a proof of work that doesn't also hurt the planet.
It makes me wonder if there would ever be a way to actually do the opposite - your "proof of work" is somehow linked to extracting CO2 from the atmosphere?
If you sell carbon credits, the money you make doing so is proof that you did it (well, it's proof that you did something of value, which could also count).
We helped secure the bitcoin network and prevented our basement pipes from freezing around 5 years ago (Old building, new heat pumps installed for 1st and 2nd floor. The old basement steam boiler being removed led to really low basement temps). Resistive heat was the only option down there. After improvements (water heater + circulation fan + insulation), we shipped the miner to a buyer from a hydro facility to burn excess watts for the rest of its life.
Don't hate on the BTC crowd, some of us also care about ecological load.
It's a shame that Torproject has decided to reinvent its own wheel, lagging 10 years behind the crypto crowd, instead of integrating with existing coin(s).
You understood nothing and gave your opinion. Congratulations, tell us more about how offtopic you are?
This proof of work doesn't mean cryptocurrency, it doesn't mean coins, it doesn't mean buying or selling tokens. It means proof of work. More exactly, having to put your computer to work in order to solve a puzzle. If you do that, the server lets you in. If you don't, you can't enter.
This is the original proof of work. It's also proof of work when you solve a captcha; it's just a different kind of proof of work, a human, mental one. Here it's a computational one, meaning that in order to access a website a thousand times, you would have to run the proof of work a thousand times, so a thousand times the resources.
I really wish you had given the article a read before saying the Tor Project is a shame. Maybe you are?
You can start educating yourself on cryptocurrencies with the monero case: monero payments were used instead of captcha on an internet forum about 10 years ago.
I agree with you in principle. Wasting energy like this and hashcash do is unfortunate, but that's what happens when you have an irrational hatred toward a technology rather than toward how it is used.
That said, modeling it after a general cryptocurrency is probably a bad idea, since rising prices may prevent legitimate clients from being able to connect to onion services due to the challenge being too expensive (either in terms of computation or acquisition). I think a much more practical approach is to have each visitor contribute a partial solution that can then be combined with others to derive funds (much like how mining pools work). That way, clients are completely isolated from cryptocurrency, and sites can actually benefit from the work rather than just throwing it away. It's a win-win situation.
I hope the next generation learns from our senseless technology-burnings.
>modeling it after a general cryptocurrency is probably a bad idea
Not modelling it after one, but integrating an existing one, because this saves a massive amount of engineering effort.
>rising prices may prevent legitimate clients from being able to connect to onion services due to the challenge being too expensive
Obviously, the price for legitimate clients would be much cheaper, as their requests would be placed in the middle of the priority queue (clients can wait a few seconds), while the attacker has to occupy the very top of the queue all the time. Also note that the bigger the DDoS in this scheme, the bigger the profits the server could make, which it could spend on expanding capacity.
>each visitor contribute to a partial solution that can then be combined to derive funds
This scheme predates monero, which was about 10 years ago.
>the next generation learns from our senseless technology-burnings.
Not if they reinvent the wheel each time instead of adapting existing tech to current needs.
I think it's pretty great that Tor hasn't tied itself to crypto that deeply given how the vast majority of crypto users don't actually give a shit about privacy, only the appearance of it.
Integrating with a coin would defeat the purpose, since getting coins for PoW would make the DDoS financially rewarding, allowing the attacker to outcompete normal users who just want to access the site.
> make it harder for attackers to overload the service with introduction request
> We hope that this proposal can help us defend against the script-kiddie attacker and small botnets.
Sets expectations: does not counter large botnets.
> We hope that this proposal will allow the motivated user to always connect
A user who really wants to connect can get through during a DoS attack, but it may still take work.
Interesting choice of PoW algorithm: https://github.com/tevador/equix
> Hence, instead of forcing clients to go below a static target like in Bitcoin to be successful, we ask clients to "bid" using their PoW effort. Effectively, a client gets higher priority the higher effort they put into their proof-of-work. This is similar to how proof-of-stake works but instead of staking coins, you stake work.
[1] https://gitlab.torproject.org/tpo/core/torspec/-/raw/main/pr...