I have no specialist knowledge in this subfield, but after reading the article's argument that, essentially, even if you could sic the entire Bitcoin network on RSA-2048 it would take 700+ years to break a key, I have to wonder about perverse incentives.
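To make the shape of that estimate concrete, here's a back-of-envelope sketch (the constants are my own illustrative assumptions, not the article's, and equating one SHA-256 hash with one sieving operation is wildly generous, so the resulting number of years swings by orders of magnitude depending on what you plug in):

    import math

    # Classical factoring of an n-bit RSA modulus via the general number field
    # sieve costs roughly exp(((64/9)^(1/3)) * (ln N)^(1/3) * (ln ln N)^(2/3)) ops.
    n_bits = 2048
    ln_N = n_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)            # ~1.923
    gnfs_ops = math.exp(c * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3))

    # Pretend every Bitcoin hash could instead be one GNFS operation (it can't;
    # the real bottlenecks are memory and the final linear algebra step).
    hashes_per_sec = 6e20              # assumed ~600 EH/s network rate
    years = gnfs_ops / hashes_per_sec / (3600 * 24 * 365)
    print(f"~2^{math.log2(gnfs_ops):.0f} ops, ~{years:.1e} years at that rate")

The exponent term dominates, which is why adding a few hundred bits to the modulus blows the estimate up so quickly.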

Another thing that's missing is the expected lifetime of the secrets, e.g. "for how many years does something encrypted in 2030 need to stay unbreakable?"

The author doesn't seem to be a big authority, so they have little to lose by staking their reputation on "you don't need it to be that good," whereas, by the very nature of their authority, anyone behind the resources you'd link is motivated to never be wrong under any circumstances. So if someone with reputation/authority/power to lose thinks there's a 0.001% chance that some incremental improvement will allow fast-enough breaking of 2048-bit encryption created in 2030, within a window where that would be unacceptable, they're motivated to guess high. The authority in this case doesn't directly bear the costs of guessing too high, whereas it could be very bad for, I dunno, some country's government, and by extension the org or people that wrote that country's standards recommendations, if some classified information became public 15 or 50 years earlier than intended just because it could be decrypted.




By "perverse incentives", do you mean something like: "it appears the cryptographic research department has hit a brick wall in terms of useful advancements, so we're reducing the budget and the department head will be taking a 75% pay cut"?


I mean that the incentives aren't aligned. So maybe you're giving an example, but I'm honestly not sure. :)

In the space of CVE or malware detection, the user wants a safe/secure computing experience with minimal overhead, but the antivirus / CVE-scan vendor wants to claim that they're _keeping_ you safe. So they're motivated to tell you all about the things they scanned and the possible attacks/vectors they found. You probably would have been safe responding to only a subset of those alerts, but they have no incentive to minimize the things they show you, because if they ever missed one you would change vendors.

In the space of cryptography, the user wants secure communications that are unbreakable but with minimum hassle and overhead, while the advisory boards etc. are incentivized to act like they have important advice to give. So from the user's perspective maybe it makes sense to use 2048-bit encryption for a few more decades, but from the "talking head" authority figure's perspective, they can't afford to ever be wrong, and it's good if they have something new to recommend every so often. The easiest thing for them to do is keep upping the number of bits, even if there's a 99.99% chance that a smaller/shorter/simpler key would have been equally secure.
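For a sense of what upping the bits buys on paper, NIST SP 800-57 Part 1 gives comparable security strengths roughly like this (sketched from memory, so treat the pairings as approximate):

    # RSA modulus size -> rough symmetric-equivalent security strength in bits,
    # per the comparable-strengths table in NIST SP 800-57 Part 1.
    rsa_comparable_strength = {2048: 112, 3072: 128, 7680: 192, 15360: 256}

i.e. the recommendations jump in big, lumpy steps, which is part of why "just add more bits" is such an easy default for an advisory body.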


I assume you're aware, but for clarity: it's not possible to sic the Bitcoin network on anything else, not even cracking SHA-256 (which it uses internally), because the hard-coded ASICs incorporate specific quirks of the proof-of-work.
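Concretely, the proof-of-work the ASICs implement is double SHA-256 over the 80-byte block header, roughly this (placeholder header bytes, just to show the shape):

    import hashlib

    # Bitcoin's proof-of-work: SHA-256 applied twice to the 80-byte block header
    # (version, prev block hash, merkle root, time, bits, nonce). ASICs bake this
    # exact pipeline into silicon, so they can't hash arbitrary inputs, let alone
    # run the very different math needed to factor an RSA modulus.
    header = bytes(80)                                  # placeholder, all zeros
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    meets_target = int.from_bytes(digest, "little") < 2 ** 224   # illustrative target check
    print(digest[::-1].hex(), meets_target)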


It seemed like the reason to bring the Bitcoin network into the discussion was to anchor an estimate of what a theoretical nation-state actor might plausibly be hiding on the traits of something that exists in the physical universe (instead of reasoning entirely in imaginary-land).



