Rogaway was my professor of cryptography at Davis. Among his peers, he stands out for his focus on the ethics of his work, noticing and calling attention to ethical failings by students and professors alike, as well as mentoring students for their future careers.
He also teaches a class called "Ethics in an age of technology". The reading list is that of a philosophy professor rather than a cryptographer. I could not more highly recommend engaging with this surprisingly "unrelated" material.
Rogaway challenged us in small group settings to explore not just the implications of computers and the internet, but of technology itself on humanity, e.g. the automobile, industrialization, the printing press, etc.
Thank you, Phil, you've changed my life for the better.
Ellul's The Technological Society should be on that reading list. I have found it to be a thorough analysis of what technology is and its impact on society and the individual.
Agree with the general point. It’s one of the few subfields that is about reducing “interop”, or what is allowed to be done within a system. Auth(n,z) are also in that domain, yet perhaps not academic fields in their own right.
Interestingly, this is also true for DRM, which is also political but does not protect individuals, generally. So restricting what “can be done” as a political expression depends on the “for who?”, even if the tech itself is inanimate and neutral.
> even if the tech itself is inanimate and neutral
I've come to believe that no tech is neutral. Some tech allows more variety in the politics surrounding it, while other tech has a very narrow range of politics - like DRM.
Interesting, could you expand on your view about the range of politics afforded by technologies? Specifically, your mention of DRM was intriguing. While I personally dislike DRM applied to consumer devices and content, I think there are applications of digital rights that are less objectionable, like ensuring the integrity of medical device firmware deployed in a hospital.
It depends on the technology in question, but most development has some agenda that drives it or that is enabled by it. Simply engaging in the development of tech is already pushing an agenda - the notion that we should pursue technological advancement.
For me the important realization is that I'm not being neutral when creating something - at best/worst I'm being ignorant. I'd like to reflect more on my motivations for building things and be more explicit on my motivations for doing so.
Yes, you could put it that way. A sorting algorithm has very little impact on politics, whereas, say, fingerprinting or encryption can be leveraged for political purposes.
DRM (and its hipster cousin “remote attestation”) stands out as being exceptionally political because it relies on the hardware being controlled by a different actor than its owner and operator. Similar to “tamper-proof” physical objects, from the old days.
Personally, I've come to the opposite view, though the difference is oddly semantic. The tech is neutral in itself, but people can make absolutely anything political. There are no real constraints on sanity: they can make needing to eat and drink to survive political, because admitting the obvious contradicts the 'great leader'.
Remote attestation is another example of cryptography favoring corporations at our expense. It provides cryptographic proof to corporations that their user hostile software has not been defeated or otherwise tampered with.
Don't forget that you can also remotely attest a server... in which case you can get guarantees on how a corporation is manipulating your data. For instance, you could make sure that your data is reasonably secured and not shared with third parties. So I don't believe remote attestation in itself favors corporations; it really depends on how and where you make use of it.
Sadly the adoption of "trusted computing" on the server side has been slower than expected. Maybe because of lack of user pressure? Also my guess is that most companies prefer not to be too transparent regarding how they process our data...
> So I don't believe remote attestation in itself favors corporation, it really depends on how and where you make use of it.
This is true but also ultimately irrelevant. The truth is these corporations have all the leverage and they use it to force us to accept those terms. If you can get a business to give up its limitless power over their servers via remote attestation, you're probably a business with leverage yourself. We mere mortals don't enjoy such privileges. To them, we're cattle to be herded and monetized.
These days apps will not even start up if they detect anything out of the ordinary. I can't exactly choose not to use my bank's app and it refuses to run even if I so much as enable developer mode on my phone. I used to be able to hack these things and use them on my own terms if I cared enough. Now the remote service will refuse to interoperate unless they get cryptographic proof their hostile software is running unmodified.
It's all about who owns the keys to the system. If we do, it's good. If they do, it's bad. The paper explains it really well: "Cryptography rearranges power: it configures who can do what, from what." These corporations are using it to systematically reduce our power and increase their own. The reverse should happen: they should be completely powerless and we should be omnipotent.
Attestation can be considered a good security feature if we own the keys to the machine. Some PCs let us use our own keys in secure boot. For the people who use it, this absolutely enhances security without sacrificing any freedom. What's unacceptable is corporations using the same technology to protect themselves from the users of the computers.
This is a very realistic take on the subject. From an academic perspective, it is a tool to be used like any other, and gaining knowledge of how to better protect data is worthwhile and provides value to humanity as a whole.
DRM is one use that does not favor consumers; on the other hand, we have encryption being used in apps like Signal to provide the same high-quality software to everyday consumers.
I'm very interested in quantum computers, specifically ones powerful enough to break AES and other types of modern encryption. What will that mean for humanity and individuals?
As far as I know, there are no known quantum attacks that break the security of AES. It is unknown whether quantum computers can render AES insecure the way they break RSA and the discrete logarithm problem. And there are several post-quantum candidates to replace RSA and the other algorithms that would break under quantum attacks.
Quantum computers break several security assumptions. But not all of them and we usually can replace the broken assumptions. Discovering that P=NP, or that one-way functions do not exist, on the other hand, would imply that several secure cryptographic constructions that we want to use are in fact impossible and would be a much scarier discovery.
Grover's algorithm gets you effectively a square-root difficulty improvement. So AES-256 is "only" as strong as 128-bit symmetric encryption if you have equivalently cheap quantum computers.
Even ignoring that we don't currently have any usable quantum computers and there's no reason to believe they would be affordable, let alone cheap, this means AES-256 is fine.
Grover's algorithm is not quite as powerful as popularly described. In particular, it (roughly) divides the number of times you need to run AES by the number of times you run it sequentially. So it effectively square-roots the time requirement rather than the difficulty, and those are very different if the attack is parallel.
Eg if you were planning on breaking AES-128 by running 2^30 cores for 2^98 AES calls, it now only takes about 2^49 calls per core (2^79 total effort) plus the overhead of Grover's algorithm itself. There are also huge overheads from running everything on a quantum computer, some of which are theoretically avoidable (100x cost for error correction; gates take one clock cycle) and some of which are probably not (you must rewrite AES so that all computations are reversible). So breaking AES-128 might eventually be feasible, but AES-192 probably would not be.
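The exponent arithmetic in that example can be checked in a few lines (a sketch of the cost model described above, ignoring the quantum overheads it mentions):

```python
# Cost model from the example above: brute-forcing AES-128 on 2^30 cores.
keyspace_bits = 128
cores_bits = 30

# Classical search: the 2^128 keyspace splits evenly across cores.
classical_calls_per_core = keyspace_bits - cores_bits  # exponent: 2^98

# Grover on each core: sequential quantum search square-roots the
# per-core workload, i.e. halves the exponent.
grover_calls_per_core = classical_calls_per_core // 2  # exponent: 2^49
grover_total = grover_calls_per_core + cores_bits      # exponent: 2^79

print(f"per core: 2^{grover_calls_per_core}, total: 2^{grover_total}")
# → per core: 2^49, total: 2^79
```

The key point is that Grover square-roots the *sequential* work per core, not the total: the 2^30 factor for parallelism comes back in full when you sum across cores.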
There is a theoretical barrier against efficiently parallelizing Grover: all generic quantum brute-force algorithms (the kind that would work against a random function in a black box) require Omega(searchspace / depth) queries, at least for some model that may not quite match reality. (Edit: Omega and not O, since it's a lower bound)
Of course, AES isn't a random function in a black box, so there may be better attacks against it, but I'm not aware of any.
Practical P=NP is a very real possibility (I am just testing a new algorithm) and (IMHO) not that scary. I am of the opinion that the advantages of P=NP far outweigh the disadvantages. People tend to focus on the latter, because that's the world we currently live in.
Even if we focus on crypto only, from the perspective of people being oppressed around the world, it would be far better for them to be able to see what is being done by the powerful publicly rather than not. This is essential for transparency, which is essential for democracy.
I don't reject the ideals of e.g. https://en.wikipedia.org/wiki/A_Declaration_of_the_Independe..., but I believe that the technological solution through cryptography is not tenable, and if we got rid of cryptography, it would actually level the playing field.
Sorry, this sounds like serious crackpot territory.
Btw, if your polynomial algorithm for NP is any good, you should be able to break any encryption at all. The problem of breaking cryptographic systems is typically inside of both NP and co-NP. That intersection is suspected to be substantial smaller than NP by itself. (Of course, if it all collapses to P, that wouldn't make a difference.)
If everybody considers P=NP a crackpot territory, then it will never happen, by definition. On the contrary, I think to believe that the sets described by NP-complete instances are somehow inherently "undescribable" (as is implied by naturalization) is crackpottery also.
But regardless, it's important to realize that modern cryptography relies on a hypothesis. It might be effectively true for now, but it might not in the future.
> Btw, if your polynomial algorithm for NP is any good, you should be able to break any encryption at all.
In theory, yes; in practice, there is a pretty big difference between "I think I just discovered how to do Gaussian elimination to solve linear equations" and "I can routinely solve sparse linear systems with millions of variables". Historically, that gap wasn't closed by a single individual in the span of a couple of years.
> If everybody considers P=NP a crackpot territory, then it will never happen, by definition.
Oh, sorry, that's not what I meant to say. To be less vague, 'Practical P=NP is a very real possibility' is fine. 'Practical P=NP is a very real possibility (I am just testing a new algorithm)' is crackpot territory.
> But regardless, it's important to realize that modern cryptography relies on a hypothesis. It might be effectively true for now, but it might not in the future.
Different parts of modern cryptography rely on different sets of hypotheses. Eg the discrete logarithm problem being hard for some specific groups is one popular hypothesis.
> In theory, yes, in practice, there is a pretty big difference [...]
That's a valid point, but to make that point the approach you defend has to be more mathematically rigorous.
Basically, if you say '(I am just testing a new algorithm)', then that algorithm had better be fast in order to convince anyone. Otherwise, you say something like 'I'm working on a proof for my new approach' or 'I'm working on proving sub-exponential runtime for a new algorithm'.
Ideally, if you don't want to be a crackpot, it helps to be well versed with the literature, and what has been done before and why it does or doesn't work.
(The former is especially important, if you want to claim you have a new proof for P != NP, because researchers have already formally ruled out lots of different approaches; so you better be able to explain why your approach does not fall under any of the ones that have already been proven not to work.)
I see. I didn't really want to claim anything (or much) in this respect. I am working on a "candidate algorithm" that I believe could be in P with low degree (o(n^8)). (And if it's not in P, I think it will be very useful to understand WHY it is not; that's one of several reasons why I am testing it.) The reasons why I believe it's in P are complicated (I would have to describe the method), and I didn't want to go into that. Still, that's why people should take P=NP as a real possibility.
AFAICT my approach is novel, but if some expert genuinely wants to help me understand where it is not novel, I will gladly explain how it works.
> I'm assuming you are working on solving some NP-hard problem?
Yes, my choice of problem is 2XSAT, a SAT class whose instances have clauses from both 2-SAT and XORSAT. I proved (and I think that's where it becomes novel) that this class is NP-complete. This leads me to believe there is a clever generalization of the polynomial algorithms for 2-SAT and XORSAT, and that's the subject of my interest.
The trouble is, the algorithms for 2-SAT and XORSAT are wildly different. My first attempt to unify them, half a year ago, failed. Since then I have made a lot of progress on the theory; now I have another candidate algorithm which generalizes them, and I am about to test it.
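For context on why the two components are each easy on their own (this is standard background, not the commenter's candidate algorithm): XORSAT clauses form a linear system over GF(2) and fall to Gaussian elimination, while 2-SAT is solved by a quite different implication-graph argument. A minimal XORSAT sketch:

```python
def solve_xorsat(num_vars, clauses):
    """Solve a XORSAT instance by Gaussian elimination over GF(2).

    Each clause is (variables, parity): the XOR of the listed variables
    must equal parity.  Returns a satisfying assignment, or None if UNSAT.
    """
    pivots = {}  # highest set bit -> reduced (mask, parity) row
    for variables, parity in clauses:
        mask = 0
        for v in variables:
            mask ^= 1 << v  # XOR handles repeated variables over GF(2)
        # Reduce the new row against existing pivot rows.
        while mask:
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, parity)
                break
            pmask, pparity = pivots[p]
            mask ^= pmask          # clears bit p; only lower bits remain
            parity ^= pparity
        else:
            if parity:             # row reduced to "0 = 1": contradiction
                return None

    # Back-substitute, lowest pivot first; free variables default to 0.
    assignment = [0] * num_vars
    for p in sorted(pivots):
        mask, parity = pivots[p]
        val = parity
        for v in range(p):
            if mask >> v & 1:
                val ^= assignment[v]
        assignment[p] = val
    return assignment

# x0 ^ x1 = 1, x1 ^ x2 = 1, x0 ^ x2 = 0
print(solve_xorsat(3, [([0, 1], 1), ([1, 2], 1), ([0, 2], 0)]))  # → [0, 1, 0]
```

The hard part of 2XSAT is exactly that neither technique subsumes the other: Gaussian elimination says nothing about OR-clauses, and implication graphs say nothing about parity constraints.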
> Yes, that's a good approach!
I know, but it gets even better. I roughly follow what Krom did with 2-SAT in 1967. And it turns out that even if your attempt at a polynomial algorithm turns out to be incorrect, the character of the exponential blow-up gives you at least some hint of what you need to fix.
> I'd be happy to take a look.
I wish I had written more on it, but the research got priority. For two months now, I have been sitting on a blogpost draft that outlines 2/3 of my strategy, as well as half of a proper paper; I was waiting to see whether I could somehow conclude with a candidate algorithm, but I don't yet have that written down in a formal way (it's new, so some understanding needs to happen). I'll probably publish the blogpost as it is, and the actual algorithm (if the test implementation works out reasonably well) later as another blogpost. If you're still interested, I will send you an email when I publish the first part, probably this weekend, and you can tell me what you think.
Yes, I'm interested. There's an email address in my profile here.
My academic background is more on linear optimisation (and integer and combinatorial optimisation), but looking more into the various flavours of SAT would also be interesting.
I'm not sure whether I agree or not. Without cryptography, the only way to have a private discussion with someone would be to have the conversation in person behind closed doors, which isn't always easy (think of people living far apart). And I think privacy is equally important to transparency: people are usually "transparent" about personal matters only with close friends and family members, and only on the condition that other people don't have access to the information. Consider also that having no cryptography isn't by itself enough to have a transparent government: they can still lock important documents in a physically secure place and have a long jail sentence as a deterrent for people thinking about leaking them.
> Consider also that having no cryptography isn't by itself enough to have a transparent government: they can still lock important documents in a physically secure place and have a long jail sentence as a deterrent for people thinking about leaking them.
But here you are saying that security can depend on the social contract and not on technology. The same argument can be applied to your original objection: we can mandate (in the social contract) that people have the right to privacy. (And there are some analogs where we do exactly that. For example, we could have a Gattaca-like dystopia where people's access to health care is based on genetics, yet all developed countries ban such discrimination.)
I think it's far more dangerous that the government or an adversary is technologically capable of being non-transparent than that ordinary people lack the capability to be non-transparent. It's because I believe power ultimately thrives on information asymmetry, and encryption only amplifies this asymmetry.
> It's because I believe power ultimately thrives on information asymmetry, and encryption only amplifies this asymmetry.
Quite the opposite, I find. In fact, the government has the infrastructure and political power necessary for surveillance of people. The reverse is definitely not true. Regardless, assume you are in an authoritarian state with a powerful state police: how would you even attempt fighting it if you do not have a way of coordinating with the other people being oppressed? How can you trust communications if the government can interfere?
I simply don't believe that in an actual authoritarian state, private (Internet) communications are a necessity for fighting back. I was born in Czechoslovakia, and if I look at what actual dissidents did, it seems that once your goal is to fight back, you just have to get used to the fact that you're going to be surveilled no matter what. Because if you want to convince many others that the system is oppressive, you will have to make yourself known. Vaclav Havel discusses some of the strategies in The Power of the Powerless, and it also goes back to Gramsci, who observed that it's mostly ideology that keeps the systems of power in place.
I think privacy is important in a democracy, but if you're taking on an authoritarian state, the fact that some policeman knows what you had for dinner yesterday is the least of your problems.
To clarify: The harsh truth is, in an authoritarian state, the police doesn't really need your communications; they already know what you're up to: no good. That's enough for some of them to justify any action they want against you.
If you really want to be pedantic, you should recognize there is a difference between technology as implemented (the social contract as it is implemented now) and the choice of technology (all the different ways the social contract could be different, reflecting history and geography).
This is an insane take. If encryption is broken I _guarantee_ that the public will still be as in-the-dark as they are now. The government (and telcos) are the only ones who control the infrastructure needed to inspect this kind of traffic.
I am not suggesting that, in an encryption-free world, the advantage for the public is in being able to inspect all government traffic. Rather, it makes it easier for whistleblowers to bypass the government protections and get relevant documents that cover some atrocities.
I really like Assange's essay on the topic, where he describes that every big operation requires a sort of paper or document trail (which is used to coordinate many people), and this fact makes whistleblowing always an option.
However, I believe, encryption (and related things like trusted computing) helps to minimize the exposure of this trail even to internal actors, and in doing so, it makes whistleblowing more difficult.
I thought about something related quite a lot. I was developing a cryptographic hash^1, and some of the reactions were entirely dismissive. Hostile, as if (not against me per se, but) almost against the idea of developing crypto outside of the Church. As if Scripture forbade such a blasphemy even being considered, or as if making new tech were a Denial of Service against the Priests who need to review every Miracle and Revelation for sanctified inclusion within the Canon. Or whatever.
I formed my thoughts into a deliberately indirect dissection of this, and now I repost here:
Thanks for your input! I hadn't realized my post had made it to the subreddit since it was immediately removed, so it slipped my mind. Anyway, it's not really related to what you said, but I'll take your remarks as a starting point to riff on / a springboard for a thought that's been brewing:
Hearing you say that seasoned cryptographers wouldn't even bat an eye at this surprised me. I'd thought those well-versed in this domain could glance at an idea, scan the smhasher results, and almost intuitively know whether there's potential there or not. I imagined they'd have a sort of gut feeling, something like "this holds promise" or "this won't cut it," directing their decision to delve further. So, why is it such a challenge for non-experts to make substantial contributions? It seems as though there's an unspoken rule prohibiting it.
I suppose this isn't unique to cryptography; it probably exists in any specialized field, like bioinformatics. Yet, there's a distinction - when an outsider steps into a bioinformatics forum with innovative ideas and some groundwork, they're likely to find a receptive audience.
But with cryptography, it feels closed off. It's as if only certain individuals are permitted to work on specific topics within established parameters, and only an elite few are qualified to assess this work. It's reminiscent of religious doctrine where deviation is considered heresy and swiftly dismissed. This leads me to question who's guiding the crypto field to limit creative contributions and why? Who stands to benefit from curbing the development of new algorithms and preventing their widespread adoption? Who might be disadvantaged by a sudden surge of new, potentially powerful crypto algorithms?
It's a challenging balance to strike. Personally, I think the current system could be improved. As an impartial observer with no stake in the outcome, I see this as an intriguing creative outlet. I'm not advocating for a revolution and frankly, I'm not particularly concerned if things stay as they are. However, I do believe that when a field becomes too closed, we all stand to lose.
Here's the intriguing part: while the field is theoretically open, in practice, it mimics a closed one, similar I suppose to the restrictions and veil of esoterica surrounding nuclear technology. But nobody openly discusses this closed-off nature, which only adds to the strangeness.
I understand why it's structured this way, but I can't help thinking there could be a better approach. Given the diverse interests involved, it's challenging to identify what that might be. Surely, I can't be alone in thinking this way, right?
The problem with developing your own cryptography is that it's easy to make something you can't crack but which has holes an experienced cryptographer could drive a bus through. A false sense of security is worse than no security at all. Plus, of course, most people who claim to have a Bold New Encryption Algorithm are selling the absolute worst kind of snakeoil, like XORing the input with a single fixed-length key over and over again, because they know most of their victims will be even less knowledgeable than they are. The outright frauds poison the make-your-own cryptography field for the honest ignoramuses.
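To make the XOR example concrete (a toy illustration, not any particular product): with a short repeating key, XORing two ciphertexts cancels the key stream entirely, so an attacker who knows or guesses one plaintext recovers the other without ever touching the key.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """'Encrypt' by XORing the input against a short repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"                      # the "secret"
p1 = b"attack at dawn"
p2 = b"\x00" * len(p1)            # a known (or guessed) plaintext

c1 = xor_cipher(p1, key)
c2 = xor_cipher(p2, key)

# Same key stream in both ciphertexts, so c1 ^ c2 == p1 ^ p2 == p1.
recovered = bytes(a ^ b for a, b in zip(c1, c2))
assert recovered == p1            # the key was never needed
```

Even without a known plaintext, the repeating structure leaks: English text XORed against a 3-byte key is trivially broken by per-position frequency analysis. This is the kind of hole "an experienced cryptographer could drive a bus through."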
The field isn't closed off, either. It's just founded on mathematics that's beyond what the average untrained or semi-trained person reaches, especially the ones who feel entitled to have Big Opinions on the subject. However, if some can't grasp the mathematical underpinnings of contemporary cryptography, their opinions on how cryptography ought to be done are worth less than nothing. Mathematics isn't a democracy; ignorance doesn't get a vote.
This is the old Schneier textbook defense, that anyone can make a cipher they themselves can’t break, therefore don’t try, yet it doesn’t hold water here.
Not that it’s not true when applied to a certain group of oil de serpenthe salesmen, or to noobs making toys, but it obfuscates the reality of outsider talent by unhelpfully conflating complete frauds with budding learners with enthusiastic amateurs with experienced amateurs.
And reinforces the notion of a priest class, upon which amateurs, encountering the same, should abandon their labors and despair.
It’s also a surprisingly hostile way to treat interested budding cryptanalysts and cryptographers. It is a kind of hazing designed not to forge bonds but verily to discourage.
As in: We only want insiders creating crypto and there’s a reason for that; and in the main it’s not about protecting against flawed security; in fact, this reason is about protecting our ability to break.
Protecting this crucial and vital security mission of the high priests at the top of this hierarchy (who nevertheless veil their faces in the shadows: the mages’ hood) is the reason for the weird, captured nature of this field.
And because this little aphorism comes from The Book of Bruce, none dare question for fear of being tarred a heretic and cast out. Luckily, for me, an outsider has no such fear: being already on the outside. And hence is revealed a weakness of the little system of forced organization.
Aside from that, such discouraging attitudes and aphorisms obscure the fact that analyzing new primitives is difficult.
And they conceal the truth that it’s not so much that tech created by amateurs has no merit, it’s that it may. And this “maybe” makes the job that much harder for the code breakers; the mages; the “surveillance state” that the posted HN article, under which I am adding this comment, rails against.
The code mages would much rather everyone stick to a predefined set of spells that follow the designated parameters of good sense and civility, which I suspect coincides not a small amount with the technical exploitation window of the secret code breakers. Indeed, some are secretly designed to be broken: one-way designs, unbreakable unless you possess the kernel, upon which a more elaborate design was scaffolded for release. The designer-in-secret retains the secret kernel.
It seems I’ve had to be more direct than I’ve initially desired to explain my point.
It’s hard to do cryptanalysis, and the experts who do it have limited resources, so they don’t want to deal with a deluge. So they architect an academic culture empowered with these cutesy aphorisms to further those ends, as well as to capture, guide, curtail, and direct the field’s search from the shadows. Coercive funding withholding is but one means. The strongest is a culture, and acolytes with a fanatical devotion to the orthodoxy, that sustains this control.
I understand the complexity of the balance that needs to be struck, but I think with these deceptively closed and captured fields, we all lose out.
At least nuclear secrecy is open about just how closed it is and why: it’s a weapon a small club of states wields, and no one else.
Crypto in reality is the same. It’s a shield wielded by the state, not by the public. The public gets neither security nor privacy, but what the regulator deems them safe to have.
This situation cannot be admitted because it dispels the illusion that the field is open and not captured. And also the illusion of privacy and security not being monopolized by the state.
I’m not sure of the exact best balance, but I don’t think the current system is the best.
Perhaps a better balance is crypto being more like nuclear secrets, where it’s open about how closed it actually is rather than deceptively, in the vein of the snake oil salesman, posturing a security that is not, in fact, delivered.
If not ignored, this notion will unfortunately, and understandably, provoke furious reactions both from those who depend on the belief and the security, and from those in the captured industry, who would rail against it but cannot, and so turn their frustrated expression toward any who expose their embarrassing constraint.
I have no vested interest either way, except in the good of everybody... and I think it’s high time for this discussion to be advanced in a better direction. Government transparency and accountability depend on clarity with, and consideration towards, the public who depend on them.
> And reinforces the notion of a priest class, upon which amateurs, encountering the same, should abandon their labors and despair.
The priests are mathematical algorithms and their judgements are based purely on the ability of candidate algorithms to perform at the current standard or higher. There are no "code mages" in crypto. Anyone is free to contribute.
What you need to understand as an amateur coming into the field is that there are already rigidly tested standards that people depend on and have spent many, many hours thinking very critically about to ensure they are mathematically sound. At this point there are extremely high expectations for minimum levels of entropic distribution.
This is not an art project. New ideas must absolutely perform at a higher standard before anyone will even consider it. You are competing with what exists today and it will be 100% on you to prove that you are setting a higher standard. Everyone will follow you if you can prove this, I assure you.
This confuses (or may just respond to only one of) two things I’m saying: the field is kept artificially weak by the priests, and new designs by outsiders are deliberately kept out. These are connected.
What you’re talking about, it seems, is just genuine competence and performance. I’m fine with that. I’m okay to compete. But it’s not just competition we’re dealing with; it’s artificial bias, designed to keep the field weak.
With crypto hashes it’s more than just entropy: there are many hashes that sit at the top of good entropy performance on smhasher results, but that’s not enough for crypto: you need analysis. And for analysis you need experts.
But if I were worried about just competing on performance, which I’m not (many of my hashes have excellent entropy results), then your piece would be encouraging; it’s a good pep piece! Well done!
You haven't proven that there are better designs out there being kept down by the priesthood, just waved your hands in the direction of Grand Conspiracy and anointed Bruce Schneier a member in that supposed priesthood. The thing is, cryptography isn't kept obscure. It's a mathematical discipline welded to the practical discipline of actually writing software, but so are other topics in that part of CS, such as compression and formal verification.
But here's the nub of my point: It is hard, though, and it's harder than it looks, and this matters more for cryptography than compression because of the things people do with cryptography. If you design a bum compression algorithm, you make files you can't really decompress and you get discouraged. If you make a bum encryption algorithm, and you get all excited and actually use it for something important, and it's easy to break, well, you might get a lot worse than discouraged. Major companies have been bitten by this. Is your algorithm better than HDCP? Probably not. You know what happened to HDCP? Yeah. Intel wishes it had taken the kind of advice any new cryptographer gets about not taking their own inventions too seriously.
Unlike in many fields, there is no value in a cheap local substitute for cryptographic algorithms. Indeed vanity ciphers (many smaller powers wanted their own symmetric ciphers for national pride reasons) fell out of favour because they're just worse in practice.
So the competition for your Cryptographic hash isn't some idea by a guy from the next town, it's SHA-512/256 and maybe SHA-3.
Try something else: should the US Government have a department which vets architectural plans for a new Capitol building? Sure, the US Capitol is a pre-Civil War building, and there's no reason it would be torn down and rebuilt or replaced as a whole, but surely it's unfair to have just one dead guy design the Capitol when, who knows, Suzy, a ten-year-old from Maryland, who has no architectural training and mostly just thinks everything should have ponies on it, might have a better design? Why are there so many Gatekeepers? Suzy's design for the Capitol might be great; it's unfair that she's not considered just because she isn't an expert and they don't need a new one designed.
There are indeed established standards, just a few for each area like hashing, authentication, encryption and so on -- one of the things I'm saying.
However, the idea that work is almost complete in crypto, and therefore new work is unneeded, is false (and likely an artefact of the order I'm describing). Take for instance this current work on ZK hashes: https://www.zellic.io/blog/algebraic-attacks-on-zk-hash-func...
With all due respect, it appears your basis for believing this was submitting to Reddit. Ten minutes of web search seems to indicate you're a guy from Australia who ten years ago was listed on LinkedIn as an unemployed "Hardcore Programmer" looking for work in Hong Kong, then at some point you started a company called Irrzl, LLC that aimed to create a "post-scarcity, space-traveling human population with arbitrarily extended lifespans." In order to fund your own research, you started businesses selling "trade-secret algorithms" while anonymously employing arbitrary people who had no salaries but could request to be paid whatever they wanted on a weekly basis, and now you have some other company Dosyago Corp with an address in Beaverton, OR that is a strip mall with a mail scanning service where you can have mail sent to you without revealing your actual location, so it isn't a real business address.
I really don't want to come across as mean, and I know what your response will be already, but everything a person can find out about you within a few minutes screams crank. You come here wondering why all these gatekeeper mathematicians in their cathedrals won't take you seriously, but it doesn't appear all that mysterious to me. You may very well be what you say you are. You'll live forever and colonize the galaxy, and along the way, also provide better cryptographic hashing functions. But you have to clear some kind of minimum bar for why a person should even look at your work. Most of these mathematicians are presumably not thinking they're going to live forever, so they only have so much time to spend considering everything a person on the Internet thinks they should consider. Whatever heuristic they're using may be as imperfect as Goldman Sachs throwing away any unsolicited resume that doesn't show an Ivy League degree, but they need some kind of heuristic.
> Cypherpunk-styled creations — think of Bitcoin, PGP, Tor, and WikiLeaks—were to be transformative because they challenge authority and address basic freedoms: freedom of speech, movement, and economic engagement.
I know it's not a widely accepted view on HN, but you shouldn't downvote just because you hate cryptocurrencies. Bitcoin and Ethereum (the latter of which has switched to proof-of-stake and now consumes a negligible amount of energy) are actually two semi-successes of the cypherpunks. They were created as challenges to authority and released as free for anyone to use.
What happened next is open for discussion but I don't think the intentions were bad.
The intentions were not malicious, but they were poorly thought out. The way these coins either trend towards matching stock market trends or going to zero was inevitable based on their design.
I reject the idea of cryptocurrencies in their current form because they only offer an illusion of freedom from a corrupt fractional-reserve financial system. The systems built on top of this infrastructure attempt, at their roots, to turn everything into a commodity. It's an inherently flawed principle, and I think many people who frequent HN are smart enough to see that.
This and "Security Engineering as Caring-For" [0] have been the guideposts in my security career, helping me think about what I should work on (or not) and reminding me why I keep working on bad days.
Essays like these aren't just moral admonishments; they offer a way to find deeper meaning in your work or figure out what work would have meaning for you. Personally and politically, there can be so much more to your tech career than just "puzzles and math."
I take a more skeptical view of the relationship between politics and technology (not specifically on cryptography, though I think my point stands in this context) in a recent (scholarly) article:
Abstract: A consideration of the political meaning of software that tries to add greater philosophical precision to statements about the politics of tools and tool building in the humanities. Using Michael Oakeshott's formulations of the "politics of faith" and the "politics of skepticism" [Oakeshott 1996], it suggests that while declaring our tools to be morally or politically neutral may be obviously fallacious, it is equally problematic to suppose that we can predict in advance the political formations that will arise from our tool building. For indeed (as Oakeshott suggests), the tools themselves give rise to what is politically possible.
While I am not a professor, apparently I get to say I was a cryptographer according to this paper (albeit accidentally). Normal software developers take on pressure to manage technical debt, where engineers are pressured to trade safety and reliability against cost. In finance, they get pressure to fudge models to hide risk. In security and cryptography, the pressure is the ethical pressure to be the one who takes the shortcut only you and your business sponsor understand, which weakens the scheme in favour of some other economic purpose, or worse, a political one.
To work professionally in privacy and cryptography is adversarial to the point of being gladiatorial, where there are very serious (and sometimes dangerous) interests involved. This work is not for the meek.
The moral aspect of it is that you need a belief in truth and the value of personal integrity that would sabotage most other careers. When you design security protocols, you are engaged in governance by other means. With real government and the economy using the internet as its substrate, using math and technical reasoning to moderate their more extreme urges is a kind of moral responsibility. Not to pontificate too much, but when I was a kid wanting to become a hacker, it was because it represented a way to be an ethical steward operating outside these systems. Not a gatekeeper or spoiler, but someone whose skills maintain a balance. In a career as a hacker working on these problems in govt, I did good work that I think has kept some specific totalitarian urges in check, by depriving them of the certainty and impunity an abuser requires. I tell younger people working in Privacy in govt that it is the only place in the public sector where demonstrating courage is the job.
This is a great paper, and almost a decade later I would observe that if you want to really make a difference in the world, where you can secure the freedom for yourself and others to really flourish and be a benefit to humanity, finding ways to practice the triad of math, courage, and compassion together is the indomitable x-factor. Each without the others is useless, but the person using them together is often history's most decisive actor, imo.
https://web.cs.ucdavis.edu/~rogaway/classes/188/spring23/