You can have your opinion, but I have mine after reading lots of research papers. Obviously, there are ways things can still be improved. But even systems designed to function in the face of incompetence and adversaries will run into the intractable problem of incompetent leadership and governance. It's one thing to say "this is how it should be done" and another to say that we've managed to get people to do it the way it should be done. No matter how much improvement we see in ideas for how it should be done, we're still no closer to solving that second part, despite much effort. In the end, there will always be incompetent leaders in charge at some point, for some project, somewhere, and more often than we'd prefer. You can lead a horse to water, but you can't make it drink.
It’s not an opinion, and you can cop out with lame “what if the government is Hitler” arguments, but resilient systems are definitely an engineering/science/math problem.
The entire field of cryptography wouldn’t even exist if the boundary of research ended at “good actors”.
And yet we still have people out there trying to create their own cryptography, when the golden rule is not to roll your own crypto. For whatever reason, best practices don't get followed 100% of the time, even if they exist. For cryptography, the situation is better than most other domains. I think we're having different conversations here. You seem to be having a technical conversation. I'm having a sociotechnical conversation within the context of organizations and their workers and managers. I'm seeing you discuss technical solutions to sociotechnical issues, which is not what I am discussing. Even when the technical ideas are perfect, organizations still need to implement the ideas. That implementation tends to not follow allegedly perfect specifications for many reasons.
But you are entitled to your opinion and that's fine. We can agree to disagree, nothing wrong with that.
> For whatever reason, best practices don't get followed 100% of the time, even if they exist. For cryptography, the situation is better than most other domains.
That’s the point. Cryptography is significantly better precisely because the research effort has gone into systems that are hard for people to fuck up.
Just look at the fight it took to get everyone to agree that the model should be, “everything, including the algorithm, should be public, except for the key” (Kerckhoffs’s principle). That’s a sociotechnical argument.
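To make that model concrete, here’s a minimal sketch using Python’s standard library. The construction (HMAC-SHA256) is completely public; the key is the only secret. The key and message values are made up for illustration.

```python
import hmac
import hashlib

# The algorithm is fully public; security rests entirely on the key.
# (Key and message below are made-up illustration values.)
key = b"this-is-the-only-secret"
message = b"meter reading: 1042"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Someone who knows the algorithm but not the key cannot forge a valid tag.
forged = hmac.new(b"wrong-key", message, hashlib.sha256).hexdigest()

assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
assert not hmac.compare_digest(tag, forged)
```

Publishing the algorithm costs nothing; everything rides on keeping `key` secret, which is exactly the model that fight was over.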
> Even when the technical ideas are perfect, organizations still need to implement the ideas. That implementation tends to not follow allegedly perfect specifications for many reasons.
And that’s why making safe systems where mistakes are protected against is a critical area of research.
Rust is popular because it protects against whole classes of bugs, despite it being no faster than C/C++.
You fail to address the fact that none of this resolves the issue that people still don't follow best practices. This is especially true the further an organization is from best practices. The reason cryptography is significantly better is not that the best research has been done. It's that cryptography work is impossible to do without the best people, because it's that hard. Drop cryptography engineers into this Post Office mess in the middle of the implementation and they'd say, "Screw it, you can't pay me enough to deal with this crap." They'd leave to go work for an organization that respects them enough to listen to them and still makes it worth their while.
You can talk about best practices all you want. You're not going to get most organizations to afford the best people capable of following best practices, or to convince them to come work for them. And even if they did, those people would leave before they could change the technical culture. No effective person would put up with the insanity that exists in subpar organizations, many of which continue to exist in spite of their incompetence for many other reasons.
You have no concept of working in the real world, where people who suck exist. You talk like you've only ever worked with all-star Linus Torvalds types. Of course it's easy to do what you're recommending when you're working on the Linux kernel, at a FAANG, at startups with competent founders, etc. All those organizations are able to do what you recommend for reasons that simply don't apply to most other organizations.
You're a Xoogler working with startups. I get it. You're in that world. You have no idea how to fix an organization like the British Post Office so that it will do IT competently.
You need to take a step back and take a breather. The people of the British Post Office whom you deride use state-of-the-art encryption every day for things as trivial as putting a heart on an Instagram video.
They didn’t have to do anything more than sign in to their account and they got better security than the military leadership of WW2.
It’s easy to lose focus and treat “dumb users” as an undefeatable entity, but that’s the lazy way out. The most significant advancement in cryptography was asymmetric crypto, which explicitly meant a moron leaking the public key didn’t mean shit.
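For illustration, here is textbook RSA with deliberately tiny, insecure toy primes, purely to show that the public half of the keypair can be leaked freely (real systems use vetted libraries and large keys; every number below is a toy value, and the modular-inverse form of `pow` needs Python 3.8+).

```python
# Textbook RSA with tiny, insecure toy primes -- illustration only.
p, q = 61, 53
n = p * q                 # 3233: part of the PUBLIC key, fine to leak
e = 17                    # public exponent, also fine to leak
phi = (p - 1) * (q - 1)   # 3120
d = pow(e, -1, phi)       # 2753: the private exponent, the only real secret

m = 42                    # a message encoded as a number < n
c = pow(m, e, n)          # anyone holding (e, n) can encrypt
assert pow(c, d, n) == m  # only the holder of d can decrypt
```

A “moron” publishing `(e, n)` to the whole world changes nothing; the design assumes the public key is public.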
I don't know what to say here. You don't understand the issues and you refuse to believe that such issues exist. We should agree to disagree. But I'll give it one last stab.
You can't just drop an Instagram-like app into the British Post Office and then everything's great. Instagram works as a standalone app that doesn't need to comply with any exogenous processes or standards. It sets the standard process for itself, and all of its users adapt to it. There is no way to design an app outside the British Post Office that will fulfill its needs, drop it inside the organization, and expect it to work. Even if you forced the organization to reorganize itself to adapt to the app (which happens a lot), problems would be inevitable.
Even Office 365 or Google Apps, stars in the SaaS space, require internal administration and customization when used inside organizations, and they can be misused. Something as simple as: this person should be in this security group but not in that one. Such misconfigurations are inevitable because organizations are messy.
People smarter than you and I have been trying to solve the problem of good IT governance for decades and have so far failed. And the problem has nothing to do with the quality of the software engineers who make the product, nor with their technical decisions. It is orthogonal to the real issues. The fact that dumb users can use encryption today without realizing it has zero implications for solving the issues organizations actually face. It has zero implications for how they use their technology. The only thing it's done is make the technology more trustworthy for transactions of information; it did nothing to change work habits, decisions, or perceptions about technology. We know because there are studies on how people interact with technology.
These are not technological problems. They are human problems. Things as simple as "I am petty and don't like that employee" or "I'm gonna make sure that my friend gets to have sole responsibility for that app's strategic focus, even though he knows nothing about how to do that department's work, but he's my friend" or "I need this political win and that's more important than hiring the right technology experts or implementing the right feature the right way." They're simple problems to express but intractable to solve. They're intractable because they're emotional and irrational, spawned by people who need therapy. And most often, the people with these problems aren't stupid or dumb. They're actually often smart, which is why they're also often in the position to make the wrong decisions for the wrong reasons. And then it trickles down throughout the organizational culture.
Technology cannot solve this simply because technology can always be discarded or misconfigured, despite the technology's design. Nobody can force an organization to use a technology, especially when it doesn't have the necessary experts to implement it properly. The biggest problem was that nobody at the British Post Office cared about quality control of the system. Bugs continue to be found in cryptography. They're rare, but they are found, and then they are patched. A lot of organizations don't care, and then they have security holes simply because nobody cares about patching. Such lack of care extends beyond cryptography. Automatically renewed certificates from services like Let's Encrypt don't solve this problem, because the problem runs deeper than keeping a certificate up to date or running automated security patching. Certainly, nobody can force an organization to use Let's Encrypt. Nobody can force an organization to keep the right people in the right security groups. Nobody can force an organization to disable network accounts for employees fired for embezzlement. Nobody can force an organization to care about documenting, reporting, and fixing bugs.
Being right on a technological level has no bearing on whether an organization makes the right decisions overall. Most of the most important decisions aren't even directly related to technology. Even if the easy solution is as simple as using a SaaS app as simple as Instagram, the dysfunctional ones will say, "Screw that, I want my bonus, or I want my job security, or whatever. I'll make sure we never use that Instagram-like app, or anything like it." Or worse, they'll try to use it with the best of intentions and still screw it up massively when they deploy it for employees.
If you can't accept that possibility, we have to agree to disagree.
>You don't understand the issues and you refuse to believe that such issues exist.
I do. People are fucking hard to deal with, and the entire field of cryptography is a technical solution to that. If you were at the helm, it wouldn’t exist.
>There is no way to design an app outside the British Post Office that will fulfill its needs, drop it inside the organization, and expect it to work.
Yet it does every day and you just take it for granted. What voltage electricity do they use? Do they use phones? Do they use email? Do they use TLS? Do they use standard plumbing?
>These are not technological problems. They are human problems.
Human problems are for technology to solve. Key exchanges like Diffie–Hellman, which establish a shared secret over a channel anyone can snoop, exist entirely to deal with humans and the problem space they create.
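A toy sketch of that idea, with deliberately small, insecure parameters (real deployments use vetted groups or elliptic curves and authenticate the exchange to stop active attackers; the modulus and generator below are toy choices):

```python
import secrets

# Toy Diffie-Hellman over a small prime -- illustration only.
p = 4294967291   # a small prime modulus (toy choice)
g = 2            # generator (toy choice)

a = secrets.randbelow(p - 2) + 1   # one side's private value, never sent
b = secrets.randbelow(p - 2) + 1   # the other side's private value, never sent

A = pow(g, a, p)   # exchanged in the clear
B = pow(g, b, p)   # exchanged in the clear

# Both sides derive the same shared secret even though an eavesdropper
# saw everything that crossed the wire (p, g, A, B).
assert pow(B, a, p) == pow(A, b, p)
```

The whole design exists because humans cannot be trusted to move secret keys around safely; the protocol removes that job entirely.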
>Technology cannot solve this simply because technology can always be discarded or misconfigured, despite the technology's design.
Again, this is wrong. If the organization interacts with the public they are forced to interact with interfaces defined by browser manufacturers. It’s getting extremely difficult to run an insecure website that takes passwords without being flagged as such by systems outside of the organization’s control (e.g. chrome).
You can’t just refuse to implement SMTP authentication, DKIM, and SPF anymore and expect to be able to email the rest of the world.
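For reference, that enforcement looks like publishing DNS records such as this hypothetical SPF TXT record for a placeholder domain; receivers reject mail from servers the record doesn’t authorize:

```
example.com.  IN  TXT  "v=spf1 mx include:_spf.example.com -all"
```

The trailing `-all` tells receivers to hard-fail anything not listed, so an operator who skips this simply stops being deliverable.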
Your view is of the world as it existed during the 2000s. A giant buffet of technological choices, some good and some bad. That world is gone.
We now (for better or worse) have minimum bars set by a cabal of large internet companies if you intend to interact with the general public. Safari will not lower the bar for accessing the postal service, no matter how bristled the mustache of the politician who requests it.
The entire field of research you claim doesn’t exist and doesn’t matter is what drives this cabal forward. They remove footguns every day and, with every release, take the reins further out of incompetent operators’ hands.
Your argument seems to be, “I’ve thought of a case where humans can screw up, so therefore the entire field devoted to taking away their ability to screw up is invalid.” It’s completely ridiculous, and I’m pointing out that the field is both active and making major differences.
If you don’t want to participate that’s fine. But telling yourself it’s because it’s an unsolvable problem is just a lie.
Look, you're not able to say how you would get subpar organizations to solve internal system problems. I see what's happening here. You're restricting all your technology examples and use cases to those where organizations need technology to interact with other organizations in a trustworthy manner. There needs to be a counterparty that matters and that will give feedback when something is failing. Cryptography enables that. Email security mechanisms enable that. The problems the UK Post Office experienced, and that many other subpar organizations experience, are not those problems. Why you are talking about oranges when the discussion was about apples is beyond me.
None of your solution philosophy would have prevented what the UK Post Office experienced. Their issues had nothing to do with interacting with the public. Their use case was an internal black box where things went to hell, and they refused to see that things had gone to hell because everything looked like it was working.
You have no idea that we're talking about different things. You can refuse to implement SMTP authentication, DKIM, and SPF and simply not email the rest of the world. That is possible when you don't care about communicating with the rest of the world and you only use the system for something it wasn't meant to do. And then weird phenomena emerge from that. Or you are emailing the rest of the world but have no clue that you're having one-sided conversations, because you don't care about replies. There are so many ways your assumptions can fall apart, but it doesn't matter, because they're not even trying to use the darn thing for email.
You can design perfect technology in your perfect world, but you can't force people to use it the way you planned. In all your examples, you assume that people will use the technology for what you designed it to do. Those assumptions mean nothing when they use your technology for something you didn't expect and it seems to work for them anyway. And there's no counterparty they actually care about telling them otherwise. You can try to explain to them that it's not working. You can even show evidence that their outputs aren't matching what they claim they want. But they won't listen, because to them, it looks like it is working.
You need to stop bringing up examples and use cases that have nothing to do with the problems the UK Post Office was actually experiencing, and that all subpar organizations experience. Their problems are not the problems you are trying to solve in your logic. And even good organizations experience the same problems, just to a lesser degree.
> The entire field of research you claim doesn’t exist and doesn’t matter is what drives this cabal forward. They remove footguns every day and, with every release, take the reins further out of incompetent operators’ hands.
I never said it didn't exist. I am saying it's not related. Again, the entire field of research you champion solves a problem that is completely different from the one the UK Post Office experienced. If you fully understand what happened with the UK Post Office and then offer viable explanations of how your ideas would have prevented their problems and achieved their organizational goals, I will say I'm wrong. Hey, I'm not so arrogant as to say it's impossible. But I will say that everything you've said so far is unrelated to what their problems actually were at a root-cause level. To say otherwise is a lie.
To be fair, I used to think like you. It was because I believed we could create technological solutions to human problems that I didn't understand why organizations didn't just do technology properly. After diving into the research literature, I realized that's naive. I no longer think like you. It is more complicated than what code alone can resolve. If you can't provide a solid analysis of how your ideas would have prevented the UK Post Office's problems, we really need to agree to disagree.
Surely the field of cryptography relies on conscientious and competent actors developing solutions that are robust in the face of malicious actors.
I am skeptical that there are software development practices that will allow me to hire a team of feckless incompetents and have them develop quality software. If you know of any I'm interested to hear about them.
That’s not necessary. The broader system of society, communication, and the encouragement of developing open solutions to this problem means your team of incompetents doesn’t even need to solve them.