Software will never be perfect. We can get closer and closer, but the closer we get, the more expensive the next step toward closing the gap becomes.
While I do agree that making software better/more reliable is a good goal, I believe we would be better off making the system as a whole more robust; the system that includes humans. For every situation where a piece of software has control of something that affects society (an individual, a group, etc.), there should always be a clear and direct means of appealing / pushing back on the decision that was made. Those means should involve a human reviewing the information and making a decision based on that information, not on what the computer said. There's thread after thread of us saying the exact same thing about companies like Google and Facebook; it should apply as a general rule.
No one is arguing that software must be perfect. But we aren't really even trying. Most software is written in extremely error-prone languages without adequate testing.
You don't hear anyone saying we should throw out finite-element analysis and other computational verification methods when designing bridges just because bridges can never be perfectly safe. Yet that is exactly the sentiment I often hear about software.
Are bridges and software comparable in complexity, though? How many engineers work on designing a bridge, compared to the number of software developers writing a program? There are more than an order of magnitude more software developers in the US than civil engineers.
Now think about the state of US infrastructure. Does it inspire confidence for the future?
I'd say a lot of software is comparable in complexity to the large bridges that exist. Of course there are massive software projects that dwarf any bridge ever built. But a lot of software is only moderately complex.
I don't know about infrastructure in the US. I don't live there. I'm happy with the infrastructure in western Europe, though. I wish that much care were put into the software I use every day.
I agree with other commenters. And I think another part of the problem is the consumer model of software. In a pen-and-paper system, if there was some reason why a record was special or different from the others, you could just attach a note to it, and the next person who picked up the record could read your note. Custom software systems deny the people using them that sort of ad-hoc flexibility. There's no way to do anything that wasn't planned for and programmed in. So office workers who use workflows managed by custom software are actively disempowered from authorship over their own workflows.
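A toy sketch of the contrast, with entirely hypothetical names (no real system in the thread is being described): a rigid schema only has the fields the programmer anticipated, while a paper-like record carries a free-form notes list anyone can append to, the way a clerk staples a note to a file.

```python
from dataclasses import dataclass, field

@dataclass
class RigidRecord:
    # A fixed schema: only the fields the programmer planned for exist.
    name: str
    account_id: int

@dataclass
class FlexibleRecord:
    # Same fields, plus a free-form notes list that any user can append to,
    # restoring the "attach a note to the record" flexibility of paper.
    name: str
    account_id: int
    notes: list = field(default_factory=list)

record = FlexibleRecord(name="A. Jones", account_id=1042)
record.notes.append("Customer disputes the address on file; see folder 7")

# The next person to open the record sees the note:
for note in record.notes:
    print(note)
```

The point is not the data structure itself but the design choice: a single unstructured escape hatch gives users back some authorship without the developer having to anticipate every special case.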
That’s one of the reasons I think Excel (and tools like Notion) are so popular - the people on the ground can learn to express themselves in the full context of the tools. I think this sort of software is far more important than we give it credit for. (It’s an invisible problem to us, because we can change the software.)