We have superpowers we don't usually properly understand until the shit hits the fan and something goes horribly wrong.
Software isn't engineering unless you're in aerospace; most software is so fragile that if you look under the hood it should properly scare you. Some code you look at and wonder how on earth it made it this long in production without breaking or anybody stumbling into it.
So we should take much more care in how software systems are developed, to fix this lack of robustness and predictability in the face of unexpected inputs.
But that's not how we deal with software at all. The big trick is to read a couple of tutorials and one of those learn 'x' in 24 days books and you're off to the races; the money is good, and never mind the ethics or the basic principles of good design.
So we get a very large amount of crap code (more so on the web than elsewhere, though every segment has its horrors: embedded, banking, telco), and the reviewers typically either don't have time or don't care as long as it ticks the feature boxes. Or they downright don't understand what they are taking delivery of themselves.
Most software is ugly, most of it works (but only barely), and very little of it is actually understood completely. If we want to change that we're going to have to SLOW DOWN considerably, but that would leave the field wide open to the competition that doesn't give a damn. So you're damned if you do and you're damned if you don't.
If anybody has a real solution to the economic problem then I'm all ears, but until we make it expensive to produce junk I see no way out of this. See also: ethics come at a price.
That's all very true, but I'm not sure that's what the author is talking about.
Most services, including Hacker News, won't let you delete accounts. They could, but it would require more effort during implementation. It's easier for us engineers to say "accounts can never be deleted" and base the whole system on that assumption. But that decision takes control away from customers.
Here's another one: someone coded that backdoor we found in all those Linksys routers a few years ago. A manager somewhere, maybe at the NSA, maybe at Linksys, said to a developer, "I want you to write a backdoor into all these production routers so they'll expose everything if you send a certain request over HTTP. We want that request to be the same for every router," and the developer did it.
I think software engineering ethics with regard to how we implement things is in an ok place. It's improving, and that's an ok place to be.
Software engineering ethics with regard to what we implement, on the other hand, has never been worse.
>>Software isn't engineering unless you're in aerospace; most software is so fragile that if you look under the hood it should properly scare you. Some code you look at and wonder how on earth it made it this long in production without breaking or anybody stumbling into it.
I'm really tired of these "software isn't engineering because..." arguments. I think the first thing we need to do in order to have our profession move on to the next stage of maturity is to stop relying on arbitrary definitions and lines in the sand regarding what engineering is.
Let's look at the Wikipedia definition:
"Engineering (from Latin ingenium, meaning "cleverness" and ingeniare, meaning "to contrive, devise") is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes."
That's it. By this definition, software development is engineering. Note that it doesn't say anything about how robust the structures, machines, devices, systems, materials and processes must be in order for the profession to count as engineering. Even the most fragile thing could have been designed and built by an engineer.
The root of the word is actually interesting: it comes from the Latin word "ingenium", which means "cleverness." This is especially important in our field since a large part of software development involves clever hacks. Think about that next time you frown upon such a hack.
> I'm really tired of these "software isn't engineering because..." arguments. I think the first thing we need to do in order to have our profession move on to the next stage of maturity is to stop relying on arbitrary definitions and lines in the sand regarding what engineering is.
The inferiority complex with regard to "engineering"[1] seems to run deep in many programmers' minds.
[1] Which is a very, very broad term to begin with in this day and age, anyway.
I think for most decent engineers, if you have a small group of good ones, no matter the time pressures they'll deliver you something half-decently architected -- most likely with understood compromises. Where things start to take a turn for the worse is when more people pile onto the codebase without the context to know what the trade-offs were or where the load-bearing paint is: they get in, make changes, and get out. Now nothing makes sense.
In order to safely make changes to any one part of a codebase, an engineer needs to understand how it all fits together and have all that context in their head to ensure changes don't have unintended consequences.
There are design patterns that make this easier and less error prone, of course, but there's no enforcement at compile time, and with people coming and going it all goes out the window.
I think we can solve a lot of these problems with much, much stricter and smarter compilers. Eventually, I hope we'll all be writing software for which the compiler can tell you authoritatively that you've written the thing you set out to write and that there are no bugs. As software complexity grows, no one or two people will be able to keep a whole understanding of the system in their heads (already true most of the time). It follows for me that the only way to confidently make changes to a system you don't understand is to automate that understanding.
That's why I love what Rust stands for. It's a first step, to be sure. However, it makes the assertion that the only way to allow people to reliably develop software they don't understand is to have the compiler 'understand' it for them (I use 'understand' loosely, of course).
Before Rust, it could be said that Haskell had the same philosophy of ensuring almost everything works at compile time.
Anyway, I hope for a future where programmers will be able to specify all contracts, invariants, etc. in code, and have them be checked upon compilation. And not only for "smart" programmers, but for the pretty bad ones, too.
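To make that concrete, here's a minimal Rust sketch of the closest thing we can do today: push an invariant behind a validating constructor, so every function that accepts the type gets the guarantee for free. The names (Percentage, apply_discount) are invented for illustration; this isn't full compile-time contract checking, just the type system carrying a once-checked fact around.

    // A validating constructor: the only way to obtain a Percentage is to go
    // through `new`, so the 0..=100 invariant holds everywhere afterwards.
    // (Names are illustrative, not from any real API.)
    #[derive(Debug, Clone, Copy)]
    pub struct Percentage(u8);

    impl Percentage {
        pub fn new(value: u8) -> Result<Self, String> {
            if value <= 100 {
                Ok(Percentage(value))
            } else {
                Err(format!("{} is not a valid percentage", value))
            }
        }

        pub fn value(self) -> u8 {
            self.0
        }
    }

    fn apply_discount(price: u32, discount: Percentage) -> u32 {
        // No runtime check needed here: the type guarantees 0..=100.
        price - price * u32::from(discount.value()) / 100
    }

    fn main() {
        let d = Percentage::new(20).expect("valid percentage");
        println!("{}", apply_discount(1000, d)); // 800
        println!("{:?}", Percentage::new(150));  // Err("150 is not a valid percentage")
    }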
>I hope for a future where programmers will be able to specify all contracts, invariants, etc. in code
Is this even theoretically possible? We simulate this with tests and code-coverage analysis, but in the end all programs deal with unknown inputs, by definition.
There are certainly limits. Compile-time type safety can only guarantee the absence of certain errors or the presence of certain behaviours. Any question that can be reduced to the halting problem cannot be automatically verified. However, there's lots of interesting and useful stuff that isn't reducible to that problem, such as "is this input really an integer?" (cough cough, dynamic languages)
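A tiny Rust sketch of the integer example (names invented): the "is this really an integer?" question gets answered exactly once at the input boundary, and the type system guarantees the answer everywhere downstream.

    // The untrusted string is validated once; past that point, `double` cannot
    // receive anything that isn't an i32, so no defensive checks are needed.
    fn double(n: i32) -> i32 {
        n * 2
    }

    fn main() {
        let raw = "42"; // imagine this came from a form field or a config file
        match raw.parse::<i32>() {
            Ok(n) => println!("{}", double(n)),
            Err(e) => eprintln!("not an integer: {}", e),
        }
    }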
This is a really old idea that turned out to be harder than it looked, and to require more computational power than programmers have generally had available. It has also required that this power be available for long enough to build up a head of steam, as people built up libraries that could be used to do real work and runtimes that were strong in practice as well as in theory.
Past failures can not always predict future failures properly in a Moore's Law regime. For instance, as everyone knows, tablet computing is a totally stupid and repeatedly failed idea, except, iPad. Computer vision is a complete waste of time, unless you have GPUs sitting around that can chew through billions of operations for cheap. Etc.
This "old idea" is getting somewhere now. It's only early days.
> However, it makes the assertion that the only way to allow people to reliably develop software they don't understand is to have the compiler 'understand' it for them (I use 'understand' loosely, of course).
More like have the compiler make sure (and let it be able to make sure) that you didn't screw anything up. You still have to understand ownership and lifetimes, so I don't get what 'understanding' you're offloading.
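For anyone who hasn't used Rust, here's a minimal sketch of what "the compiler makes sure you didn't screw anything up" looks like in practice: the borrow checker rejects a use-after-move at compile time (function and variable names are made up for illustration).

    // The borrow checker rejects a use-after-move before the program can run.
    fn consume(v: Vec<i32>) -> i32 {
        v.into_iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3];
        let total = consume(data); // ownership of `data` moves into `consume`
        println!("{}", total);
        // Uncommenting the next line fails to compile:
        // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`
    }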
What I meant is that I see increased compiler validation as the future of software development -- not that Rust is there, just that it's a step in the right direction. I'd love a world where I can safely make changes to trunk features and the compiler would validate that there are no unintended consequences or side effects.
I think this is important because it addresses both the need to move quickly and the desire for responsibility.
These are the reasons I advise using Haskell for startups. If you make everything (or much of it) validated at compile time, you can move very quickly.
There's also the feeling of the code writing itself. In some places you can literally put in different functions until it typechecks, which is great for exploring a problem domain.
In the future I hope to be able to recommend Idris as well.
So, the problem with this stuff is that mere additional compiler passes and better type systems are not how we get "responsibility". That's just a red herring.
It's as though we said "Yes, the next generation of calipers will add an additional factor of ten in precision to our machinist's work and so increase our responsibility!"...and then those machinists go on to build gas chambers.
We cannot confuse the quality of the tools with the broader notion of ethics and responsibility, even if it's the only thing we have actual control over.
"Software isn't engineering unless you're in aerospace" ... I don't agree with this, what about real-time communication systems (ex: 911) ? Banking/financial transaction systems?
I don't think you can flag a discipline as engineering by its industry, but rather by the way it is practiced.
Re: 911 - I've worked in 9-1-1 for VoIP off and on for years, and founded the first VoIP-focused 911 provider. There's a massive amount of idiocy in telecom, VoIP, and 911. One company I know of turned off the alarm system that monitored the database replication feeding the 9-1-1 system. The replication link died and no one noticed... for a whole month. And even then they only noticed because of billing discrepancies as calls were routed to the wrong place. "Oops". Other proposals I've seen for 911 have put user safety way down the list, to save money.
And recently, Intrado, the big 911 company, had their system go down due to a hard-coded limit on the number of calls that could ever run through it. It took down 911 for many people for nearly a day.
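For readers who haven't hit this bug class, here's a hypothetical Rust sketch of what a hard-coded cap like that looks like; the names and the limit are invented for illustration, not Intrado's actual code.

    // Hypothetical sketch of the bug class: a dispatcher hands out ticket
    // numbers from a counter with a hard-coded ceiling, so once the counter
    // crosses the cap every new call is rejected even though nothing is
    // actually overloaded.
    const MAX_TICKETS: u64 = 40_000_000; // arbitrary cap chosen at design time

    struct Dispatcher {
        next_ticket: u64,
    }

    impl Dispatcher {
        fn route_call(&mut self) -> Result<u64, &'static str> {
            if self.next_ticket >= MAX_TICKETS {
                return Err("no ticket available");
            }
            let ticket = self.next_ticket;
            self.next_ticket += 1;
            Ok(ticket)
        }
    }

    fn main() {
        let mut d = Dispatcher { next_ticket: MAX_TICKETS - 1 };
        println!("{:?}", d.route_call()); // Ok(39999999)
        println!("{:?}", d.route_call()); // Err("no ticket available")
    }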
And not to mention how they handled address updates - in short, users would be sent to the wrong state, knowingly, until "data validation" workers manually fixed things. And they thought this was swell. (Bottom line, don't trust 9-1-1 over VoIP until you've verified it.)
Even across the US, just the format of some 911 operator consoles' data exchange isn't consistent. One PSAP was complaining that their system would show invalid data for certain calls containing lat/long, just because they were flagged as VoIP. Some dumb vendor made a poor assumption and wrote a fail-closed system that affected real people. Oops.
And next gen 911? It's been a while since I've seen the working group plans, but they basically wanted to connect every PSAP (locally underfunded answering points, about 7000 in the US) to the Internet. I don't think anyone knows how to secure such a thing, let alone when each node is autonomous. I'll let you think about how awesomely that can fail.
(All this said, the people answering the phones do an incredibly hard job for little compensation. I still almost start crying just remembering a call I audited when they weren't connected right -- company forgot to provision some payphones, panicked woman was literally dying and didn't know where she was (call disconnected with no definitive resolution, but I believe she saw a passing patrol and was able to get assistance). I've no idea how responders can handle that kind of load every day... OTOH the majority of calls are just idiots calling for non emergencies.)
The real telco engineering was possible inside a very closed and end-to-end operated system. Even then, we saw fun stuff like in-band signaling allowing random end nodes to take over the system.
I'd be shocked if banking was a whole ton better - everything indicates they're just as incompetent.
So, the world of software spans from proofs-of-concept to libraries to full-fledged services consumed by others to games to dedicated business applications. As such, any complaints about "ethics" really require additional context before we can make useful headway.
All of the aforementioned fields have different requirements, and honestly many of them don't need ethics. In computer games, for example, I don't think that "ethics" beyond a mere "okay, well, it sorta passes QA" is necessary. In fact, in many cases, the rate of release of software and low barriers to entry have basically obviated the need for a sort of professional ethics to protect the consumer: any software that is grossly unreliable tends to be replaced in short order in any space (say, the social web) by a competitor. It's in the aloof silos of, say, aerospace or healthcare IT where the true garbage is fermented, because they have nobody breathing down their necks as long as the paperwork looks right.
Also, I think that we don't get and don't take enough credit in our profession for how deeply we influence businesses. Done correctly, a team of programmers should be able to automate away everyone at the company who isn't directly interfacing with customers--themselves included. What does ethics have to say about such a situation, especially when the people making requests don't understand how the business itself is implemented?
In some freemium games I'd say that the scaling of difficulty can be an ethics question. What if a free game promises free gameplay but actually requires hundreds of dollars of in-app purchases to experience the core gameplay? There should at least be a gambling-industry level of ethics.
Sure, it's not a big deal if the free player just loses some time trying the game out before dropping it, but what about the player who put in $20 thinking it would get them X, only to find it actually got them 10% of the way to X?