Hacker News
The responsibility we have as software engineers (benlog.com)
193 points by ingve on May 24, 2015 | 130 comments



We have superpowers we don't usually properly understand until the shit hits the fan and something goes horribly wrong.

Software isn't engineering unless you're in aerospace; most software is so fragile that if you look under the hood it should properly scare you. Some code you look at and you wonder how on earth it made it this long in production without breaking or anybody stumbling into it.

So we should take much more care with how software systems are developed, to escape this lack of robustness and predictability in the presence of unexpected inputs.

But that's not how we deal with software at all. The big trick is to read a couple of tutorials and one of those learn 'x' in 24 days books and you're off to the races; the money is good, and never mind the ethics or the basic principles of good design.

So we get a very large amount of crap code (more so on the web than elsewhere, though every segment has its horrors (embedded, banking, telco)), and the reviewers typically either don't have time or don't care as long as it checks off the feature-set boxes. Or they downright don't understand what they are taking delivery of themselves.

Most software is ugly, most of it works (but barely so) and very little of it is actually understood completely. If we want to change that we're going to have to SLOW DOWN considerably, but that would leave the field wide open to the competition that doesn't give a damn. So you're damned if you do and you're damned if you don't.

If anybody has a real solution to the economic problem then I'm all ears but until we make it expensive to produce junk I see no way out of this. See also: ethics come at a price.


That's all very true, but I'm not sure that's what the author is talking about.

Most services, including hacker news, won't let you delete accounts. They could, but it would require more effort during implementation. It's easier for us engineers to say "accounts can never be deleted", and base the whole system on that. But that decision takes control from customers.

Here's another one: Someone coded that backdoor we found in all linksys routers a few years ago. A manager somewhere, maybe at the NSA, maybe at linksys, said to a developer "I want you to write a backdoor into all these production routers so they'll expose everything if you send a certain request over HTTP. We want that request to be the same for every router.", and the developer did it.

I think software engineering ethics with regard to how we implement things is in an ok place. It's improving, and that's an ok place to be.

Software engineering ethics with regard to what we implement, on the other hand, has never been worse.


>>Software isn't engineering unless you're in aerospace; most software is so fragile that if you look under the hood it should properly scare you. Some code you look at and you wonder how on earth it made it this long in production without breaking or anybody stumbling into it.

I'm really tired of these "software isn't engineering because..." arguments. I think the first thing we need to do in order to have our profession move on to the next stage of maturity is to stop relying on arbitrary definitions and lines in the sand regarding what engineering is.

Let's look at the Wikipedia definition:

"Engineering (from Latin ingenium, meaning "cleverness" and ingeniare, meaning "to contrive, devise") is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes."

That's it. By this definition, software development is engineering. Note that it doesn't say anything about how robust the structures, machines, devices, systems, materials and processes must be in order for the profession to count as engineering. Even the most fragile thing could have been designed and built by an engineer.

The root of the word is actually interesting: it comes from the Latin word "ingenium", which means "cleverness." This is especially important in our field since a large part of software development involves clever hacks. Think about that next time you frown upon such a hack.


> I'm really tired of these "software isn't engineering because..." arguments. I think the first thing we need to do in order to have our profession move on to the next stage of maturity is to stop relying on arbitrary definitions and lines in the sand regarding what engineering is.

The inferiority complex with regards to "engineering"[1] seems to run deep in many programmers' minds.

[1] Which is a very, very broad term to begin with in this day and age, anyway.


I think for most decent engineers, if you have a small group of good ones, no matter the time pressures they'll deliver you something half decently architected -- most likely with understood compromises. Where things start to take a turn for the worse is when more people pile onto the codebase without the context to know what the trade-offs were or where the load-bearing paint is, get in, make changes and get out. Now nothing makes sense.

In order to safely make changes to any one part of a codebase, an engineer needs to understand how it all fits together and have all that context in their head to ensure changes don't have unintended consequences.

There are design patterns that make this easier and less error prone, of course, but there's no enforcement at compile time, and with people coming and going it all goes out the window.

I think we can solve a lot of these problems with much, much stricter and smarter compilers. Eventually, I hope we'll all be writing software for which the compiler will be able to tell you authoritatively that you've written the thing you set out to write and there are no bugs. As software complexity grows, no one or two people will be able to keep a whole understanding of the system in their heads (already true most of the time). It follows for me that the only way to confidently make changes to a system you don't understand is to automate that understanding.

That's why I love what Rust stands for. It's a first step to be sure. However, it makes the assertion that the only way to allow people to reliably develop software they don't understand is to have the compiler 'understand' it for them (I use understand loosely of course).
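
To make that concrete, here is a rough Rust sketch (hypothetical names, not from any real codebase) of one small thing a compiler can 'understand' for you: once a value has been validated at the boundary, the type system guarantees that nothing unvalidated ever reaches the code that depends on it.

    // The only way to obtain a Username is through the check in `parse`.
    struct Username(String);

    impl Username {
        fn parse(raw: &str) -> Result<Username, String> {
            let trimmed = raw.trim();
            if trimmed.is_empty() {
                return Err(String::from("username must not be empty"));
            }
            Ok(Username(trimmed.to_lowercase()))
        }
    }

    // Anything that takes a &Username is guaranteed by the compiler never
    // to see an unvalidated string.
    fn greet(user: &Username) -> String {
        format!("hello, {}", user.0)
    }

    fn main() {
        match Username::parse("  Ada ") {
            Ok(user) => println!("{}", greet(&user)),
            Err(reason) => eprintln!("rejected: {}", reason),
        }
    }

The particular example doesn't matter; the point is that a newcomer to the codebase no longer has to remember the invariant, because the compiler refuses to build anything that violates it.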


Before Rust, it could be said that Haskell has the same philosophy of ensuring almost everything works at compile-time.

Anyway, I hope for a future where programmers will be able to specify all contracts, invariants, etc. in code, and have them be checked upon compilation. And not only for "smart" programmers, but the pretty bad ones, too.


>I hope for a future where programmers will be able to specify all contracts, invariants, etc. in code

Is this even theoretically possible? We simulate this with tests and code-coverage analysis, but in the end all programs deal with unknown inputs, by definition.


There are certainly limits. Compile-time type safety can only guarantee the absence of certain errors or the presence of certain behaviours. Any question to which the halting problem can be reduced cannot be automatically verified in general. However, there's lots of interesting and useful stuff that is not reducible to this problem, such as "is this input really an integer?" (cough cough, dynamic languages)
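
As a minimal sketch of that integer example (assuming nothing beyond Rust's standard library), a statically typed language makes the "might not actually be an integer" case impossible to ignore: the parse returns a Result, and an i64 only exists once the failure path has been handled.

    fn double(n: i64) -> i64 {
        n * 2
    }

    fn main() {
        let input = "41x"; // pretend this arrived from a form field or the network
        match input.trim().parse::<i64>() {
            Ok(n) => println!("doubled: {}", double(n)),
            Err(err) => eprintln!("not an integer: {}", err),
        }
        // double(input); // would not compile: expected `i64`, found `&str`
    }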


This is a really old idea that does not increase productivity.


This is a really old idea that turned out to be harder than it looked, and to require more computational power than has generally been available to programmers. It has also required that this power be available for long enough to build up a head of steam, as people built up libraries that could be used to do real work and built runtimes that were strong in practice as well as in theory.

Past failures can not always predict future failures properly in a Moore's Law regime. For instance, as everyone knows, tablet computing is a totally stupid and repeatedly failed idea, except, iPad. Computer vision is a complete waste of time, unless you have GPUs sitting around that can chew through billions of operations for cheap. Etc.

This "old idea" is getting somewhere now. It's only early days.


I don't think languages can be smarter than the people who use them. Maybe if we ran the tape again this time, Ada would win. But I doubt it.


> However, it makes the assertion that the only way to allow people to reliably develop software they don't understand is to have the compiler 'understand' it for them (I use understand loosely of course).

More like have the compiler make sure (and let it be able to make sure) that you didn't screw anything up. You still have to understand ownership and lifetimes, so I don't get what 'understanding' you're offloading.
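
For instance, here's a tiny, hypothetical Rust snippet showing the sort of mistake the compiler refuses outright; the commented-out line is exactly the kind of thing it catches at build time instead of letting it become a runtime bug.

    fn consume(s: String) -> usize {
        s.len()
    }

    fn main() {
        let name = String::from("ada");
        let n = consume(name);   // ownership of `name` moves into `consume`
        // println!("{}", name); // rejected by the compiler: use of a moved value
        println!("length: {}", n);
    }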


What I meant is that I see increased compiler validation as the future of software development -- not that Rust is there, just that it's a step in the right direction. I'd love a world where I can safely make changes to trunk features and the compiler would validate that there are no unintended consequences or side effects.

I think this is important because it addresses both the need to move quickly and the desire for responsibility.


These are the reasons I advise using Haskell for startups. If you make everything validated (or much of it) at compile time you can move very quickly.

There's also the feeling of the code writing itself. In some places you can literally put in different functions until it typechecks which is great for exploring a problem domain.

In the future I hope to be able to recommend Idris as well.


So, the problem with this stuff is that mere additional compiler passes and better type systems are not how we get "responsibility". That's just a red-herring.

It's as though we said "Yes, the next generation of calipers will add an additional factor of ten in precision to our machinist's work and so increase our responsibility!"...and then those machinists go on to build gas chambers.

We cannot confuse the quality of the tools with the broader notion of ethics and responsibility, even if it's the only thing we have actual control over.


"Software isn't engineering unless you're in aerospace" ... I don't agree with this, what about real-time communication systems (ex: 911) ? Banking/financial transaction systems?

I don't think you can flag a discipline as engineering by its industry, but mostly by the way it is being practiced.


Re: 911 - I've worked in 9-1-1 for VoIP off and on for years, and founded the first VoIP focused 911 provider. There's a massive amount of idiocy in telecom, VoIP, and 911. One company I know of turned off their alarming system that monitored the online database to the 9-1-1 system. The replication link died and no one noticed... for a whole month. And then they only noticed due to billing discrepancies as calls were routed to the wrong place. "Oops". Other proposals I've seen for 911 have put user safety way down the list, to save money.

And recently, Intrado, the big 911 company, had their system go down due to a hard-coded limit on the number of calls that could ever run through it. Took down 911 for many people for nearly a day.

And not to mention how they handled address updates - in short, users would be sent to the wrong state, knowingly, until "data validation" workers manually fixed things. And they thought this was swell. (Bottom line, don't trust 9-1-1 over VoIP until you've verified it.)

Even across the US, just the format of some 911 operator consoles' data exchange isn't consistent. One PSAP was complaining that their system would show invalid data for certain calls containing lat/long, just because they were flagged as VoIP. Some dumb vendor made a poor assumption, and wrote a fail-close system that affected real people. Oops.

And next gen 911? It's been a while since I've seen the working group plans, but they basically wanted to connect every PSAP (locally underfunded answering points, about 7000 in the US) to the Internet. I don't think anyone knows how to secure such a thing, let alone when each node is autonomous. I'll let you think about how awesomely that can fail.

(All this said, the people answering the phones do an incredibly hard job for little compensation. I still almost start crying just remembering a call I audited when they weren't connected right -- company forgot to provision some payphones, panicked woman was literally dying and didn't know where she was (call disconnected with no definitive resolution, but I believe she saw a passing patrol and was able to get assistance). I've no idea how responders can handle that kind of load every day... OTOH the majority of calls are just idiots calling for non emergencies.)

The real telco engineering was possible inside a very closed and end-to-end operated system. Even then, we saw fun stuff like in-band signaling allowing random end nodes to take over the system.

I'd be shocked if banking was a whole ton better - everything indicates they're just as incompetent.


So, the world of software spans from proofs-of-concept to libraries to full-fledged services consumed by others to games to dedicated business applications. As such, any complaints about "ethics" really require additional context before we can make useful headway.

All of the aforementioned fields have different requirements, and honestly many of them don't need ethics. In computer games, for example, I don't think that "ethics" beyond a mere "okay, well, it sorta passes QA" is needed. In fact, in many cases, the rate of release of software and low barriers to entry have basically obviated the need for a sort of professional ethics to protect the consumer: any software that is grossly unreliable tends to be replaced in short order in any space (say, the social web) by a competitor. It's in the aloof silos of, say, aerospace or healthcare IT where the true garbage is fermented, because they have nobody breathing down their necks as long as the paperwork looks right.

Also, I think that we don't get and don't take enough credit in our profession for how deeply we influence businesses. Done correctly, a team of programmers should be able to automate away everyone at the company who isn't directly interfacing with customers--themselves included. What does ethics have to say about such a situation, especially when the people making requests don't understand how the business itself is implemented?


In some freemium games I'd say that the scaling of difficulty can be an ethics question. What if a free game promises free gameplay but actually requires hundreds of dollars of in-app-purchases to actually experience the core gameplay? There should at least be a gambling-industry level of ethics.

Sure, it's not a big deal if the free player just loses some time trying the game out before dropping it, but what about the player who put in $20 thinking it would get them X, but it turns out that actually only got them 10% of the way to X?


If you could look under the hood of a lot of aerospace software it would scare you too.


That's interesting, could you please expand on that ?


Plz don't say that ;_;


Well, it is distasteful but liability for downtime is pretty much the answer.


If you want to ship "x in 24 days" code, that's up to you.

Aerospace code isn't "engineered" better. It's simply done with an eye towards accountability.

So really - not slowing down enough to finish something is a competitive edge? I don't believe you. We are not talking about that much time here.


A Hippocratic-like oath by itself would be useless without a powerful professional organization watching out for our interests. If I take the moral high road and get fired by my MBA boss, I have no course of appeal. If an MBA were to fire a doctor for refusing to compromise on ethics, I'm sure the AMA would unleash hell on the MBA. Same for lawyers and ABA. Actually those two organizations make it difficult for MBAs to manage their members so they have two safeguards we lack. (I've seen this in the bank I work. The lawyers report to other lawyers all the way up to the general counsel, who reports to the CEO and the board. Such structure makes it easy to keep ethical conduct high priority without fear of retribution from the MBAs.)


In Ontario, Canada, we have an organization doing so: the PEO. Any professional that has the word "engineer" in his title, including software engineers, must pass a licensing exam, hold a valid engineering degree, and have 4 years of supervised work experience with references. It isn't as common for software guys as for, let's say, civil engineers.


Two corrections:

1) "Engineer" isn't protected; "professional engineer" is.

2) This applies to all of Canada. Each province has its own regulating organization.


But what happens in the above scenario? Does the PEO kick ass when necessary for its members? Does it, like the lawyers and doctors, ensure that its members are remunerated well?


It doesn't adjust the remuneration of its members (nor do the lawyers' organizations in Canada); it really is "market" based. As for doctors, the government has a bigger say since our healthcare is universal.

The PEO will take action against non-members using the title engineer, or take action against engineers not acting lawfully.


Right, but if an engineer refuses to do something unethical, and loses his job say, does the PEO have his back? Does it have teeth?


Or, you could just stick to your ethics even if it is inconvenient.


One can, and one should, but we as a society have a vested interest in providing incentives to make doing the right thing easier than doing the wrong one.

I've criticized the abrogation of responsibility elsewhere in this thread, I'm not giving anybody a pass or an excuse, but the reality of the cowardice of normal people necessitates that we provide some measure of cover for the people unwilling to risk their necks.


I agree with you both. "One can, and one should," and it’s even better if the profession provides a mechanism to remove from practice its own worst offenders. The AMA/ABA examples I gave above do not just offer protection to their members, but they also hold power to send their unethical practitioners to the poorhouse. (If you’re a junior physician with lots of student loans but lose your license due to an ethics violation, I don’t know what you’ll do. Loans don’t go away in a bankruptcy.) So those professions come with a terrible downside for flagrant violators. We don’t have any such downside in software, which may explain some of the moral depravity mentioned.


>In 2008, the world turned against bankers, because many profited by exploiting their expertise in a rapidly accelerating field (financial instruments) over others’ ignorance of even basic concepts (adjustable-rate mortgages). How long before we software engineers find our profession in a similar position?

How long until doctors find themselves in a similar position? Medical bills are, after all, the number one cause of bankruptcy in the US, and US doctors are the highest paid on Earth.

Meanwhile, software gets ever more cheap and plentiful, and programmers literally give away much of the fruits of their labor for free under permissive licenses. Yet it's supposedly programmers that are in need of reflection and humbling.

Programmers are effectively being punished--by the anti-tech worker protests in the Bay Area, a hostile media (Gawker, Mother Jones, to name a few), and by the incessant push for more cheap, indentured labor in the form of H-1Bs--all because we have not yet organized ourselves into a protectionist racket to gouge the citizenry in the same way doctors have.

Unlike doctors, we have little prestige, yet we are still able to earn high-ish salaries, and that makes us an easy target for resentment: "How dare these 'coders' earn $100k when a lot of them don't even have degrees! I have an MA in Journalism, and yet I struggle to pay my bills!"

The backlash the author predicts is already underway, but not for the reasons he cites.


> How long until doctors find themselves in a similar position? Medical bills are, after all, the number one cause of bankruptcy in the US, and US doctors are the highest paid on Earth.

Blame medical insurance companies. Physicians must charge 10x what something actually costs because insurance companies will only pay 1/10 of the bill. This insurance company bullshit works all fine and dandy, until it lands on a poor uninsured bloke.

Ask yourself: "why the hell should a medical insurance company be for-profit?" Why should the top 5 medical insurance companies make _billions_ of dollars in _profit_ _every_ _month_ ?

http://www.forbes.com/sites/peterubel/2014/02/12/is-the-prof...


Responses like this demonstrate just how well doctors have, with their high-prestige, managed to near-completely insulate themselves from just criticism.

"A clearer way to think about this is profits -- and insurers aren’t where the big profits in the health-care system go. In 2009, Forbes ranked health insurance as the 35th most profitable industry, with an anemic 2.2 percent return on revenue. To understand why the U.S. health-care system is so expensive, you need to travel higher up the Forbes list. The pharmaceutical industry was in third place, with a 19.9 percent return, and the medical products and equipment industry was right behind it, with a 16.3 percent return. Meanwhile, doctors are more likely than members of any other profession to have incomes in the top 1 percent."

http://www.washingtonpost.com/blogs/wonkblog/wp/2014/01/13/w...


Don't the physicians' associations artificially limit the number of students/graduates in a specific field? I had dinner with one urologist and he was boasting how great it was that there were no other urologists around and about his new house etc. I asked him how that surely wouldn't be corrected shortly by another doc serving the same area. He found it to be rather humorous, and explained there was very little new competition, even as many of his current colleagues were retiring.


> How long until doctors find themselves in a similar position? Medical bills are, after all, the number one cause of bankruptcy in the US, and US doctors are the highest paid on Earth.

Physician salaries are not the reason why average health care costs in the US are higher than the average in other developed countries—physician compensation is a very small part of the total overhead.


If, as software engineers, we all held ourselves to even minimal ethical standards, we wouldn't have Sony Rootkits and Superfish, etc. It requires self-discipline though, since there is no ethics board to answer to like other professions. It might be time for such an organization, since it's evident that there are plenty of unscrupulous programmers out there willing to implement this crap, and nobody is holding them accountable.


How would you spearhead such an organization?


The author gives 3 examples of confusion by the general public regarding how software works. We, as a society, should try to educate people more on these things. But this is mostly a matter for our schools, not our software engineers.

The author suggests we need a "Hippocratic oath" for software engineers, but provides no examples of unethical behavior. I'm not saying that there is none - there is plenty, but I see no reason to assume it is more than in other fields, like advertising or psychology or science or television or fitness. People need to be ethical no matter the field they work in. I see no reason to single out software engineering, except that the author and we happen to hold that profession.


"In 2008, the world turned against bankers, because many profited by exploiting their expertise in a rapidly accelerating field (financial instruments) over others’ ignorance of even basic concepts (adjustable-rate mortgages). How long before we software engineers find our profession in a similar position? "

Never. Software developers as a group are nothing like bankers as a group. Developers are far more ethical and have an incredibly low aptitude for opportunism (look how underpaid we are as a whole, not to mention half of us suffer from imposter syndrome. Ever meet a banker with imposter syndrome? Ha! ).

Being a predatory, unethical banker requires a certain kind of personality that would never find itself slaving away writing code for 12 hours a day (except to write software to assist in your task of being a predatory, unethical banker).


> Software developers as a group are nothing like bankers as a group. Developers are far more ethical and have an incredibly low aptitude for opportunism

Wow. That must be why half the posts on here are about security (which in effect is to stop those other ethical programmers from either owning our systems or intercepting our communications). And then there are all the scams, the rip-offs and the extortions.

Case in point, some groups of no doubt talented and extremely ethical programmers hold companies at ransom for bitcoin payments or they'll be DDOS'd into the ground.

No, we're doing just fine as a profession. I'm sure my bank has its warts but they don't hold a candle to some of the stuff I've seen in the programming world.


> Wow. That must be why half the posts on here are about security

really? so those discussions are all held amongst programmers eager to use these security vulnerabilities to illegally exploit systems?

Or are they entirely groups of programmers dedicating their careers to being very worried about the prospect that our systems might not be secure enough? Is that not an ethical position?

> Case in point, some groups of no doubt talented and extremely ethical programmers hold companies at ransom for bitcoin payments or they'll be DDOS'd into the ground.

We're talking about the prospect of an industry being held as globally unethical, not that people will become aware that software can be used to commit crime. Many crimes are committed with guns - is there an outrage against gun owners as a group, that they're all just criminals ?

> No, we're doing just fine as a profession. I'm sure my bank has its warts but they don't hold a candle to some of the stuff I've seen in the programming world.

I think you're grossly underestimating the scale of crime being perpetrated by banks even very recently. Start with worldwide currency manipulation: http://www.nytimes.com/2015/05/23/opinion/banks-as-felons-or...


> so those discussions are all held amongst programmers eager to use these security vulnerabilities to illegally exploit systems?

No, that's the other half of the equation. But you need both, people that are the 'bad guys' and people that are the 'good guys'.

> We're talking about the prospect of an industry being held as globally unethical, not that people will become aware that software can be used to commit crime.

Well, what if it's both? Software can be used to commit crimes, and the software profession is at least actively helping in many crimes where software needs to be written in order for the crime to be perpetrated.

> Many crimes are committed with guns - is there an outrage against gun owners as a group, that they're all just criminals ?

Depending on the country, yes, people tend to frown at gun owners in some places, and even though not all of them are criminals, some of them are. Whether the percentage of criminals among legal gun owners is larger than among people who do not own guns is something I don't have statistics on, but it would not surprise me, depending on where you polled.

> I think you're grossly underestimating the scale of crime being perpetrated by banks even very recently.

Yes, and they ALL needed software to do that, by themselves these bank dudes would be about as able as a general without an army.

> Start with worldwide currency manipulation:

Very bad stuff. So who got charged? Anybody go to jail yet? Still wondering why they feel that they can get away with it in the next round?

Society has checks and balances and those tend to fail if enough money is involved.

The point of having some rules of ethical conduct is that you try to take the money out of the equation and focus on the actual deed, and the responsibility flowing from that.


> really? so those discussions are all held amongst programmers eager to use these security vulnerabilities to illegally exploit systems?

Those discussions tend to occur on different forums. Their absence here is no sign of their absence elsewhere. And for many of us, our biggest issue with Aaron Swartz is not what he did, but how he was treated.

> Or are they entirely groups of programmers dedicating their careers to being very worried about the prospect that our systems might not be secure enough? Is that not an ethical position?

That's a nice, positive view of security researchers. For what it's worth, it's one I hold myself.

But we don't exactly hear a lot about those who skipped the whole "ethical disclosure" debate, or the question of whether their specific position is indeed ethical, and went straight to selling exploits to the NSA or criminal enterprises.

> I think you're grossly underestimating the scale of crime being perpetrated by banks even very recently. Start with worldwide currency manipulation: http://www.nytimes.com/2015/05/23/opinion/banks-as-felons-or....

Meanwhile, global botnets are common enough that we don't hear about most of them in the news - because they're not newsworthy - except for the rare occasions some security researchers manage to put a dent in one of the larger ones. And everyone gets pissed at Microsoft for mishandling no-ip.com's domains in an effort to fight malware. Not to mention the number of times I've heard of account details of services I use being compromised.

Meanwhile...

> More than 5% of people visiting Google sites have at least one ad injector installed. Within that group, half have at least two injectors installed and nearly one-third have at least four installed.
> Thirty-four percent of Chrome extensions injecting ads were classified as outright malware.

http://googleonlinesecurity.blogspot.com/2015/03/out-with-un...

The only reason Superfish was newsworthy wasn't from potentially injecting ads into your banking website... but from completely fucking up the security of your browser to the point that anyone could MitM anything.


We could then talk about the ethics of the engineers working for the NSA who are engaging in massive surveillance of the country and the world. They may not be opportunistic like bankers, but they definitely have moral failings grounded in their delusion that they are making the world a safer place.


it has to do with who is deciding the activities to take vs. who is implementing them. Everything a bank does is via software as well. Software is just the infrastructure. It is the leadership that takes the steps to do things that are unethical.

The NSA and perhaps Facebook have the special caveats that there are developers there who can see that what they are being told to do is unethical. So the large bunch of them that go along anyway, you can make an argument. But that's not the majority of developers and also re: the NSA, the most famous software person of them all is Ed Snowden and he's a symbol of opposition to what the NSA is doing.

People aren't blaming programmers for the crimes of the banks, the NSA, or Facebook. The programmers are fully the ones executing the tasks for all three, but they aren't in general the ones making the decisions; to the degree that there are programmers who are making those decisions, they are considered to be a "banker / NSA goon / Facebook executive who also codes".


You can't just say, "Oh, well, I'm not the _decider_, so my hands are clean". When you decide to put fingers to keyboard and go along with implementing something unethical, you become just as culpable.

And no, I don't feel sorry for the construction contractors who bought it when the Death Star blew up.


What about writing open source software which you strongly suspect will be used by the NSA?


That depends. The NSA almost certainly uses GPG or PGP, OpenSSL or something like that (with their own patches) - yet the world would be drastically worse off if these technologies didn't exist.


> Software developers as a group are nothing like bankers as a group. Developers are far more ethical

I guess it was bankers that hacked Target and stole all that credit card info, then.


Criminals exist everywhere. We're not talking about embezzling bankers and hacking developers. Consider the median banker vs the median developer.


This is a good piece, and it is always good to take to heart lessons of ethics and professional responsibility. Ultimately I don't think the situation from the practitioner side is much different from that of civil engineers, architects, doctors, etc. We're all trusted to do something beneficial with the arcane skills we have mastered, and at least in the first instance do no harm. I do think that software is unique in its ubiquity and penetration into every corner of life, and it may well be that there has never been a technology relied on by so many while being understood by so few. But the responsibility incurred when you build something that others will rely on is what it has always been.


As a physician and, prior to that, military officer, I have been struck by the fierce non-ethics of many software folks. Strawman, I know, but there it is.

I think part of the issue is that software engineering is abstracted from the immediate problem the person in the world is trying to solve, and one engineer may cross many industries in their career. In some sense, that abstract transience inhibits the feeling of duty that, say, a small town cardiologist feels toward his patients. My grandmother's doctor quit medicine after she died of a heart attack. He couldn't stand seeing all his friends die.

Conversely, there's an opportunity here to think about methods to pierce the abstraction. Perhaps that is the first order issue for software ethics: pierce the abstraction. Know the user's ethical problems. Know the customer's ethical problems (the therapist may be the user, but the hospital is the customer).

I hypothesize that the ethical issues are more obvious at the user's level. Hospital business offices, for instance, are famous for claiming the ethical high ground while ignoring the strain productivity requirements place on their staff. How much of software ethics in healthcare could be boiled down to "making healthcare easier, one less click at a time"?

That's where money and abstraction prevents the development of a coherent ethics: why develop ethics when I'm getting paid and will never get sued. No one is going to sue the software house for the hours of delay that accrete 1/2 second at a time. No one is going to sue the IT guy for the hours of delay that accrete on slow networks. No one is going to sue the developer that doesn't bother to anticipate that the PACS system will eventually be called upon to store digital pathology images also (at 10x the storage per image and 10x the number of images).

In fact, no one's going to sue the hospital for those things. And the hospital is the customer. So there's definitely no path from the nurse or doctor to the software house. Who cares about the intern wearing running shoes at 2 am trying to keep patients alive in 10 different units on 7 different floors in the 3 different wings of 2 hospitals? Nobody.

Similar issues could probably apply to roughnecks in the oil industry watching oil blow into the ocean because there's no button to override power on the valve, home buyers who are just trying to get their kids into a good school district, etc, etc.


"Who cares about the intern wearing running shoes at 2 am trying to keep patients alive in 10 different units on 7 different floors in the 3 different wings of 2 hospitals?"

It certainly isn't the AMA and the rest of the medical establishment, which do everything they can to keep the supply of doctors low and the salaries high.


The hospital will care when it gets sued.


Nah. They'll just raise their rates some more to cover the increased cost of malpractice insurance. And so the cycle continues...


Most software has no need to be held to such high standards. Very rarely is a bug going to actually hurt someone beyond some mild inconvenience. Where it matters, like in space shuttle code, it's done. This whole comment section is entirely overblown.


>As a physician and, prior to that, military officer, I have been struck by the fierce non-ethics of many software folks.

Yes! Compared to the noble physician, programmers like Linus Torvalds (Linux, Git) and Richard Stallman (Emacs, GCC), who freely give away the fruits of their labor even when it generates billions of dollars in value of which they reap only a tiny fraction, are nowhere near as ethical.

Physicians are so ethical, in fact, that medical bills are the number one cause of bankruptcy in the US!


As a regular donor to FSF, I don't disagree with you. But very fundamental ethics are shoved in front of us constantly by the nature of the work.

You can definitely make solid arguments for or against the ethics of medicine, particularly as a business, and you can definitely make solid arguments for or against the ethics of the military.

My point is more that it's quite difficult to make a lot of arguments at all about software engineering ethics. Find me a book on ethical network administration at Barnes and Noble.


Does it make sense that doctors have to charge 10x just to even-out the insurance companies only paying 1/10 the price? Nope. You should direct your anger at for-profit medical insurance companies, which make billions of dollars in profit every month.

To your ethical claim: can engineers go to prison for being unethical? No. Can physicians? Yes.


"But that’s not remotely true. The last time the OECD looked at this (PDF), they found that, adjusted for local purchasing power, America has the highest-paid general practitioners in the world. And our specialists make more than specialists in every other country except the Netherlands. What’s even more striking, as the Washington Post’s Sarah Kliff observed last week, these highly paid doctors don’t buy us more doctors’ visits. Canada has about 25 percent more doctors’ consultations per capita than we do, and the average rich country has 50 percent more. This doctor compensation gap is hardly the only issue in overpriced American health care—overpriced medical equipment, pharmaceuticals, prescription drugs, and administrative overhead are all problems—but it’s a huge deal.

Doctors aren’t as politically attractive a target as insurance companies, hospital administrators, or big pharma, but there’s no rational basis for leaving their interests unscathed when tackling unduly expensive medicine."

http://www.slate.com/articles/business/moneybox/2013/02/amer...


I am pretty sure an engineer can face some kind of serious charges if they were negligent in part of a system that ended up killing people, like a faulty airplane.


It would start with accepting liability for the products we create. And no software company to date has been willing to do that.


That would require one hell of a liability insurance premium and as such would cause software development to stagnate due to all the bullshit it would require.


As I said, it would slow things down tremendously. I realize that and it's a huge problem, we basically get to choose between crap with huge unspecified risks (and those risks compound as systems become more interconnected and larger) and technological stagnation. Neither are desirable. But maybe we're overlooking a possible solution.


My primary objection to most people calling themselves "software engineers" is that professional engineers are personally and professionally liable for their mistakes. It feels like software engineers want to shirk professional responsibility and ethics, yet gain some sort of ego boost by calling themselves an engineer.


A civil engineer can certainly do great harm, but what he does is not exactly magic: you have to calculate the strength of steel, make plans for building a building, etc. If he started to take into account earthquakes in New York I might question him.

If your field weren't software engineering, could you describe it? "They sit every day in front of the computer and write colored text." Might as well be magic, then.


"if he started to take into account earth quakes in New York I might question him."

Earthquakes do occasionally occur in NY:

http://earthquake.usgs.gov/earthquakes/states/new_york/histo...

Even if there's only one earthquake in NY every 50 years, you want to make sure that when one happens, skyscrapers don't collapse and the Indian Point nuclear plant doesn't dump radioactive waste into the Hudson.


We have a code of ethics: http://www.acm.org/about/se-code

It sounds like many people haven't read it?


ACM and its members have a code of ethics. Less than one percent of computer programmers are ACM members.


"Software Engineering Code of Ethics and Professional Practice (Version 5.2) as recommended by the ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices and jointly approved by the ACM and the IEEE-CS as the standard for teaching and practicing software engineering."

It certainly reads like it was written by a committee 15 years ago, but ACM and IEEE are the professional organizations for software, and I appreciate that this document exists.


I don't see the part where an ethical boundary was crossed. The examples given, such as when people realized that people working at Facebook write software to make users feel a certain way, don't show one.

The knowledge that made the experiment unethical came from the experiment itself. They did not otherwise know what their users would feel.

People found the story they wanted, but it doesn't change the facts.

When I write software, I write it so that people can use it, just as the farmer grows food so people can eat it, and the doctor practices medicine so that people can live with the benefits.

To me, just as to them, it is not my place to tell people what is right and what is wrong to do with what I make; nor do I have the facility.

Timidity and caution are good for managing rare and known dangers, but practicing them too widely will lead humanity to starvation and its natural death with the rest of the unambitious matter in the universe; you can stay back and incinerate if you'd like, but you're not bringing me with you.


"MBA Oath" for managers: http://en.wikipedia.org/wiki/MBA_Oath


The Order of the Engineer was established to address some of these issues. There is a similar Pledge given to graduates of Computer Science programs at various college/universities throughout the US via the Pledge of the Computing Professional (link at http://www.computing-professional.org).


Related: Mike Monteiro's talk, "How Designers Destroyed the World":

https://vimeo.com/68470326


This is one of the reasons why debt can be considered fundamentally immoral.

A fresh college student might well consider writing genocide apps if the alternative is a boot stamping on their face forever in the form of tuition debt.

You can't fix this form of 'corruption' completely, but ensuring that people don't end up in the banality of evil scenario in which they switch off their brains 9-5 and do bad things in order to feed their families is extremely important.


"To a non-engineer, even an incredibly smart person, this is absolutely non-obvious."

I had people at work who just assumed that we IT people can track everything they've done and read their emails anytime we wanted. I was a bit taken aback that they just blindly trusted like that, and explained that we have policies and don't do anything like that.


> For most engineers, including a number of very good and ethical people at Facebook, it’s surprising that this is even an issue.

Wow. That really hit home. I work for a large enough company that's constantly running experimentation, usually as A/B testing (not Facebook). It honestly never even dawned on me to consider how borderline ethical that is.


Why is experimentation unethical?

Deliberately infecting someone with syphilis is clearly unethical, regardless of whether or not there is a control group. However, if showing them different sorts of social media posts is ethical when done randomly, why is it unethical when done deliberately?


> ethical when done randomly

I would argue it may not be, but this would depend on the exact situation. Did the user ask for random posts? Or is the selection of posts being imposed on them by a service? Do they even know that they are being shown a certain subset of posts and how that subset is chosen?

Any time there is misrepresentation or dishonesty about what is really going on - including lies of omission - there could easily be ethical issues.

> Why is experimentation unethical?

Yes, experimenting on people - even things like Facebook's experiment - is shockingly unethical in all cases if it is done without an external watchdog such as an IRB[1]. Usually, it will also require the explicit informed consent of the people involved, among other requirements (e.g. an opportunity to remove themselves from the experiment at any time, even after the experiment is over). In some cases, if the experiment is not risky, some or all of those additional requirements could be adjusted or waived.

We require this (by law, under the Common Rule, if federal money is involved) because human experimentation has a long, disturbing history of unethical behavior. You probably believe that an experiment involving blog posts is nothing like the problems we've had in the past... and I would agree. Facebook's experiment seems incredibly low risk when compared to the experiments performed by a pharmaceutical company or research hospital.

So why the outrage at Facebook? While there is some concern over their failure to properly debrief those involved (and other problems relating to informed consent), most of the outrage is about their lack of proper IRB approval. Running an experiment may be ethical. The point is that YOU, THE EXPERIMENTER, do not get to make that decision on your own! The point isn't that these experiments were unethical in some way.

The entire point of an IRB is to review the experimental methodology to verify that it meets all ethical requirements. This should be easy for most experiments conducted by a social media business or other software/internet business. All Silicon Valley has to do is set up its own IRB (they could probably partner for a while with a university IRB to gain legitimacy faster).


Because you don't really know what the longterm effect on the subjects is going to be, especially w.r.t behavioral studies. The human psyche is still in many ways very much a black box, and there are plenty of questionable behavioral studies where the consequences of the study greatly outweighed the benefits.

As far as deliberate vs. random goes, it's an important distinction because intent and consent are arguably paramount when dealing with questions of morality and ethics - as reflected by our legal system, where intent and consent are often a source of debate. In this case, experimentation is clearly done with intent and often without explicit consent (as opposed to the implied consent given by the Terms of Use).

It's why giving someone HIV (or syphilis) unknowingly isn't illegal or really all that immoral (because the infector is unaware), but it is illegal to infect people intentionally.


You don't know the long term effects of random social media posts either. Before doing some sort of a controlled study you can't know.

Recently I went to a conference and acted like a sales guy. I gave different (truthful) pitches to different people and observed how enthusiastic they seemed about the product afterwards. Was that also unethical? Would it become unethical if I did a hypothesis test afterwards, rather than merely going with my gut?


> You don't know the long term effects of random social media posts either.

Right, hence the whole rest of my message concerning intent, consent, and their importance with regards to morality and ethics.

> Was that also unethical?

Maybe? I wasn't there, I can't tell you. There are plenty of ways to act VERY unethically when it comes to sales and advertising, even while being truthful (e.g. you could omit some very important information). Deceptive advertising is a whole class of illegal actions in many developed nations, and not all of those actions involve the strictest definition of lying.

Not sure what hypothesis testing has to do with anything, experimentation certainly doesn't need to go hand-in-hand with statistical analysis.

I almost added a bit about taking the whole thing to its logical conclusion that the entire field of advertising and sales is arguably unethical. I haven't really thought that entirely through so I omitted it, but it's food for thought nonetheless.


Because this (arguably) medical experimentation was done without informed consent.


Depends - Facebook did it to manipulate their feelings; if you change the color on a button to see which one gets the most clicks, I don't think that would be considered unethical.


Still, there's no oversight for either type of experiment (Facebook's or the color change). Where the border gets drawn between ethical and unethical is murky even WITH oversight, let alone without it.


If that is what bothers you, then we could form a group of A/B test reviewers to which companies could submit their tests - it would still be voluntary, but it would be easier to say after the fact that it had been approved by an ethics board.


This really sunk in for me when I learned we were able to create robots that always win at rock, paper, scissors. A regular person wouldn't naturally expect this.

https://www.google.com/search?q=robot+always+wins+rock+paper...


Every line of code could be rewritten 1000 times. Every engineer's code could be shared and reviewed. If you ask me to write code that detects your zip code based off the IP, who am I to argue with the man paying my bill? Yes, I could change jobs, but they'd at some point notice my turbulent job history. Then the individuals that call the shots would pass up my resume because that other engineer was more 'steady.'

All I want to do is study mathematics because that's the only real control I have. Not what I do at work, what I do after work. I'm not going to waste my time educating business unless they pay me to. That's what a job is.

Businesses justify themselves because it's all about getting one over on your competitor. Don't point the finger at us.


> Who am I to argue with the man paying my bill?

A human being, as are we all. And that comes with moral responsibilities. You can't hide from the reality that what you do is transformative in ways potentially both very powerful and very damaging, and you can't hide from your responsibilities.

There are lots of things I'd rather do than write code for someone else. But I do not abrogate my duties as a thinking, feeling human being because I don't care about what they pay me for. Do thou fucking likewise, get me?


In the end it is the money that decides. If your manager says that you have to skip tests or cut corners to achieve the end result within the budget, then even if you are the greatest software engineer you are stuck doing it the way the money says. This is true for a small company with a low budget and for huge top-500 companies that must deliver to the market in time. I never worked in a company that does not try to cut quality to get a fast and cheap product. If you try to go against that, they just pick someone else, you do not get a bonus while your teammate does, or worse, you will be out of the loop and finally out of a job.


It appears good on the surface but it's way out of touch with reality. It's actually insulting, too, as it shifts blame from those controlling the situation to those executing their will within their constraints. Reality: there have been repeated attempts to do what's truly good for customers, from business models to language designs to superior hardware to privacy-preserving technologies. Those companies almost all went bankrupt, got acquired as profit declined, or cancelled those product lines at huge losses ($1+ billion for Intel's BiiN and iAPX 432). The reason: the market almost always chooses against sacrificing features, time to market, or profit for quality and security. Making it more expensive, slower, less flexible, or less backward compatible for the users' own good will get you shunned by consumers or fired by management. Lack of demand is so strong and pervasive, I left the high assurance industry except for private R&D and contracts. I just publish stuff online for free now to help the few that care.

On the other end, the users indeed don't understand the tech or trust boundaries. What the author missed is THEY... DO... NOT... TRY. Very important. We as a society treat certain things as responsible behavior, mandatory education, or mandatory evaluation. Understanding the tech's properties and risks? Not so much. Further, users as a whole are in favor of buggy systems/services that are cheap, free, or used by many friends. They're also happy to give away personal information and control for almost nothing in return (eg Farmville). And they outcast, demean, and underpay the people that build all this for them. Software engineer's habits and existing products/services are a result of their environment.

So, I've always seen the exact opposite: people as a whole and the marketplace need to act responsibly about technology. That will take a learning experience that combines (a) their efforts to learn along with weighing tradeoffs and (b) our best efforts to convert the technical things into layperson level. Metaphors will help as I illustrated [1] for Facebook's risk. Further, people need to understand that they get what they pay for, that their decisions create long-term effects on them, and that they might need to invest in ethical/private/quality alternatives to existing services. Fortunately, there's always a niche of those customers to serve for developers that care. Hard to get a job, though, since there's so few customers and so little money in it.

Doing the right thing professionally and in mainstream IT is an approach for those seeking to draw unemployment. Certainly there are little things you can do within the constraints you are given. It's just that you're effectively working in a straitjacket, trying to help people who are ordering you to harm them. Most of the time, anyway.

[1] https://www.schneier.com/blog/archives/2014/04/ephemeral_app...


The author means well but most grey area business decisions are not in the control of software engineers who have day jobs.

For example, everyone loathes the example of banks re-sequencing customers' deposits and checks such that it's more likely to overdraft and net the bank a lucrative penalty fee. The business exec or MBA who dreamed up that scheme isn't the one who coded it in COBOL or whatever; it's the software engineers who wrote that despised piece of code. But like 99% of the rest of us, software engineers have families to feed and they can't go on a moral crusade.

On the other hand, a software engineer who is making their mark in the world as an entrepreneur definitely has some choices. I suppose many of us can make an easy buck developing clever (and legal) sex/porn websites but many of us choose not to. We want to do something we wouldn't be embarrassed to show our mothers.

The Hippocratic Oath is more applicable to doctors because they have more professional independence administering medical care. Software engineers with day jobs do not have that type of autonomy.

EDIT to the replies:

I wasn't contending that porn and naked bodies are unethical. Possibly awkward to explain to your parents, yes, but certainly not unethical. Unfortunately, porn websites and the porn business are often very shady (malware, fake singles ads, etc.) and many of us choose not to get into that area.

As for the power for low-level software engineers to enact change in the world...

I would LOVE to lecture software engineers on morality. The Fountainhead's Howard Roark, the architect who blows up his own building to remain true to himself, is an incredibly appealing narrative. Howard did it and got the girl in the end; can't you do it too?!?

However, my standard for telling a software engineer what is right or wrong (when it carries serious personal consequences) is to back it up with a $100,000 check in my hand. That way, I can soften the blow of any consequences (e.g. his unemployment). Since I can't do that, my words are just armchair lectures and empty platitudes. Besides, my words of wisdom are not really necessary. The software engineers are already morally troubled by it; they're just not in an easy position to override their managers.

Personally, I don't have to write questionable software, but only because I have the economic ability not to. My finances allow me to avoid writing software I don't agree with, so it seems very wrong to use that as a pulpit for moral superiority over others.

I think many of the replies are being unfair to the life situation the typical software engineer faces. The rhetoric comparing the writing of code to optimize bank transactions to "gassing the Jews" or "experimenting on babies" is unwarranted. Yes, writing code to engineer financials is distasteful, but let's not get carried away with the counterarguments. The software engineers want to do good but they often feel powerless to do so. The harsh criticism levied by posters here does not change that at all.


The "hey, I've got a family to feed" argument is pretty weak. A lot of people have families to feed. A doctor is not going to behave unethically just because some "business exec or MBA" told him to experiment on babies. They take their oath seriously. We ought to have a "do no harm" oath also, and we should take it seriously, too. At the very least, we should apply the "Would we want our name and face all over national newspapers associated with this code?" test.


More people should go read "The Strategy of Conflict": doctors can point to penalties (far worse than the loss of a job) that they will suffer if they give in; MBAs know this, so they won't even ask. Software developers, on the other hand, will suffer no such penalties, so they are pressured to give in.

It is kind of the same as the book's point on blackmail: you want to put draconian penalties on the victim of blackmail (for giving in), so that he can point to them and say that whatever you are going to blackmail him with isn't as bad as that, and thus you wouldn't try to blackmail him in the first place.


Well, they also have a professional organization that covers for them.

Software engineers can't even bargain for overtime pay. Pathetic.


> Software engineers can't even bargain for overtime pay.

We do in Europe.


That's what it means to be salaried vs. hourly. You get the job done and don't mess around.


what do you mean can't? my god, did I break "the rules" every time I did it?


Now look what you've done... you're just going to get a bunch of replies about how professional organizations would indulge slacker programmers and ultimately cramp good programmers' style and pay.


> But like 99% of the rest of us, software engineers have families to feed and they can't go on a moral crusade.

I don't think "can't" is the right word. "Choose not to" is more appropriate. Are you willing to make a personal sacrifice for the greater good? There's always a choice.


> But like 99% of the rest of us, software engineers have families to feed and they can't go on a moral crusade.

That's such a crap excuse.


"That's such a crap excuse."

What, putting family before everything else? The effect a person can have on the society they live in is usually negligible (barring exceptional circumstances). On the other hand, the impact any parent can have on their family's wellbeing, by providing food and being an accessible parent, is usually significant.

So, does one choose the action that is unlikely to cause significant positive effect but is an opportunity cost, or the action that has obvious short term observable effects?

Man, it would be way cooler and more relaxing to be a moral crusader rather than a father but I choose the latter. It also drains me to a point where I really don't have any excess capacity to save the world.


The soldiers in the Waffen-SS had families, too.


reordering transactions was the example here. slow down with the nazi comparisons, they're out of place.


Godwin wins again.


Also the world has plenty of ineffective moral crusaders. Destroying yourself for no effective results can and should be frowned upon.


If, at a dinner gathering, I ran into the COBOL programmer who happened to write that code, it would in my opinion be highly inappropriate to back him into a corner and lecture him about morals. I'm sure he's quite aware that customers don't like it and I don't need to insult his intelligence. It's likely he's morally troubled by it -- he's just not in a position of power to change it.

That type of discussion about banking practices & fees is totally above his pay grade. Likewise, I'd be called out as a social justice warrior to berate that software engineer on something he doesn't control.

I guess we just have to disagree. I'm not sure what action or solution you're proposing? Should the software engineer quit his job?


I have quit my job in the past after being asked to write code I considered unethical. And it was nothing even close to the seriousness of deliberately screwing poor people with banking fees.

Let me turn it around. If I were interviewing someone for a job writing software, and somehow I found out he worked on that re-ordering software at the bank, while it wouldn't be an instant no-hire, I would very much like to hear an explanation about what good that person thought he was doing.


Providing value for the bank's shareholders, who are pension funds.

The old cannibalize the young because they can no longer work. If you look at any asset class, this is how advanced Western civilisations work.


I was asked a few years ago (when money was in fact still quite tight) to program a system that took the data from an insurance company and used it to launch a new product, without the insurance company's customers being any the wiser that their data had been 'mined' by the newly formed company. It seemed totally unethical to me, and it crossed some privacy laws as well regarding the creation of databases with privacy-sensitive information on individuals.

Someone else stepped up and built it for them. I couldn't care less about that; at least it wasn't my work that went to screw over a bunch of unsuspecting consumers.

At the same time I've had the porn industry as my customer for over two decades. I guess we all draw our lines in different places such that for every problem there is always going to be someone who will take up the slack.

Feel free to berate me for having sold to the porn industry, I have absolutely no excuse.

A guideline that would make stuff like this clearer would be appreciated. Right now the only one I have is the law: if it's legal, I'll build it; if I feel that it is illegal or gray, I will refuse.


Did your porn industry customers practice human trafficking or other forms of coercion? If not, then how would that sully you any more than someone who visits a porn site?


I guess we just have to disagree. I'm not sure what action or solution you're proposing? Should the software engineer quit his job?

Without getting into the rest of the argument, a reply to that last paragraph:

The absence of an alternative course of action doesn't make it right; it just explains it. Saying "I can't think of another way to deal with it" isn't an excuse, it's an explanation. He still did The Thing. You can't think of an alternative? Okay, that doesn't change that he did it.

It's an important detail because you're putting the onus of dealing with his situation on the rest of the world, while he is the one who got himself in there.

"What do you suggest he'd do?" Well, not getting into that situation in the first place would be a good start. Or at least assume responsibility for your actions, when you do.


> That type of discussion about banking practices & fees is totally above his pay grade. Likewise, I'd be called out as a social justice warrior to berate that software engineer on something he doesn't control.

Morals are not determined by pay grades. And anyone who uses "social justice warrior" as a pejorative is someone not worth listening to. If you act in fear of being called a name that says you care about social justice, you need to reconsider your world, for it is a rotten one and bad for you.

Be mindful of you, and expect others to be mindful of themselves, and hold yourself, and them, accountable for the choices they make. That's the only route--the only route--to a society worth living in. And yes, that does mean giving somebody a ration of shit for jobsworthing us into a worse world.

> Should the software engineer quit his job?

I have. I'd do it again. We are so very, very privileged in that money can be had more cheaply than self-respect. But so many of us, I think because we as a profession house so many people who are stunted in their ability to measure the world, act as if it were the reverse.


>Morals are not determined by pay grades.

Please don't put words in my mouth to make you look ethical and make me look amoral. I said "banking fees" not "morals" are above his pay grade. Which bank allows software programmers to set banking fees? None.

>"social justice warrior" as a pejorative is someone not worth listening to. If you act in fear of being called a name that says you care about social justice,

No. The SJW pejorative[1] would actually say I don't care, and I would deserve such an epithet. Lecturing someone about morals when I'm in no position to help them quit their job is empty platitudes and theatrics. It's the theatrics (not the moral principles) that get called out as SJW, and rightfully so. When a parent tells a child not to steal candy from the store, that's not SJW. The parent has the power to buy the candy or find a substitute later at home.

Yes, to some extent we as (anonymous) HN posters can all espouse the idea of holding people to a higher moral standard. Sure, in this thread, "someone losing their job" is just some abstract concept, so I can definitely join the chorus and tell them to quit the job, consequences be damned. Nobody here would know the difference.

However, I was talking about a real-life situation where the COBOL programmer was directly in front of me. In that scenario, I will not lecture him. I hope others are not so crass as to do it either (especially if the moral crusader is wearing tennis shoes made by exploited children, but that's another discussion). The programmer already feels bad about it. Lecturing him just depicts me as smug & superior and him as inferior.

>Be mindful of you, and expect others to be mindful of themselves, and hold yourself, and them, accountable for the choices they make.

I mostly agree with this. Where I disagree with the other posters is that I'm sympathetic to all those programmers who are basically good people but are stuck in Dilbert cubicles writing code they really don't want to write. The programmers aren't happy about it but they don't have other opportunities. The list is endless...

The programmers writing dark patterns on LinkedIn, the correlation tracking on Facebook, the dynamic ticket pricing at airlines, etc.

I'm not talking about programmers writing Superfish, or ransom-ware that encrypts the harddrive unless the user pays into a bitcoin account. Those folks are evil and if they didn't have programming skill to write viruses, they'd find another outlet for their evil such as skimming casino accounts or falsifying company expense reports.

Not everybody gets to write software to send virtual "Get Well" cards to kids with cancer. There just aren't enough of those morally perfect jobs to go around.

I'm talking about the hundreds of thousands just getting through the day at the office. I'm actually shocked that more of us aren't sympathetic to the situation and don't see that not every programmer supporting business grey areas can just quit. And testimonials from a handful of HN posters proudly proclaiming they quit do not convince people. That shows a total lack of understanding of what type of dialogue actually works. People need empathy and support, not righteousness.

It's not just programmers. It's the young lady at the department store selling overpriced cosmetics to customers who don't need them. It's the minimum-wage cashier at McDonald's constantly asking, "would you like extra supersize fries with that?" even though the customer in front of her weighs 300 pounds and is one burger away from a heart attack. You think someone lecturing a low-level worker about selling fries is appropriate?

I believe in moral sermons backed up by support (financial, offering a place to stay, etc.) if the employee loses his job. Otherwise, it's empty platitudes.

[1]http://www.urbandictionary.com/define.php?term=social+justic...


> programmers writing dark patterns on LinkedIn, the correlation tracking on Facebook, the dynamic ticket pricing at airlines, etc.

are those jobs really dilbert cubicles? i really have negative sympathy for these people's essentially first world problems and a backlash against them is inevitable, and in my opinion welcome at this point.


>are those jobs really dilbert cubicles?

Yes... that, or open office plans with Dilbert bosses. Direct your backlash at the corporate management, not the programmers. Many of the software engineers already feel bad and would like to do something else (start their own company, whatever.)

However, they have $50k-100k in college debt and avoiding the programming jobs that reprice airline tickets doesn't make that debt disappear.


really never considered linkedin or facebook as lower-tier undesirable jobs. if somebody works there, they can pretty much work most places, bar google, nasa, etc.

it's not my backlash, i don't own it - therefore i won't be directing anything.

just because somebody got into debt to get an education that isn't necessarily required to do a job doesn't absolve them - or warrant any sympathy.


This is a modern take on the Nuremberg defense.

You're right in a way. Values like these come from the top, in the project manager, the CEO etc. Morality is found in the company culture, and shouldn't be found exclusively in the worker bee software engineers.

But what else I see is that this can also come from the bottom up, in the way software engineers can "manage up". Software engineers are by definition more intimately aware of the details of a project, and they need to be listened to by management when they have a concern. Because frankly management can't be so aware of the details. It's our responsibility to be concerned about ethics, so that we can better manage up to spread that concern throughout an organization, so that it can filter down to someone else.


Labelling it a "moral crusade" is straw man rhetoric. It's simply about having the courage to do the right thing, to at least speak up rather than be a silent accomplice.

Most of the evil in the world is not due to top of the pyramid arch villains like the CEOs of the banks that screwed us all, Dick Cheney, the head of the NSA, or even Hitler. It's due to all the people below who enable it. It's called The Banality of Systemic Evil: http://opinionator.blogs.nytimes.com/2013/09/15/the-banality....


There is a secondary dimension here. The Hippocratic oath works because all the other doctors have also agreed to do it. Whenever faced with one of these scenarios, you also have to ask yourself: "If this is the hill I choose to die on, will it actually make a difference?" If the next person in line is going to look at your example, shrug, and do the thing you refused to do, then your self-sacrifice was a senseless waste, because the thing has happened anyway and the only change you have effected in the world is self-harm.

As a software engineer, it is extremely likely that you are replaceable, and you always have to factor into your considerations what your replacement is going to do. You don't usually have the choice between "the thing happens or does not happen"; your choice is often something more like "you can do the thing, try to convince people that it's a bad idea, or roll the dice and see if your replacement - who is expected to be more average than you - is both willing and more able to convince people that it's a bad idea".

The average doctor follows the Hippocratic oath. What does the average software engineer do?

(The above argument does not apply if your goal is publicity/drama rather than effective change)


The Hippocratic Oath is so that people place trust in doctors - doctors can just as easily choose to accelerate your death as take care of you. The oath is a statement against that outcome.

Software engineering does not have that sort of heavy load.


>On the other hand, a software engineer who is making their mark in the world as an entrepreneur definitely has some choices. I suppose many of us can make an easy buck developing clever (and legal) sex/porn websites but many of us choose not to. We want to do something we wouldn't be embarrassed to show our mothers.

That is a potentially interesting point you (accidentally) skipped over: I have zero moral issues with making a porn site, unless it did scummy things (install malware, etc.). How can we agree about ethics if what we genuinely consider bad is so different?


>I suppose many of us can make an easy buck developing clever (and legal) sex/porn websites but many of us choose not to

My project pipeline is half full right now. Where are those easy buck projects ...


err, don't all chartered engineering organisations already have this? And wasn't Daniel McCracken involved with this decades ago?


It's hard to leave money on the table.


draft notes, no endorsement. could be wrong, read at own risk. perspective from over 51.3 years plus former professional engineer.

THE GENERAL PATTERN (like 'climategate' for scientists): 1.) Point to the pilot flying the plane, even where the computer system tends to glitch. 2.) The FASTEST WAY is TO BLAME THE PILOT, rather than the fastest way to avoid errors, which would be MORE RESTFUL SLEEP for the pilot.

Read the book Death March: too many sprints and too much overtime for the software engineer. Yes, I am a former system admin.

3.) Top engineers and even GOVT engineers - a professional license helps with reputation, but see the FUKUSHIMA nuclear disaster, the alleged incompetence (like gotofail gotofail) in the DAMS of New Orleans, etc.

4.) Very hard to stop the corporation - the one-size-fits-all PARADIGM plus THE ECONOMIC GAME - externality costs are often ZERO.

Simple. Microsoft makes more money on rentals when your code breaks. In economics this is called perverse incentives. THE NEXT SCAPEGOATS WILL BE SOFTWARE ENGINEERS. Every other profession/industry has had its reputation destroyed, and 'the bankers' will need to point the finger at someone.



