
Evil behavior, however precisely defined, has been and always will be with us. Technology enhances what we do as human beings and, hence, always has the potential to be applied to ill uses. If someone, then, takes what you develop and applies it for a purpose you never intended in creating it, that is an item beyond your control. Alan Turing - who applied his genius to confer what can only be called immeasurable benefits on society and who used his skills to crack Nazi codes to help end a terrible war - is not ethically responsible for the many consequences inevitably brought into the world simply because computing power can be used to magnify the effects of human evil. Was he (or is any other engineer whose technology is misused) a causal agent in the various bad outcomes we can identify in, for example, the enhanced lethality of weaponry or in the massive spying by governments on their citizenry? In a narrow sense, perhaps yes. When one traces things back up a causal chain, one can theoretically identify every individual actor who made technical innovations that culminated ultimately in a particular bad use of whatever type that afflicts us today. But, though a cause-in-fact, Mr. Turing (and the many engineers who followed him respecting any given facet of computing technology) is not what the lawyers call the "proximate cause" - that is, the immediately enabling agent - of the outcome. Meaning, if you deem it unethical to build bombs, then don't do work for a defense contractor helping to build bombs because your every innovation will be immediately applied to a use you deem unethical. The same for working for NSA in developing sophisticated spying technology. Or for whatever other ill use you can identify in society. 
But, beyond avoiding direct conduct by which you are proximately helping to cause an outcome you deem wrong, you as a technologist basically have no control over how your work may be applied by others and, as the collective results of such work eventually permeate society, your moral responsibility for the indirect results of your work effectively stands at zero. If the operative standard were otherwise, then all innovation would stand frozen altogether because it is always possible to conceive of an ill use for any technology that makes things faster, more powerful, more efficient, etc. Put any such thing into the hands of human actors and some bad results are guaranteed to follow given enough time and opportunity. Thus, unless one is to freeze all productive activity or is to go insane second-guessing how others might pervert that which is being done for good, engineers must perforce ignore tangential ethical implications over which they have no effective control.

I think it is fair to say that each of us in our given professions (mine being law) ought to avoid being a proximate cause of something deemed wrong even though technically legal (for example, I would not be a "mob lawyer" even though there are some technically very good lawyers who do that work). But even here that is an individual choice for each actor to make. For engineers, some may see it as a great opportunity to do advanced work in some of the social media companies, while others may regard such companies as being engaged in unethical conduct, as they at least sometimes use dubious techniques to try to corral us as consumers into their tight little worlds. For any given engineer, working for such a company doing such things is a matter of conscience. Some may say yes, others no. The same is true in working for a defense contractor or for the government. Or for any other work that is legal but ethically suspect in the eyes of some but not others. It is your choice and it is your conscience.

The author of this piece reflects what I call the bane of associational thinking. He uses the royal "we" to define a group (here, engineers) and then prescribes very broad goals for what "we" "ought" to do. Since all the things described now stand as a matter of private choice over which the "we" group as a whole has no say, the only way to translate this sort of thing into practical action is to form formal associations, assign things to committees, and then issue a series of prescriptions on what the group members ought to do. That may be fine in terms of the association giving rules that amount to exhortations to do good (who would disagree with that?). But, beyond that, do you really want an organized association dictating what today stand as your private choices for your career? Or, worse, do you want such an association to lobby governments to adopt its strictures and give them the force of law? I would think not. How, then, can "we" do better? Of course, the question is always described as difficult and is left for further discussion precisely because it has no real answer apart from acting on individual conscience or apart from the potentially coercive alternative of letting some association or government dictate your career choices and opportunities. Perhaps this sort of reasoning is justified as encouraging people to have a heightened conscience about what they do, and in that respect it is fine. But that is really as far as it goes before veering into unacceptable alternatives.

Our capacity to do wrong is innate to our nature, as is our capacity to do good. We should not stop trying to do good through our creativity just because others can take what we do and commit wrongs with it. Nor should we feel guilty about what we do as long as we in good conscience can say to ourselves that we are doing something productive and worthwhile and not directly causing harm to others. The "we" issue is in reality much more of an "I" issue and, for that, you should examine what you do carefully and strive for the good regardless of what others may do with it. If you want to exhort others to do better by your standards, then all the better. Just don't dictate to them on matters over which people in good conscience may disagree.



Your point about causation is a good one. It's worth thinking through the causal chain of how your work is used as an engineer, even if you conclude that you are not the proximate cause and therefore do not bear responsibility.

I live in the Delaware Valley, an area of the country devastated by engineers. Engineers built communications and automation technology, which has allowed executives to outsource and eliminate jobs far more quickly than people here can get trained for new ones. The human toll of these changes has been higher than drone and surveillance technology combined.

Are engineers working in communications and automation the proximate cause of these changes? Maybe, maybe not. But how long or short is the chain of causation? I think it's not very deep, not when an automation company might market its technology by mentioning the labor cost savings. At the very least, before anyone gets sanctimonious, they should think about what sorts of impacts their own work has on other people.


In my previous job this hit home quite hard: one day, many of the people who had helped us build the system that automated their jobs were laid off.

Then once the system was fully streamlined we developers were laid off as well.

In addition some of the end product was used in aerospace so there's a good chance it could be used for 'bad' (depending on your perspective) things.

Now I work in video games, not curing cancer, but in my search I was looking for a company that at least did no harm.


> Now I work in video games, not curing cancer, but in my search I was looking for a company that at least did no harm.

Ironically, one of the reasons I left EA was that I saw my CTO and half of the programmers around me assigned to the task of figuring out how to outsource more development. That didn't seem like a winning proposition to me.


I would say that outsourcing development jobs is at least a wash. Relatively wealthy people in developed countries may be (temporarily, let's be honest) out of work, but far less wealthy people living in less developed countries will have the opportunity to make what is, for them, a good wage.

Automation is harder to justify this way. Outsourcing moves jobs around; automation is intended to eliminate them (yeah yeah, we need people to make and fix the robots, but let's be real, there is a net loss of jobs, and we can only hope that cheaper products will trigger the creation of new, largely unrelated, jobs).


Yeah, a $5 an hour job is created in China, but a $20 an hour job is eliminated in America, and much of the difference is captured by some executive or shareholder in the U.S.


> automation is intended to eliminate [jobs]

Automation has the potential to eliminate jobs, but it also has the potential to let a single person do much more work, or to do the same work with less effort.

Perhaps I'm oversimplifying your words, but I don't think automation is inherently "evil". Like any tool or technique, it may be used toward good or bad ends.


Well, I wouldn't say that automation is evil, and automation certainly can be used primarily to scale processes. But I think that if a process is already running at capacity (say, you are already producing more wheat than the world needs), then automation will tend to reduce prices (or at least costs) and reduce jobs. The end-game is total automation (hopefully with everybody enjoying the fruits of that past labor, Star Trek style).

I think that automation in general is a worthwhile endeavor, but we need to be mindful of the downsides and modify our society as we implement more automation to ensure that we are not causing undue harm. I believe that various forms of social safety-nets will become essential as we march towards automation's logical conclusion.


So if we used to build a road with a crew of 50 guys with shovels, should we just ignore the invention of the bulldozer so those men don't lose their jobs?

95% of Americans used to work in agriculture. Should we all still be farmers today because adopting technology would cost some of the farmers their jobs?


One of the big problems with engineers is that they like solving problems, but sticking around to debate how those solutions will be used is "politics" and they say, "I hate that shit". Thus it's easy for the psychopaths who run this society to come in and use automation for bad (depriving others of participation, rather than distributing the benefits).


The EA comment brings up a horrible (but true) thought. One of my colleagues was discussing the video game industry (which he left in disgust). People accept terrible terms to work in it, but in doing so, they make it worse for everyone. People who accept 60 cents on the dollar and mandatory 80-hour weeks and death-march projects to work in video games are making it worse for everyone else who wants to work in that industry and are, in a real way, being unethical.

I'm not anti-market, because I can't come up with anything better as a general economic problem-solving tool, but they do have the undesirable effect of often pitting have-nots against other have-nots, when it would be morally better for them to team up and maybe get a fighting chance against the haves. It's easy in New York (I lived there for 7 years) to hate "rich assholes" (and foreign speculators, and rent-control royalty) for the rent situation, but every time I paid that rent check, I was just as much a part of the problem.


> Engineers built communications and automation technology, which has allowed executives to outsource and eliminate jobs far more quickly than people here can get trained for new ones. The human toll of these changes has been higher than drone and surveillance technology combined.

I think this comes down to a conflict (not very well fought from the engineers' side) between cost-cutters and excellence-maximizers. The first category want to take something that's already being done and cut people out of the action. That's not always a bad thing, because they attack inefficiencies and should, in theory, make the world richer. However, they end up taking almost all of the gains for themselves (and externalizing costs). The second also want to cut costs, remove grunt work, etc. but because they want to do more, i.e. "now that I shaved clock cycles off of this operation, that frees up resources to do more cool shit".

Businessmen tend to be cost-cutters, because that's the one thing people can agree on in executive tussles. For executives, R&D, philanthropy, etc. all devolve into bikeshedding, but the bottom line is a common language. People with vision, on the other hand, tend to get into conflicts and causes that have negative expectancy for their political fortunes. Engineers tend to be excellence-maximizers.

The excellence-maximizers do believe that they're helping society and adding value-- and they're right, at least on the latter. They cut jobs and create value. The problem is that society is run by greedy cost-cutters who have no vision but a lot of greed, and who make sure that none of the gains trickle back. Thus, those affected by the industry changes never get the resources (time, money, education) to survive them.


Right. Automation increases the overall productive capacity of a society, but does so in a way that (by reducing the demand for labor) allows holders of capital to capture more of the value generated by that production for themselves.

That said, I'm not sure what to do with that realization other than hold on to it as a vaguely disquieting feeling. I'm certainly not advocating that engineers do less in the way of creating automation or communications technology. I tend to believe it's the job of the political class to reconcile technological change with societal well-being. But that same thinking applies to engineers in the defense industry as much as to engineers in the automation industry.


> I tend to believe it's the job of the political class to reconcile technological change with societal well-being.

In a democracy (including a representative democracy) the "political class" is the citizenry at large, so that responsibility belongs to everyone in such a society. (Accepting, arguendo, that it is an obligation of the "political class".)


> I tend to believe it's the job of the political class to reconcile technological change with societal well-being.

I agree-- but I also don't trust the current "political class".

It's made worse by the current Silicon Valley arrogance, which assumes everything "big" (esp. government) to be intractably mediocre (and, therefore, useless) because the whole populace (i.e. the full IQ spectrum) is a part of it. I feel like this secessionism is a rather Machiavellian move by the technological elite to convince their underlings not to see the big picture, because it's all mediocre and inefficient out there anyway. The attitudes coming out of both the technological and political elites (both anti-intellectual and limited in their own ways) are bad for both sides.


From what you write, I get the impression that you consider this article irrelevant and worthless, especially because "engineering tools can always be used both for good and bad," and thus an engineer bears no responsibility at all for where his creations are used. But an engineer can choose which company he works for, can't he? So he already has some choice! Sure, in the end, a tool he created might get used by an "evildoing" company if the buyer of the engineer's work resells it, but at least the engineer had a real chance to make that harder by not selling directly to the evildoer. I don't understand how you can claim that an engineer cannot make such a choice and "must perforce ignore tangential ethical implications."

You also claim that if an engineer wanted to care about ethics, he would have "to go insane second-guessing how others might pervert [his work]." That is the classic rhetorical maneuver of exaggeration: why claim his only option is "insane second-guessing"? Don't strip the engineer of his capacity for basic observation and critical thinking, because perfectly sane observation already has the power to reveal quite a lot of morally suspicious activity at a company. So does an engineer have the moral right to close his eyes, or to claim that he does not facilitate what he sees and, by his work, does in fact facilitate? Is that what you claim?

Now, the first problem is that it's easy and cozy to claim no responsibility and no power. But that always was, and always will be, easy. The second problem, and this is how I understand the gist of most of the article, is that because of corporations (especially their size), this kind of observation and reasoning has recently become quite a bit harder. That is exactly why it's worth making the effort to think about how to reverse that, so that engineers can regain more control and insight here.

Also, of the many problems I have with your response, one that stands out is your claim that the article calls "to form formal associations, assign things to committees, and then issue a series of prescriptions on what the group members ought to do." Where, oh where, does it give even the slightest hint in that direction? Please!


As an engineer I really hate the idea that it should fall to me to quit as basically my only option if my boss wants me to do something that I think is unethical. I understand where the authors of the Guardian article are coming from. But I also think it's shitty to suggest that because of someone else's poor morals I have to find a new job.

I say this as a guy who's successfully avoided working at companies where I might be put in the position of having to make that choice. So I'm not speaking out of bitterness at having had to quit a job over this. But it does kinda bum me out that there are a lot of jobs I can't take because of the possibility of my work getting abused.

What bugs me is that the implicit assumption here is that management is corrupt and can't be trusted to be ethical themselves, and thus it falls to the engineers to boycott actually doing what they pay us to do. Why shouldn't the blame go to the people who are holding the reins?


Responsibility isn't zero-sum. The fact that you have an obligation not to act irresponsibly does not absolve your manager of the obligation not to ask you to. This particular article focuses on one particular layer in the hierarchy, but the author isn't implying that morality is only relevant in the trenches -- it applies at all levels, and shortcomings at any level should be addressed. This was an article written by an engineer for an engineering audience; tomorrow you might see an article in a different newspaper for middle managers or company directors or shareholders.

As an aside, though "exit" may be a more powerful force than "voice", I wouldn't discount people's ability to change corporate behaviour by mechanisms other than boycott. In fact, if moral workers are to steer clear of questionable businesses, it is unreasonable to expect those businesses to maintain any kind of moral compass.


I think the idea is that you can work for anybody, but you should think about what you're doing. If everything the company does is immoral, don't work for them at all. If not everything is immoral, you can do what's moral. Usually it's all shades of gray.


> He uses the royal "we" to define a group (here, engineers) and then prescribes very broad goals for what "we" "ought" to do.

That is pretty much the definition of ethics [0].

http://www.scu.edu/ethics/practicing/decision/whatisethics.h...


To me, an ethical decision is one in which the result does not, and will not, put an individual, or a group of people, at an obvious disadvantage, while benefitting another.


The closing passage in that link is representative:

> Ethics also means, then, the continuous effort of studying our own moral beliefs and our moral conduct, and striving to ensure that we, and the institutions we help to shape, live up to standards that are reasonable and solidly-based.


There seem to be a lot of defensive comments, which I guess come from the focus on individual guilt. Focusing too much on individual choice means examining one particular solution approach rather than the problem itself, which is what needs to be solved. Lack of awareness is not a valid excuse: even when some people speak out on the harmful consequences of technology (people have long warned about the surveillance state), they are ignored, since we don't have systems in place to do a cost-benefit analysis and coordinate around a solution. Sure, there are many problems with the individual approach (what if somebody else is hired? what if opponents develop a weapon technology?). But that only highlights the difficulty of the problem. This is why the 'we' becomes appropriate, because individuals are weak (unless they sit at some critical stage).

There are potentially many developments in areas like nanotech, bioengineering, and AI that could cause humanity big trouble. For those, there needs to be some strategy for identifying such potential problems and putting precautionary measures on R&D. Something like this happened when nuclear scientists withheld their research before WW2. That is harder to replicate now, with the science and engineering community distributed more widely across competing nations. So a first step could be to achieve a shared understanding of common interests and of ways to coordinate with each other.

Alternatively, energy can be poured into defensive measures for defeating harmful technologies. We already have many engineers developing counter-technologies. Also, Snowden himself was an example of exposing something which he considered harmful.


> Evil behavior, however precisely defined, has been and always will be with us.

But from what I've heard, Zarathustra was the first to tell people of the difference so that they could see it.

No reference, just something I think I've read or heard somewhere, sometime. Or maybe Nietzsche[1] said it somewhere.

[1] http://www.gutenberg.org/ebooks/1998


You've said this much better than I ever could. I wish I could write so clearly. Much respect.


[deleted]


This kind of thinking only works if all intelligent people end up sharing and following your same ideals. Since that's just not going to happen, is it better to leave your country in the weak position?

How would the cold war have played out if only one side had nuclear capabilities?

I'm also not sure the NSA's intelligence work is on the same level of difficulty as the Manhattan Project. As I understand it, most of it is general information retrieval techniques. The tool I wrote to help troubleshoot our VoIP networks is literally the same thing I'd build if I were writing a mass-spying tool. (Collect and index every single packet, store indefinitely, provide fast on-demand access on any selection criteria.)

Edit: Parent comment was equating NSA surveillance in difficulty to the Manhattan Project, and making the case that if no one like Feynman had worked on it, there wouldn't be nukes.
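For what it's worth, the "collect, index, query" pattern described above is simple enough to sketch in a few dozen lines. This is a hypothetical toy, not the commenter's actual tool (the class and field names are invented), but it shows the point: nothing in the code knows or cares whether the packets are retained for VoIP troubleshooting or for surveillance.

```python
# Toy sketch of a collect/index/query packet store. Hypothetical names;
# a real tool would persist to disk and index far more fields.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    ts: float
    payload: bytes

class PacketStore:
    """Keep every packet indefinitely; index each field for fast lookup."""

    def __init__(self):
        self.packets = []              # append-only archive of all packets
        self.index = defaultdict(set)  # (field, value) -> set of packet ids

    def collect(self, pkt: Packet) -> None:
        pid = len(self.packets)
        self.packets.append(pkt)
        for fld in ("src", "dst"):     # index every selection criterion
            self.index[(fld, getattr(pkt, fld))].add(pid)

    def query(self, **criteria) -> list:
        """On-demand retrieval on any combination of indexed criteria."""
        ids = None
        for fld, val in criteria.items():
            hits = self.index.get((fld, val), set())
            ids = hits if ids is None else ids & hits  # intersect criteria
        return [self.packets[i] for i in sorted(ids or [])]

store = PacketStore()
store.collect(Packet("10.0.0.1", "10.0.0.2", 1.0, b"INVITE"))
store.collect(Packet("10.0.0.3", "10.0.0.2", 2.0, b"ACK"))
matches = store.query(dst="10.0.0.2", src="10.0.0.1")
```

Swap the in-memory dict for a real inverted index and the `Packet` fields for full protocol metadata, and the same design scales from a network-debugging aid to a dragnet.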


Sorry about that. I retracted my comment because I felt it was meandering and muddled.

> This kind of thinking only works if all intelligent people end up sharing and following your same ideals. Since that's just not going to happen, is it better to leave your country in the weak position?

Indeed, this is one of the reasons why there's not really much to say on the subject of ethics regarding the NSA's behavior. It's in people's nature to want to acquire as much power as possible, and there are endless justifications for doing so.



