As a historical note, automatic data processing was in use even before electronic computers existed, and was even used by the Nazi regime to support genocide:

http://en.wikipedia.org/wiki/IBM_during_World_War_II
That's why it's important to actively care and to organize society to minimize the potential for undesired uses of what the technology makes possible: technology can amplify both good and bad acts. Society has a much easier time influencing that before the tipping points; after them, it can act only once a lot of harm has already been done. On the other hand, once a tipping point is reached, even a system that didn't exist before would be implemented quickly.
The position of "consumers" in the digital world also invites some interesting comparisons:

http://www.wired.com/2012/11/feudal-security/
In practice, every choice about how to care for and organize society becomes a question of what to control and what not to control, who can be trusted and who can't. Then it just seems to go in a circle: regulation on top of regulation. People make judgments and claim they are assertions rather than assumptions, based on correlative and observationally biased inferences. The system becomes deterministic, driven by individual impulse rather than by caution, patience, and open trust.
With individual impulse, it becomes a question of who gets to shape society rather than how society is shaped. Whoever moves first has a short-term advantage. But this can be twisted by greed, selfish desire, and delusions of grandeur, or the idea that "what works for me will work for you". Then things get pushed toward the tipping points. People start believing that power hierarchies, group divisions, and differing levels of intelligence and ability are intrinsic and permanent properties of their existence. And so the pendulum keeps swinging.
I don't think we will ever be at a point in our existence where we know what to do before we have to do it.
> It's also a task of every engineer to consider what could go wrong.
Within a reasonably defined threshold. If I have a business making rubber ducks, I don't have to design those ducks to a specification that they withstand temperatures of 200 degrees C.
> Then, not in theory, but in practice, do you think we should care about the topic of the article?
I do care. I choose not to work for places whose intent in and use of such things I disagree with. I know I have the capacity and capability to work at those places, but I don't want to contribute my intellect to something I consider destructive, to the best of my knowledge and awareness.
But even given my choices, I never feel like I have the right answers. I can always find perspectives from which I could be wrong. I try to pick the one I consider 'least wrong'. It's not really a lesser-of-two-evils thing; it's more about reducing the probability of things going wrong. I'm also young and probably very naive in many ways.