MIT study finds that human subjects prefer when robots give the orders (newsoffice.mit.edu)
60 points by sgy on Aug 24, 2014 | 33 comments



I spent more time than I'd like to admit trying to find the actual paper (it's in the right-hand corner under "Related"; I blame banner blindness!)

Link for others like me [PDF]: http://interactive.mit.edu/sites/default/files/documents/Gom...


Another misleading headline. The press can be excused for this, but MIT should know better.

More realistically, 'when working with machines to do repetitive manual work, people prefer not to have to think so much.'


I found it interesting. It would be nice to know more about the subjects in this test. It might be that science students would be more at ease with autonomously behaving robots than the average factory worker, so I hope they didn't just ask students from the department to volunteer.


Your suspicion is confirmed. Most participants were at or closely connected to MIT. They seem to use rigorous math for hypothesis testing, but if the personality and other characteristics of the participants are not representative, their results may not apply to the general public.

I wonder why they did not cast their net wider, e.g. advertise in the Boston Globe, and statistically account for differing backgrounds, which is a pretty standard methodology in social science research.
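
For what it's worth, "accounting for differing backgrounds" usually just means adding demographic covariates to the model. A minimal sketch in Python with statsmodels, where every column name is hypothetical:

    # Sketch: does the robot-vs-human allocation effect survive
    # demographic controls? All column names here are made up.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("participants.csv")

    # OLS with demographic controls; the coefficient on `condition`
    # (manual vs. robot allocation) is the effect of interest.
    model = smf.ols(
        "preference_score ~ condition + age + is_student + tech_familiarity",
        data=df,
    ).fit()
    print(model.summary())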

From the actual paper: http://interactive.mit.edu/sites/default/files/documents/Gom...

> The participants (14 men and 10 women) had an average age of 27 ± 7 years (minimum and maximum ages were 20 and 42) and were recruited via email and fliers distributed around a university campus.


Robots also don't play favorites or office politics. If there is a more efficient/better way, they will take it. Even better when they're allowed to re-calculate with new parameters.



That really depends on how the robot has been programmed. It's quite likely that robots in office-type settings will have political aspects to their decision making, at least to the extent that bits of corporate policy have been embedded.


KPIs might be a parameter here


I can't find the Wiki page on it, but there is a great "rule" out there (Goodhart's law, more or less): as soon as you tell people what they are getting measured on, they will put their whole energy into gaming the measurements to their advantage.



The issue with this is that the efficient way is not always the better way.

If you keep assigning the same menial and crappy task to a worker because they get it done quick, yay efficiency! Except if you're asking the same worker to clean the bathrooms three times a day every day, it's not going to take long before he goes "fuck this, I'm going to work somewhere else!"

The robot will assign the next most efficient person, who's going to be right behind him on the train out, and so will everyone who sees it coming. So you'll either end up with people gaming the system and slacking, or you'll end up with an empty business.

People might like the idea of robots because they don't play favourites or office politics. However, I'm willing to bet that people are going to hate that same robot really quick because it doesn't play favourites or office politics.

Say you know you can always count on Joe to cover a shift. If he comes and asks you, a human, for a day off, and you know it'll leave you short staffed for a day, would you give it to him? Yes, because on the 99 other days you'd otherwise be short staffed, you won't be, since you've got Joe. And you know that if you say no, you'll be short staffed on those other 99 days, because Joe's going to make sure he's busy lying on the couch eating Cheerios and watching Jeopardy because you pissed him off.

The worst managers I've personally faced are either the ones that blame everyone else, or the ones who are there to "do a job and not make friends". The latter is the robot.


All of the logic and anecdotes you presented could be computerized.

Retention is something you could optimize for in the long term.

Reliability for covering shifts is a number too.

I think you're right on a small scale: humans can make generally reasonable judgement calls with little data.

If you think about the future, though: if a big corp can optimize middle-management robots with 100,000 employees' worth of data, it probably will.
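
A toy sketch of what computerizing those judgement calls could look like (Python, with every field and weight invented):

    from dataclasses import dataclass

    @dataclass
    class Worker:
        name: str
        efficiency: float         # tasks/hour on this task type
        recent_crappy_tasks: int  # how often they drew the short straw lately
        shift_reliability: float  # fraction of cover requests accepted

    def assignment_score(w: Worker, quit_risk_weight: float = 2.0) -> float:
        """Higher is better. Penalize piling menial work on one person,
        reward people who reliably cover shifts. Weights are invented."""
        return (w.efficiency
                - quit_risk_weight * w.recent_crappy_tasks
                + w.shift_reliability)

    workers = [
        Worker("Joe", efficiency=9.0, recent_crappy_tasks=5, shift_reliability=0.99),
        Worker("Ann", efficiency=7.0, recent_crappy_tasks=0, shift_reliability=0.80),
    ]
    # Pure efficiency would pick Joe every time; with a quit-risk
    # term, Ann gets the toilets today and Joe gets a break.
    print(max(workers, key=assignment_score).name)  # -> Ann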


While you're correct that bad algorithms will produce bad outcomes, forward-thinking companies will improve the algorithms with feedback from the workers.

If previous Lean / Six Sigma studies are correct, this feedback loop will lead to improved employee morale as they become the drivers of decision making and less likely to feel disenfranchised.

A layer of management can be removed, and it is mismanagement that generates the most workplace animosity, as you say yourself.


Hopefully over time the system could also optimise retention, if it can correlate being assigned certain tasks often with quitting. You could also have employees fill out satisfaction surveys and weight efficiency against enjoyment; perhaps better employees could be rewarded with more weight given to enjoyment.

Naturally there's going to be more variance in employee performance on some tasks than on others. For example, being a cashier during a quiet period will have less variance than toilet cleaning. A workaround might be to pay a bonus for good performance on high-variance tasks.
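
A rough sketch of that weighting, assuming enjoyment comes from a 0-to-1 survey score (all numbers below are invented):

    def task_utility(efficiency: float, enjoyment: float,
                     enjoyment_weight: float = 0.5) -> float:
        """Blend measured efficiency with survey-reported enjoyment,
        both on a 0-1 scale. Better performers could be rewarded
        with a higher enjoyment_weight."""
        return (1 - enjoyment_weight) * efficiency + enjoyment_weight * enjoyment

    # Fast but hated toilet cleaning vs. slower cashier work you enjoy:
    print(task_utility(efficiency=0.9, enjoyment=0.1))  # 0.5
    print(task_utility(efficiency=0.6, enjoyment=0.8))  # 0.7 -> assigned instead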


I'd invert that. You can't complain, whine, or plot social solutions in response to getting assigned tasks by a robot. There's a lower mental load when you get a task, since you don't have to worry about how your reaction to the assignment gets interpreted by your boss.


And I'd invert that: unlike a robot (unless programmed to be cruel), you can see your boss getting off on commanding you to do ridiculous shit. With a robot, you can just think, "Well it's just a brainless robot." With a boss, you think, "Why doesn't that glib asshole use their brain... or better yet, let me use mine?"

(Pardon if my language seems salty; "asshole" and "shit" are actually popular technical terms when dealing with the domain of bosses.)


Interacting with robots is often easier and simpler than interacting with people. Now that AI is on the cusp of being able to replace not just manual workers, but knowledge workers of all types, even those service jobs that have always seemed safe in the face of automation are in danger of dying out. Especially in light of evidence like this that people actually prefer robots to humans in many of those situations.


It's easier than interacting with people who are stressed or under duress, which is usually the case with service jobs. In any case, it seems that you didn't read the piece, since it says absolutely nothing about these situations.


Immediately thought of Manna: http://www.amazon.com/Manna-Two-Visions-Humanitys-Future-ebo....

(FWIW, I don't think it has aged too well.)


I liked it, although I was annoyed at the end after the author spent an entire chapter sermonizing about the developed world's failure to help out the developing world. And what do they do, after they end up in Aus, to help out people still stuck in a dystopian nightmare? Well, fuck all, as it turns out.


It's because many people don't second-guess the motives of robots, just like in the past they wouldn't second-guess 'facts' that were professionally printed in newspapers and books.

I think that trust will come to seem anachronistic as people understand more deeply that the people who dictate a robot's behavior are just like they are, with motives just as suspect as anyone else's.


Fellow stranger... while reading the comments on this thread, keep in mind that the HN crowd is highly computer-oriented and has a lot of faith in computers & algorithms.

ps. When someone can write an algorithm that simulates/predicts the perception of a specific, targeted human in 99.9% of cases, then I might seriously consider the idea of letting a robot run a factory.


We let robots give orders all the time. The traffic light is one example.


The research is related to artificial intelligence, not pure automation.


I think it's a fine line. The article did mention that the robots were guided by an algorithm, and traffic lights, too, are controlled by algorithms, so what's to say this situation isn't the same as following traffic lights?


Also, it's possible to have traffic lights that vary the frequency of their light changes depending on the traffic on each side of the intersection.
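
That's usually called an actuated signal. A minimal sketch of the timing logic, with made-up numbers:

    def green_duration(queue_len: int, base_s: float = 10.0,
                       per_car_s: float = 2.0, max_s: float = 60.0) -> float:
        """Longer queues on an approach earn longer greens, up to a cap."""
        return min(base_s + per_car_s * queue_len, max_s)

    for cars in (0, 5, 40):
        print(cars, "cars ->", green_duration(cars), "s of green")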


This could be because robots don't have egos and don't take pride in giving an order; they don't have emotions either. So for a human, it's just a message to act upon, unlike an 'order' given by a human in a tone that varies with their emotions. For instance, if someone gives an order in an angry tone, it can be humiliating to the subordinate. So let these robots give orders in a rude tone and see the consequences for yourself :)


An example of a robot which gives people orders, in the sense of generating stimuli to which correct responses are required, is a video game, and those have been shown to be addictive.

The next Tetris block that is about to fall is essentially a robot's order: when you're done with that one, place this one somewhere!


This is just preparing us for the post terminator world :)


Less Wrong told me this day would come.

This is robot-written propaganda! You've all heard of them writing articles. Now look at the consequences.


Please don't make novelty accounts on HN.


I, for one, welcome our new robot overlords...


I came here to say this.



