Robot: I Now Have Common Sense. Engineer: Great, Go Fetch Me a Sandwich (singularityhub.com)
51 points by ph0rque on Oct 8, 2011 | 25 comments



The part that really interested me was the process by which a robot orders a sandwich and pays for it. Unfortunately, the video fast-forwards through that bit. The way the cashier looks at the cameraman suggests that there was some human assistance there.

My point is that encoding probabilities about locations isn't enough. When the robot knew that the fridge was a probable place for a sandwich, it also knew how to open the door of the fridge, and that this was a required action to obtain a sandwich in that location. "Open the fridge door and look for a sandwich" is analogous to "Talk to the cashier and order a sandwich", but it seems like the robot can't do that yet. Subway also sells drinks, and if the person had asked for a drink, and Subway had been found to be a probable location for a drink, the actions needed at Subway would have to be different to get a drink rather than a sandwich.
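To make that concrete, here's a toy sketch (hypothetical Python; none of these names come from the actual system). A prior over locations alone doesn't tell the robot what to do once it arrives; each (location, item) pair needs its own action plan:

    # Toy illustration: knowing P(item | location) is not enough; the
    # robot also needs the right procedure for that location and item.
    location_prior = {"fridge": 0.7, "subway": 0.5}  # made-up numbers

    action_plans = {
        ("fridge", "sandwich"): ["open fridge door", "look for sandwich", "grasp it"],
        ("subway", "sandwich"): ["queue", "order from cashier", "pay", "take sandwich"],
        ("subway", "drink"):    ["queue", "order drink", "pay", "take cup"],
    }

    def fetch(item):
        # Visit the most probable locations first.
        for loc in sorted(location_prior, key=location_prior.get, reverse=True):
            plan = action_plans.get((loc, item))
            if plan is None:
                continue  # a likely location is useless without a plan for it
            print(f"at {loc}: " + ", then ".join(plan))
            return True
        return False

    fetch("drink")  # at subway: queue, then order drink, then pay, then take cup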

It's cool research, and very useful, but it's worth pointing out that this robot can't yet actually fetch a sandwich from Subway unassisted.


The robot is currently able to go to the fridge to pick out specific beers. I don't see why a sandwich would present much more difficulty.

"it's worth pointing out that this robot can't yet actually fetch a sandwich from subway unassisted."

I think the fact that sentences like this are actually being written these days is pretty damn impressive in itself.


I'm really not seeing anything impressive here. There's zero "true" AI at work, and common sense is actually the cornerstone of true AI.

The robot still needs to be told about the existence of Subway and the probability of getting a sandwich there. It can't deduce for itself that the new restaurant across the street has sandwiches.

Edit: please don't downvote because you think my opinion doesn't coincide with your awe at seeing a robot buy a sandwich.


You've fallen victim to the AI effect. http://en.wikipedia.org/wiki/AI_effect#AI_is_whatever_hasn.2...

People still need to be told where sandwich shops are. If you walk by any given commercial space, you need either to see that it is indeed a restaurant or to see an explicit mention of sandwiches in order to guess at the probability of being able to get one there. Reasoning about restaurants isn't the purpose of the research presented here, though.

There is no one thing called AI. AI could involve one or more of search algorithms, object recognition, parsing, statistical reasoning, state machines, and several other things. None of these on their own would be considered AI, but research into them is crucial to developing AI.


Ah, for a second there I thought Cyc had delivered something truly interesting.


Actually, it would be interesting and useful if an automaton displayed "common sense" _merely_ in the programming world!

"Hey, make me a web app with twitbug authentication and google payment integration."

Wouldn't you like it if it found you aren't registered as a developer, applied to twitbug, waited for approval, followed up once approval was received (or whatever that process turns out to be) by creating a stub application, created an Amazon account on your behalf, set up a server, and launched a proto web app using what it knows to be your favourite setup (or one it picked for itself based on what others think is a good idea), and a while later went "TADA! Here you go!"?


And he didn't even have to use sudo!


Sounds like a privilege escalation attack vector to me!


The moment we make a robot capable of reliably fetching a sandwich it will no longer be ethical for us to order it around (at least, not without paying a salary).


Why would the robot need a salary? So it can afford robot entertainment on the weekends?


It wouldn't necessarily be a money-salary.

But to be precise: at that point, it will no longer be ethical for us to order it around. We will have to resort to asking politely or to bargaining, such as giving it some form of salary in exchange for fetching sandwiches.


That is rich. Robots will be tools, nothing more. Until a bug in their hardware or software causes them to rebel, of course.


Tools can't reliably fetch sandwiches:

'My usual.' or 'ham salad, easy on the mayo.'

Then the robot has to cross the street, queue at the sandwich store, discover that they're fresh out of mayo, or whatever, and has to decide what to do next. Should it contact you for new instructions, take the initiative and visit a different place, buy ingredients and prepare a snack personally, or any other of countless possibilities? Any of which may involve handling new, unanticipated phenomena.

Thus fetching a sandwich, reliably, is a creative task, and any entity capable of doing it has achieved human status.


> Thus fetching a sandwich, reliably, is a creative task, and any entity capable of doing it has achieved human status.

Robots in Futurama are near human status. Real robots (including the one you described) are not. Fetching a sandwich as a robot is an algorithmic task.


The programmer will write an algorithm capable of generating a fitness function for each action based on the robot's current goal(s). The robot will then decide what to do by picking the action with the best fitness value.
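As a toy sketch of that idea (all names and numbers are made up, nothing from the actual research):

    # Score each candidate action against the current goal, then act greedily.
    def fitness(action, goal):
        # Toy scoring: how many of the goal's requirements this action achieves.
        return len(action["effects"] & goal["requires"])

    goal = {"requires": {"have_sandwich"}}
    candidates = [
        {"name": "go_to_fridge", "effects": {"at_fridge"}},
        {"name": "buy_sandwich", "effects": {"have_sandwich", "spent_money"}},
    ]
    best = max(candidates, key=lambda a: fitness(a, goal))
    print(best["name"])  # -> buy_sandwich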


Just like I do when I want a sandwich.


Except you wrote your own fitness function, it wasn't written for you.


Fetching a sandwich means nothing. Deciding to fetch a sandwich because I'm hungry and seem stressed, and the robot likes me... but not getting one for my coworker, because he's a jerk - then it may be time for compensation. The robot still isn't likely to need a salary any more than a horse needs one, however.


First time I've heard etiquette equated with ethics.


What if the robot really enjoys fetching sandwiches? Sure, that may be because we programmed it to enjoy (experience a high fitness value for) obeying orders, but once we've created such a being, do we not have an ethical obligation to order it around, so that it can feel enjoyment?

Why should we go out of our way to build robots whose ideas of what is pleasurable are similar to ours?


To me, "ordering it around" implies the threat of negative consequences if it refuses. If it really enjoys fetching sandwiches, there's no reason we shouldn't ask it to; but if it decides for whatever reason that it doesn't want to fetch sandwiches even though it really enjoys doing so, we can't say "fetch me that sandwich or I'll activate your disutility circuits".


What if you programmed a human to enjoy finding sandwiches?

Well, now that I think of it, we are all in a way programmed to enjoy certain things, through classical conditioning.


Ugh, singularityhub.

Willow Garage does, in fact, have a blog. And they did, in fact, post about this four days ago: http://www.willowgarage.com/blog/2011/10/04/jsk-and-tum-teac...

Did singularityhub mention the original source? No, of course not.


Willow Garage isn't the original source. They made the platform, but the developers of the "common sense" system, the owners of this robot, and the people who actually made the video, are from the University of Tokyo JSK Lab and Technische Universität München. If you read the link you posted, you would see that Willow Garage properly credits them as the original source.

Edit: Also, the fourth word in TFA is a link to Willow Garage. Your entire post is wrong.


Uh. The difference between writing a 'probability map' that says fridges and Subways are good places to find sandwiches, and telling the robot to go to the fridge and to Subway, is not so clear to me. Having the robot modify that probability map with no guidance--now that would be cool.
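For what it's worth, a self-updating map could start as simply as keeping success/failure counts per location (a toy sketch, not what the researchers do; the hard part is still getting candidates like the new restaurant into the map at all):

    from collections import defaultdict

    # [successes, failures] per location, starting from a Laplace (add-one) prior.
    counts = defaultdict(lambda: [1, 1])

    def record(location, found_sandwich):
        counts[location][0 if found_sandwich else 1] += 1

    def p_sandwich(location):
        s, f = counts[location]
        return s / (s + f)

    record("fridge", True)
    record("new restaurant", False)
    print(p_sandwich("fridge"))          # ~0.67: rises with observed successes
    print(p_sandwich("new restaurant"))  # ~0.33: falls with failures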



