
Actually the video doesn't show the car navigating the drive-through itself. It shows the car driving up to the speaker and stopping, then it cuts to a shot inside the restaurant where he's ordering, then it cuts to the car leaving the parking lot.

As a robotics junkie, all my red flags went up at that bit of creative editing.

Even if the car spent 10 minutes working out the close-quarters navigation, showing it would be more 'real' than the editing, which makes me suspicious that the 'side seat' driver actually did that tricky bit of navigating.

Not hating on the Goog here; I'm just really looking forward to the era of self-driving vehicles and want to be realistic about their capabilities.




There's quite a bit of footage of the self-driving car making its way through congested areas with people and obstacles. It was more competent than many of the minivan drivers in my neck of the woods. :)


Got a link to that footage?


http://www.youtube.com/watch?v=YXylqtEQ0tk

9:00 has a good example of the car recognizing the state of a traffic light, other cars, and pedestrians at an intersection, but there are lots of cool clips starting around 3:40. The whole talk is awesome, and it's a presentation for an actual technical audience, too.


That is a great video; my observation is a bit different.

The process by which the car figures out what to do to achieve a goal requires a system for 'working backwards' from where it wants to be to where it is. This is called 'inverse kinematics.' The more constraints you put on the path planner, the harder the planning becomes; in fact, my experience with my own robots is that the difficulty goes up super-linearly at best and exponentially at worst.
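To make that growth concrete, here is a toy Python sketch (nothing to do with Google's actual planner; the walls, gaps, and numbers are all made up for illustration). It rejection-samples a waypoint for each of k walls, where each wall has a narrow gap the path has to thread. Every extra wall is one more constraint, and the expected number of samples grows roughly like (height/gap)^k, i.e. exponentially in the number of constraints:

    # Toy illustration only (not Google's planner); all names and numbers
    # here are invented. Rejection-sample a waypoint per wall until every
    # waypoint lands inside that wall's gap.
    import random

    def samples_until_feasible(num_walls, gap, height=10.0, seed=0, limit=10_000_000):
        """Return how many random waypoint sets were tried before one threaded
        every gap (each gap is centered at height/2 and is `gap` wide)."""
        random.seed(seed)
        for n in range(1, limit + 1):
            ys = [random.uniform(0.0, height) for _ in range(num_walls)]
            if all(abs(y - height / 2.0) <= gap / 2.0 for y in ys):
                return n
        return None

    # More constraints (more walls) at a fixed gap width:
    for k in (1, 2, 3, 4):
        print(k, "wall(s), gap 1.0:", samples_until_feasible(k, 1.0), "samples")

    # Tighter constraints (narrower gap) with a single wall:
    for gap in (2.0, 1.0, 0.5, 0.25):
        print("1 wall, gap", gap, ":", samples_until_feasible(1, gap), "samples")

A real planner is obviously far smarter than rejection sampling, but the same flavor of blow-up is why tight, fixed corridors are disproportionately hard compared to open road driving.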

In the situations in the video linked above, the vehicle can often simply wait, and the path options will keep changing until there is one it can execute. But in a drive-through there is a fixed route through tight constrictions, where non-organic (sensor) visibility is complex at best (lots of reflections and structures) and opaque at worst.

Cars that can parallel park themselves show that the problem can be solved for a given set of constraints (I actually think parallel parking is the easier case), but the generalized solution is at least an order of magnitude harder.

Now please don't get me wrong: I have deep and wide respect for what these guys have accomplished. I want them to be successful. And solving the case of navigating into and through a drive-up window (restaurant or bank, for that matter) is a solid advancement in the area of self-driving transport. And making a video to show it off is a cool thing too.

Except they didn't show it.

And that is what bugged me. There is lots of video showing the car driving through traffic and, as magicalist's link shows, driving through crowded streets, and now we get a video about 'going through the drive-thru' that doesn't show the car navigating itself through the drive-thru lane. We are left to imagine it.

Unfortunately for Google, this is a well-known technique that film makers use for a shot that is either too expensive or impractical to shoot. They set up the scene, show the characters starting toward an action, then a quick shot of them in the middle of that action, and then a shot of them exiting the action. They leave it to our fertile imaginations to 'fill in the rest.' And it is a great storytelling technique.

But if you're talking about a real self-driving car, and you say it can navigate these very difficult driving situations (anyone who does robotics will immediately go "Whoa, that is a tough challenge"), and then you use the film maker's trick of not actually showing anything, well, it's kinda like a research paper that doesn't include any supporting data. It looks like a publicity stunt, as if Google is whoring out the research for some sort of 'feel good' brand buffing. I don't think that is where they intended to go with that spot.



