Latest prototype self-driving vehicles cruising around Mountain View (plus.google.com)
70 points by msoad on June 25, 2015 | 62 comments



Was discussing self-driving cars with a few colleagues on a drive the other day, and a question came up:

Does anyone have a sense of how the cars handle situations where traffic is being directed by a human, with hand signals, etc.? I.e., a cop standing in the middle of the road, waving cars through an intersection where there's been an accident?

We pondered a few scenarios -- are they reading the hand signals? Are they judging other cars' movement? What if it's not a cop but a crazy random person who jumps into traffic and decides to play cop? We humans are remarkably good at determining "soft" things like "that guy looks crazy - let's get out of here" vs "that's a cop, I better pay attention", etc. The rabbit hole gets quite deep here...


> What if it's not a cop but a crazy random person who jumps into traffic and decides to play cop?

Most jurisdictions tell you to obey any random person directing traffic.

That's important, because there are good reasons why you'd want random passersby to be able to direct traffic and be obeyed (e.g., a big accident that just happened but is hard to see). And obeying traffic directions, however willing you are, only really works when you can assume that other people on the road follow them as well.

Of course, if after the fact it turns out that the random guy didn't have a good reason for directing traffic, the law can get him. But you'd still have to obey him in the first place.

(A bit like a commander giving (bad) orders in the military.)


> And obeying directions for traffic, even if you desperately want to obey, only really works when you can assume that other people on the road follow them as well.

This is the key. In the end, the concurrent intelligent-agent-based system that is driving works because all the agents have a mutual goal (to have traffic proceed smoothly so they aren't stuck in it), and so will cooperate to fulfill that goal even without any central authority dictating behavior.

"Traffic" as a whole works as well as it does because every node in the system can intelligently compensate for the failures of the nodes around it, and can decide to cooperate—in a span of seconds—with another driver's compensation strategy.

I would guess that the majority of the Google automatic chauffeur's high-level behavior isn't explicit, but rather "arises" from implicit subroutines about following the flow of traffic. For example, if there's an accident in the middle of a six-lane road with roadside parking, the car could be programmed to notice cones and avoid them, and to notice people directing traffic and follow them—but then it might get stuck, because there's no single "lane" open any more. Or it could just be programmed to notice that up ahead the lanes are merging together and driving half-way between the middle- and right-hand lane—and so, to follow the flow of traffic, it should merge too.

In other words, if all the automatic chauffeur's friends jumped off a bridge, it would likely be a pretty effective optimization for it to jump off too.

(Though I'd hope it was taking the hint mostly from regular cars; a pack of driverless cars following one-another's lead could turn out badly if the one at the head of the pack is malfunctioning.)
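The "follow the flow" heuristic described above could be sketched roughly like this. This is purely illustrative (the function and parameter names are invented, not anything from Google's system): instead of reasoning only about painted lanes, the car blends its nominal lane position with where the cars ahead of it actually are.

```python
# Hypothetical sketch of "follow the flow of traffic": infer the de facto
# lane from observed cars ahead instead of trusting painted lanes alone.
# All names and the 0.7 weighting are invented for illustration.

from statistics import median

def flow_target_offset(ahead_car_offsets, lane_center=0.0, flow_weight=0.7):
    """Blend the nominal lane center with the median lateral offset (in
    meters) of cars observed ahead. If traffic has shifted, e.g. merging
    around an accident, the median pulls our target toward the real lane."""
    if not ahead_car_offsets:
        return lane_center  # no traffic to imitate; hold the painted lane
    observed = median(ahead_car_offsets)
    return (1 - flow_weight) * lane_center + flow_weight * observed

# Cars ahead have drifted ~1.5 m right to squeeze past an obstruction,
# so our target offset moves most of the way toward theirs:
print(flow_target_offset([1.4, 1.6, 1.5]))
```

Using the median rather than the mean is one way to keep a single malfunctioning (or bridge-jumping) lead car from dragging the whole estimate with it.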


> Most jurisdictions tell you to obey any random person directing traffic.

I watched the most amazing form of this in action after Hurricane Andrew. You'd pull up to an intersection and a random person would be directing traffic until (it turned out) the car they were a passenger in got through. Then they'd go hop in their car, and another random passenger from another car would hop out and direct traffic for a few minutes. Rinse, wash, repeat.


I actually ran into a simple example of a challenge like this. The power went out in Mountain View a few weeks ago and a bunch of stoplights stopped working. I pulled up behind a self-driving car that had stopped at the disabled light. Instead of treating the light like a stop sign, the car remained stopped. After five or so seconds stopped at the light (there were no other cars at the intersection), I honked. I assume someone took manual control, because the car started to move right after.

While this particular situation could probably be addressed, it's a great example of the challenges these cars may come across.


> Instead of treating the light like a stop sign, the car remained stopped. After five or so seconds stopped at the light (there were no other cars at the intersection), I honked.

There are many normal humans who don't know what to do when the traffic lights are in a non-normal state.

The difference is that once a self-driving car is programmed for what to do, it won't get it wrong again.

Self-driving cars will get continually better and will eventually surpass human drivers simply because human drivers will never improve.


It handled it completely correctly. Lights being out is an exceptional situation where the usual rules of the road are suspended.

Sometimes it means "treat it as a stop sign", but other times it means "turn the car around because ignoring the lights on this particular road would be positively suicidal". e.g. a level crossing.

Computers have no conception of danger, and so they don't get to decide what's dangerous and what's not.


Actually, in California, that isn't true. An intersection controlled by a traffic light converts to a 4-way stop when the traffic light is out.

However, even in your level-crossing example, humans fail regularly. People in Texas regularly run into trains at level crossings because a long stretch of flatbed cars is easy to miss at dusk or at night.
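The California rule mentioned above (a dark signal becomes an all-way stop) is simple enough to sketch as a decision rule. This is a toy illustration, not Google's actual logic; the 2-second dwell threshold and all names are invented:

```python
# Toy sketch of the dark-signal rule: an unpowered traffic light is
# treated as an all-way stop rather than waited on forever.
# Thresholds and names are invented for illustration.

def action_at_signal(signal_state, stopped_duration_s, intersection_clear):
    """Decide what to do at a traffic light.
    signal_state: 'red', 'yellow', 'green', or 'dark' (power out)."""
    if signal_state == "green":
        return "proceed"
    if signal_state == "dark":
        # Treat as a stop sign: come to a full stop, then go when the
        # intersection is clear, instead of waiting for a green that
        # will never come.
        if stopped_duration_s >= 2.0 and intersection_clear:
            return "proceed"
        return "stop"
    return "stop"  # red/yellow: conservative default

print(action_at_signal("dark", 0.5, True))   # still completing the stop
print(action_at_signal("dark", 3.0, True))   # proceed, like a stop sign
```

The hard part, of course, is not this rule but reliably recognizing "dark" from sensor data, and judging "intersection_clear" when other drivers are improvising.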


Human drivers follow a bathtub curve. Young drivers (particularly males) are very dangerous in the beginning (so much so that their insurance rates are much higher), and drivers become more dangerous again as they get older and their reflexes, vision, and situational awareness decline. I think one of the best initial uses of self-driving cars will be for those populations.


That's why they get tested for years before being sold; so almost every possible situation comes up at least once and can be included in the system. I bet that driver filed a note about what happened and it was fixed soon.


Woah.

What do you think actually went down, there? "Hey, of course, let's just implement the StopHandler::HumanStopSign() subroutine, thanks for the report!"?


1. Find a way to recognize when this situation occurs. Test that.

2. Decide what the optimal behavior should be for the car.

3. Program the car to do (2) when it's in that situation.
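Those three steps amount to adding a new situation label plus a policy for it. A toy sketch, with every name invented for illustration:

```python
# Step 1 (recognize) and steps 2-3 (decide on and encode a policy) for
# the dead-stoplight case, reduced to their simplest possible form.
# Names and sensor fields are made up for illustration.

POLICIES = {
    "normal_light": "obey_signal",
    "dark_light": "treat_as_all_way_stop",   # the newly added case
}

def recognize(sensors):
    # Step 1: detect the situation, e.g. a signal head with no lit lamp.
    if sensors.get("signal_visible") and not sensors.get("signal_lit"):
        return "dark_light"
    return "normal_light"

def decide(situation):
    # Steps 2-3: look up the behavior chosen for that situation.
    return POLICIES[situation]

print(decide(recognize({"signal_visible": True, "signal_lit": False})))
```

The table lookup is trivially easy; step 1, recognizing an unlit signal from camera and lidar data under varying light and weather, is where the real engineering lives.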


This is Google we're talking about. I think they will spend time to generalize this situation as much as possible.


I hope it's not the same people that wrote the VAT handling code on Adwords.


> I assume someone took manual control because it started to move right after.

This is why I believe self-driving cars will always need a human to take control whenever unusual situations arise, just as planes with autopilot still need a pilot's attention.


Airplanes on autopilot usually have many minutes and several hundred miles worth of buffer before something bad happens.

Cars that say "I don't know what to do, Jesus take the wheel!" might only give drivers a fraction of a second.

At some point quantitative differences become qualitative ones.
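A back-of-envelope version of that buffer argument (the speeds and times below are rough, assumed figures, not data from any incident):

```python
# How far a vehicle travels during a human-takeover handoff.
# The speeds and handoff times are rough illustrative assumptions.

def distance_during_handoff(speed_m_s, handoff_time_s):
    return speed_m_s * handoff_time_s

cruise_plane = 250.0   # m/s, roughly a jet at cruise
city_car = 13.4        # m/s, about 30 mph

# An autopilot alert may give a pilot minutes of warning; a car may give
# ~1.5 s (a typical surprised-driver reaction time) before impact.
print(distance_during_handoff(cruise_plane, 120))  # tens of kilometers
print(distance_during_handoff(city_car, 1.5))      # a couple of car lengths
```

Two minutes at cruise buys a pilot about 30 km of empty airspace to sort things out in; 1.5 seconds buys a driver roughly 20 m, which is the quantitative-becomes-qualitative point.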


But there are autonomous planes known as drones.


Almost nothing "in production" with the military is autonomous for the entire flight; most are just piloted from the ground. Those that do have some autonomous operation still require manual takeover for critical situations, e.g. landing. And we don't trust them with human passengers.

Drones are essentially treated as disposable, because they are. We just lose them sometimes. This is not a state of affairs you can have with a car containing multiple humans navigating pedestrian, cyclist, and construction-laden streets.

There's also a lot less that happens during (most of) flight. Surroundings are relatively static, other planes in the environment actually broadcast what they are and what they're doing, so you don't have to discern it from visual noise. There are no stop signs, no intersections, very little traffic, etc.


> Almost nothing "in production" with the military is autonomous for the entire flight, just piloted from the ground.

We are awfully close:

https://en.wikipedia.org/wiki/Northrop_Grumman_X-47B

http://www.wired.com/2013/07/navy-drone/

There is definitely a point where drones are going to be better at takeoff and landing than human pilots; at that point, safety will become the issue (why let humans do it when the drone can do it more safely?).

The same will happen with cars. It is only a question of whether that happens in 2020 or 2030.


Maybe we're good at differentiating humans with authority vs. humans without. But there's a much simpler social-engineering tactic for rerouting traffic that both humans and autonomous cars will fall for. I'll summarize it like this: road pylons are $8 at Home Depot.


What's more, www.roadtrafficsigns.com will sell MUTCD-compliant traffic signs to... well... anyone.


The car could have a sensor platform that a remote driver could hook into and control over the air. In the 1% of cases the car gives up and can't handle, a human can pop in and guide it out of the situation before returning control to the car.
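The handoff described above could be structured as a simple control-authority rule: the car drives itself until its planner's confidence drops, then holds a safe state and waits for a remote operator. This is an entirely hypothetical sketch; the confidence threshold and all names are invented:

```python
# Hypothetical remote-takeover logic: autonomous by default, freeze in a
# safe state when confidence drops, defer to a remote operator if one is
# connected. Threshold and names are invented for illustration.

def drive_step(planner_confidence, operator_command=None):
    """Return (authority, action) for this control tick."""
    if operator_command is not None:
        return ("remote", operator_command)      # a human is in the loop
    if planner_confidence < 0.5:
        return ("safe_stop", "hold_position")    # give up safely, phone home
    return ("autonomous", "follow_plan")

print(drive_step(0.9))                     # normal autonomous driving
print(drive_step(0.2))                     # stuck: stop and wait for help
print(drive_step(0.2, "creep_forward"))    # operator guides it out
```

Note the ordering: a connected operator always wins, and a confused car defaults to stopping rather than guessing, which matches the conservative behavior people report seeing.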


It looks like Google will be posting monthly reports of their self-driving cars from now on. Looks very interesting. See: http://www.google.com/selfdrivingcar/reports/


Is it just me, or does this incident seem like it was made worse by the autonomous mode?

A Google Lexus model AV was traveling northbound on El Camino Real in autonomous mode when another vehicle traveling westbound on View Street failed to come to a stop at the stop sign at the intersection of El Camino and View Street. The other vehicle rolled through the stop sign and struck the right rear quarter panel and right rear wheel of the Google AV. Prior to the collision, the Google AV’s autonomous technology began applying the brakes in response to its detection of the other vehicle’s speed and trajectory. Just before the collision, the driver of the Google AV disengaged autonomous mode and took manual control of the vehicle in response to the application of the brakes by the Google AV’s autonomous technology.

It looks like the Google vehicle (GV) was traveling on El Camino (I'm assuming it had no stop sign); the computer saw the other vehicle approaching what should have been a stop and hit the brakes. The driver took over (right before the collision) to stop the brakes from being applied, and the other car hit the rear of the GV. Without the computer controlling the speed, I wonder if the GV would have already cleared the intersection?

Then again, depending when each event happened, maybe if the driver didn't take over, the GV wouldn't have been hit? Hard to tell.


How about: everything would have been all right if the other vehicle had stopped at the stop sign?


It is interesting that most of the accidents were other vehicles rear-ending (or nearly rear-ending) their cars.


One thing I've noticed seeing the Lexus cars drive around Mountain View is that they're incredibly cautious.

I wonder what will happen once they become more commonplace, and human drivers realize they can be incredibly aggressive around the self-driving cars - cut them off, etc - and the self driving cars will happily accommodate the human driver, except with better reaction time and precision than a human driver could ever have.


It'll probably make a lot more sense when you have a majority of cars that can act as a swarm, rather than having to tolerate irrational and unpredictable human drivers. I can imagine there will eventually be some sort of standard for near-field communication between autonomous drivers that can give hints about behavior rather than needing to infer it. It doesn't even need to be trusted or authenticated; a driver can just make more detailed signals than the human-readable yellow and red lights we currently use. Even just being able to say that you'll be turning in 1 km would make things significantly more efficient to navigate around. It extends even further if you want vehicles like ambulances and fire trucks to have "sudo" powers: autonomous drivers can get themselves into a formation to allow the vehicle through before it is even visible over the horizon.
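The untrusted "hint" broadcast above might look something like this. The message format here is entirely made up for illustration; no such standard exists:

```python
# Sketch of an advisory vehicle-to-vehicle "hint" message: a richer turn
# signal that others may use but need not trust. The format is invented.

import json

def make_hint(vehicle_id, intent, distance_m):
    return json.dumps({
        "vehicle": vehicle_id,
        "intent": intent,          # e.g. "turn_right", "yield_to_emergency"
        "in_meters": distance_m,   # how far ahead the maneuver happens
    })

def read_hint(raw):
    msg = json.loads(raw)
    # Hints are advisory and unauthenticated: sanity-check before use,
    # and fall back to ordinary perception if the message is nonsense.
    if msg["intent"] not in {"turn_left", "turn_right", "yield_to_emergency"}:
        return None
    return msg

hint = make_hint("av-42", "turn_right", 1000)  # "turning in 1 km"
print(read_hint(hint)["intent"])               # turn_right
```

Because the hints are advisory, a malicious or broken broadcaster degrades you back to today's behavior (infer everything visually) rather than letting someone steer the swarm.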


>human drivers realize they can be incredibly aggressive around the self-driving cars - cut them off, etc - and the self driving cars will happily accommodate the human driver

well, as my driving school instructor was saying - "Do yield to the moron".


Yeah, case in point, I was jogging on Cowper st (wide, 2-lane residential) Tuesday morning and noticed a self-drive Lexus had come to a stop behind a Palo Alto garbage truck that was making its usual stop-go-stop way down the street. After several long seconds, another car came up behind and stopped. More seconds. Second car pulled out and passed both Google and garbage truck. I jogged on and never saw whether the Google car turned off, or continued to tailgate the truck -- but it didn't pass me.


My assumption is that this is intended to make a good first impression with the public - even if making the cars more risk-taking would result in a lower accident rate, an accident where the driverless car is technically "at-fault" can be very damaging to the overall project. Googlers working on the driverless car have explicitly talked about cautiousness as a dial that they've turned all the way up - not as an inherent requirement of the technology, but as a conscious decision.


Probably the cars will hit them. If you look at the disclosed accidents list that seems to be not entirely uncommon (even if it isn't the robot car's "fault")

That said, an algorithm that learns how to prevent people from cutting it off would be interesting indeed.


Which cars will hit which other cars? If a driverless car can avoid an accident, especially by stopping, it will.


Ahh, the halting problem :)

I think it will be a long while before we have fully autonomous cars which drive better than humans even in the edge cases.


It depends on what you mean by better. Dealing with an unpowered stoplight by stopping is one thing; actually colliding with something is another. It's already becoming common for new cars to have features that keep collisions from happening.


Not just other cars, but pedestrians too. It's common around here to see people trying to jam themselves into closing subway doors, often getting limbs or bags stuck in them. It's also not uncommon to see people try to help them by attempting to hold open or pry open the doors. One can imagine what some people will act like when they start to believe that cars will stop whenever they run out in front of them.


Well if you drive around any college campus you'll quickly find the results of that.


A more pedestrian friendly equilibrium in the game of chicken between people and cars?


"The prototypes’ speed is capped at a neighborhood-friendly 25mph"

What are the speed limits of the roads the car will be driving on?


I really wish all cars would drive 25 mph on residential streets. The big limiter on car travel time is timing of traffic signals and overall road capacity, not top speed.

It’s very common around me for cars to be driving on semi-arterial residential streets (with an official speed limit of 25) at 55+ mph at 2 AM, which is both noisy and potentially unsafe: there’s low visibility at night, and at that speed it would be hard to maneuver out of the way if something moved into the road.

If we had smaller cheaper lighter-weight cars which maxed at 25 mph on residential streets, plus maybe some of those 15–20 mph electric bikes which are everywhere in China, plus fast, frequent, and reliable mass transit, there’d be little negative impact on routine chores and commuting, but huge improvements in pedestrian and cyclist safety.
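The safety claim above has simple physics behind it: stopping distance grows with the square of speed, so 55 mph is far more than twice as bad as 25 mph. A rough calculation (the reaction time and braking deceleration are typical assumed values, not measurements):

```python
# Rough stopping-distance comparison. Assumes a 1.5 s driver reaction
# time and ~7 m/s^2 braking deceleration (typical dry-pavement figures).

def stopping_distance_m(speed_mph, reaction_s=1.5, decel_m_s2=7.0):
    v = speed_mph * 0.44704                   # mph -> m/s
    return v * reaction_s + v * v / (2 * decel_m_s2)

print(round(stopping_distance_m(25), 1))   # ~26 m
print(round(stopping_distance_m(55), 1))   # ~80 m, over 3x as far
```

At 25 mph the car stops in roughly 26 m; at 55 mph it needs about 80 m, most of which is covered before the brakes even engage. That gap is the pedestrian-safety argument in two numbers.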


Right now our infrastructure and zoning in the US is all optimized for upper-middle-class 25–60 year-old able-bodied adults, and really sucks for children, the elderly, sick or disabled people, the mentally ill, the poor, foreign tourists, etc.

Even among healthy young professionals, we optimize for everyone having a long commute, living spread apart, strictly separating shops from housing, etc., such that basic living pretty much requires hours of driving time every day. In better designed cities which are a bit more compact and have arranged the most common destinations more centrally, car trips are less necessary, on average are shorter, and smaller slower cars would be more practical.

I’d love it if I could drive around town in an electric golf-cart-type vehicle that only cost a few thousand dollars new and topped out at 15 mph (perhaps slightly larger and safer than a golf cart, but that general idea), without getting honked at and run over by angry dudes in SUVs. That would be better than a bike for carrying my groceries or driving a few miles in the rain, but much cheaper and more convenient for most purposes than a full-sized car.


> I’d love it if I could drive around town in an electric golf-cart-type vehicle that only cost a few thousand dollars new and topped out at 15 mph (perhaps slightly larger and safer than a golf cart, but that general idea),

They're called NEVs, and you can pick them up for a few thousand bucks. Electric, 25mph, cheap to insure, and fun.


The key part here is “without getting honked at and run over by angry dudes in SUVs”.


Just get one and use it and don't let the arrogant bastards get to you.

After a couple of near-fatal bicycle accidents with asshole cars, I got a front and rear camera for my bicycle and always take up the entire lane I'm entitled to. I'll pull off to the parking lane and let people pass when there's space and the guy behind me is being civil. But I just completely ignore the assholes who honk and yell, even if there's ample space in the parking lane.

And I buy myself a beer for every honk or yell I get, so I can stay pretty zen knowing I'm going to have a good Friday night, instead of getting angry at SUV dude. And, you'd be surprised how rarely I get drunk -- most people are pretty decent :-)

(For context -- I ride on 25 mph streets, my cruising speed on my normal routes is 20-25 mph, and my max speed is typically around 27. I accelerate pretty aggressively for a cyclist, and only moderately slower than I think is safe for a car on these roads, which have lots of houses and pedestrian traffic. The last guy who passed me immediately slammed his brakes when he realized I was already ~10 over and he was now doing 35 in a 15 mph school zone... idiot.)


I was going to ask the same - if that road in the picture is representative then surely 25 is way under the limit there?


This looks to me like the bridge over the Caltrain track on San Antonio road: https://www.google.com/maps/@37.410117,-122.107073,3a,15y,20...

The speed limit there is 35 mph.


Which I assume means people are actually going 40 mph.


40-45, yes. I don't think drivers are going to like having all of these slow-moving vehicles around, but, it'll be good training for the cars to see how pissed off human drivers are constantly attempting to pass.


So they won't be going with the flow of traffic?


No. And it's worth pointing out that the Lexus self-driving cars do drive with the flow, i.e. they speed like humans do.


So, why is this one capped and why are they trying to irritate other drivers?


One way that Google can launch its driverless Uber is by using remote drivers that can take over once the program can't decide what to do...


It depends on the legal framework.


I don't mean to troll, but I really hope that that is not the final look of the car. Looks like one of those toy red and yellow cars [0]. Such an amazing engineering achievement deserves a more serious, futuristic look.

[0] https://s-media-cache-ak0.pinimg.com/736x/2c/7b/4b/2c7b4b50b...


This was something that Elon Musk really got right with Tesla, he made it a point for his cars to have sex appeal. His model names are even S3X.

On the other hand I think this little car is probably going to be geared to the elderly or the disabled who would not be able to drive a regular car, so in that case sex appeal might not matter that much.


I think "cute and harmless" is actually the right approach for these cars, at least while they are in development. It is harder to project fears of unsafe, dangerous, menacing AI on something that looks likes a kid's toy.

Once autonomous cars are for sale and established in the market then I totally agree with you.


Reminds me of this: New Yorkers will help a cute but hapless robot get where it's trying to go: http://www.tweenbots.com/


Looks like the Howard.


> the same fleet that has self-driven over 1 million miles since we started the project

Of which only a fraction was in self driving mode.

They spent 1 million miles learning the model.


You are mistaken: "We’ve self-driven over 1 million miles" http://www.google.com/selfdrivingcar/where/

(The total miles driven in both manual and self-driving mode is closer to 2 million miles I believe.)


Ok, there was an article previously that I've misinterpreted. From their monthly report:

> Autonomous mode: 1,011,338 miles

> Manual mode: 796,250 miles


I asked why they don't include the miles their algorithms have trained on using simulated driving, and one of their safety drivers emailed me back and said they'd consider it.



