
> “All rides in the program will have an autonomous specialist on board for now.” This tells me that we’re still a long way off.

Did you expect something different? I can't really see a boardroom writing a roadmap that goes straight from

  test rides (no passengers) with a backup driver onboard
to

  actual rides with passengers - no backup driver onboard
with no in-between steps.



Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

Back in 1999, the Chinese government announced that airline execs would be airborne at the changeover, as reassurance that aircraft were safe from Y2K. Like them or hate them, the incentive logic was sound.

https://www.wired.com/1999/01/y2k-in-china-caught-in-midair/

https://www.cbc.ca/news/science/chinese-airlines-won-t-be-bi...


I think of that as a parachute-rigger solution (https://en.wikipedia.org/wiki/Parachute_rigger).

Historically, people packing parachutes could be randomly selected to jump with the parachute they had packed.


It remains true in the military. Refusal to jump on a chute you have packed will cause you to lose your rigger qualifications.


> Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

How do you know board members aren't already using the Waymo product heavily? It still doesn't mean they can go straight from board members using Waymo without backup drivers to arbitrary customers using Waymo without backup drivers.


Because if they did, then they would surely tout that fact at every opportunity. Musk doesn't drive a Ferrari to work. He drives a Tesla to promote his company's product line.


Elon takes a private jet to work.


Possibly to drive between their various houses?


> Possibly to drive between their various houses?

I'm guessing this is the old joke about Eric Schmidt?


"I save money by using nest thermostats in my various houses"


In Phoenix, people are being driven without a safety driver. They are doing these in-between steps.


Maybe open up Waymo rides to people with Alphabet shares? Then the owners and customers are the same group. Unfortunately multiple spouses aren't really allowed in America, or you could limit ridership to spouses.


The acid test will be (4) Board members use product on their children.


Followed by (5) board members claiming legal bills for child custody disputes after being caught leaving kids unsupervised in the custody of a 3,000 lb robot.


Arguably all modern vehicles are incredibly heavy robots. AVs are just supposed to be better robots.


No more so than a bicycle is a 20lb robot or an airplane is a 20,000lb robot. Clearly a self-driving vehicle is a different paradigm.


"The average car has 30 to 50 different computers, and high-end cars have as many as 100, and they're accompanied by 60 to 100 different electronic sensors." [1]

The median modern bicycle has 0 computers and sensors.

[1] https://www.ceinetwork.com/cei-blog/auto-computers-means-com...


Skin in the game. Nassim Taleb would agree with this measure!


Always makes me happy to see some good philosophy poking its head up. Great choice too, since the AV problem is fundamentally about Black Swans.


Waymo is already doing test rides with neither passengers nor backup drivers in CA, so they wouldn't be jumping from no passengers plus safety drivers to paid passengers without safety drivers if they offered paid, fully driverless rides.


The book Halting State by Charlie Stross (2007) [1] had an interesting self driving car model, where it was autonomous on simple roads like highways/motorways, and a human driver took over remotely for more complex city streets.

Of course the book showed some failure modes for that, but I wonder: if network coverage and latency, as well as "backup driver response time", could be considered good enough, perhaps this sort of model could have an acceptable risk trade-off.

[1] https://en.wikipedia.org/wiki/Halting_State
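For a rough sense of scale on the latency question (the numbers below are illustrative assumptions, not from the book or any real deployment): the distance the car travels "blind" is just speed multiplied by the network round trip plus the remote driver's reaction time.

  speed_kmh = 50        # a city street
  latency_s = 0.2       # assumed network round trip
  reaction_s = 1.0      # assumed remote operator reaction time

  speed_ms = speed_kmh / 3.6
  blind_m = speed_ms * (latency_s + reaction_s)
  print(f"{blind_m:.1f} m travelled before a remote correction takes effect")
  # ~16.7 m at 50 km/h; several car lengths more at motorway speeds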


For the computer it doesn't make much difference if there's a passenger or if there isn't.


For a company, having paying customers does matter a lot.

Having customers paying for the R&D will help make it sustainable.


I suspect the revenue from passengers in this case looks like a rounding error. But the PR and feedback from early adopters is very valuable.


Yeah, you can put the people with the good feedback in your ad videos and ignore the ones who went through the windshield.


I wonder where "a country road with no lanes which barely fits 1.5 cars, in winter, in the Czech Republic" is on your scale... Something like this, just imagine the snowdrifts around it https://www.google.com/maps/@49.080269,16.4569252,3a,75y,307...


Now add the completely blind switchback turns, where your "visibility" into whether another car is coming comes from a convex mirror nailed to a tree or post at the apex of the corner - if it hasn't fallen off or been knocked crooked...

basically all of Italy


Or an ambulance going in the opposite direction (because that's the only available choice) on a boulevard in a busy capital city like Bucharest. I saw that a couple of hours ago; the ambulance met a taxi which was going the right way, but of course the taxi had to stop and find a way for the ambulance to pass (by partly going onto the sidewalk). I said to myself that unless we get to AGI there's no way for an "autonomous" car to handle that situation correctly.


You don't even need to go that far. The other day I saw an ambulance going down Burrard Street in Vancouver, BC without lights or sirens; then, I guess, a call came in, and it put on both and turned around. It's a six-lane street where normal cars aren't allowed to just turn around. It was handled really well by everyone involved, mind you, it wasn't unsafe, but I doubt a computer could've handled it as well as the drivers did.


Very complex-looking behavior sometimes comes from very simple, easy-to-implement principles, like bird flocking behavior: https://en.wikipedia.org/wiki/Flocking_(behavior)#Rules

I don't believe people are using their full AGI when driving (and the full "AGI" may well turn out to be a set of basic pattern-matching capabilities which we haven't discovered yet). After decades of driving the behavior is pretty automatic, and when presented with a complex situation, following a simple rule, like just braking, is frequently the best response, or close to it.
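A minimal sketch of the flocking idea from the linked page: three simple per-bird rules (separation, alignment, cohesion), each a couple of lines, produce the complex-looking group behavior. The weights and distances here are illustrative, not from any particular implementation.

  import numpy as np

  def boids_step(pos, vel, dt=0.1, view=3.0, crowd=0.8,
                 sep_w=1.5, ali_w=1.0, coh_w=1.0):
      """One update of the three classic flocking rules for N birds.

      pos, vel: (N, 2) arrays of positions and velocities.
      view: how far a bird looks for flockmates; crowd: "too close" distance.
      """
      new_vel = vel.copy()
      for i in range(len(pos)):
          offsets = pos - pos[i]
          dist = np.linalg.norm(offsets, axis=1)
          mates = (dist > 0) & (dist < view)       # visible flockmates
          close = (dist > 0) & (dist < crowd)      # flockmates that are too close
          steer = np.zeros(2)
          if close.any():
              steer += sep_w * -offsets[close].mean(axis=0)          # separation
          if mates.any():
              steer += ali_w * (vel[mates].mean(axis=0) - vel[i])    # alignment
              steer += coh_w * (pos[mates].mean(axis=0) - pos[i])    # cohesion
          new_vel[i] = vel[i] + dt * steer
      return pos + dt * new_vel, new_vel

  # Start from random positions; after a few hundred steps the cloud
  # organizes into coherent moving groups with no global controller.
  pos, vel = np.random.rand(50, 2) * 10, np.random.randn(50, 2)
  for _ in range(300):
      pos, vel = boids_step(pos, vel)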


To me the solution to that is obvious and far better than the current status quo. The cars are all attached to a network and when an emergency service vehicle needs to get somewhere in a hurry there is a coordinated effort to move vehicles off the required route.

As things stand, emergency vehicles have to cope with a sizable minority of people who completely panic and actually impede their progress.


This has to work even if network reception is weak or absent. You can't be certain that 100% of cars will receive the signal and get themselves out of the way in time.


Right, so don't use the network: broadcast a signed message on a band reserved for emergency services.
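A rough sketch of what "a signed message on a reserved band" could look like at the receiving end, using Ed25519 from the Python cryptography library. The message fields and the idea of a pre-provisioned public key are my assumptions, not any real standard.

  import json
  import time
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  # Hypothetical: the emergency dispatch authority holds a key pair; the public
  # key is baked into every car at manufacture or delivered via signed updates.
  authority_key = Ed25519PrivateKey.generate()
  car_trusted_pubkey = authority_key.public_key()

  # Dispatcher side: broadcast "clear this corridor" with a timestamp so the
  # message can't be replayed later.
  payload = json.dumps({
      "type": "clear_route",
      "route": ["Main St", "5th Ave"],
      "issued_at": time.time(),
  }).encode()
  signature = authority_key.sign(payload)

  # Car side: accept only fresh, correctly signed messages.
  def handle_broadcast(payload: bytes, signature: bytes, max_age_s: float = 30.0) -> bool:
      try:
          car_trusted_pubkey.verify(signature, payload)
      except InvalidSignature:
          return False  # forged or corrupted message, ignore it
      msg = json.loads(payload)
      if time.time() - msg["issued_at"] > max_age_s:
          return False  # stale, possibly replayed
      # ...pull over / reroute according to msg["route"]...
      return True

  assert handle_broadcast(payload, signature)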


> This has to work even if network reception is weak or absent.

Or hacked maliciously.


Oh, you can have that in Bucharest even with regular cars. Lanes are pretty fluid there, as is the preferred direction of travel; I've lived there for only two years and I've seen more vehicles go in the opposite direction ('ghost riders' we call them here) than anywhere else over the rest of my life. Romanian traffic is super dangerous, especially if you are a pedestrian, and you can just about forget cycling in traffic. It is also the only place where a car behind me honked to get me to move over when I was walking on the sidewalk.


That is 101 for autonomous driving. Solved years ago.


People at Tesla and other autonomous driving companies are, of course, aware of and worried about such situations. If you have a few hours and want to see many of the technologies and methods that Tesla is using to solve them, check out Tesla's recent "AI Day" presentation. Tesla is quite cool about openly discussing the problems they have solved, the problems they still have, and how they are trying to solve them.

An incomplete list includes:

1) Integrating all the camera views into one 3-D vector space before training the neural network(s) (a toy sketch of this idea follows the list).

2) A large in-house group (~1,000 people) doing manual labeling of objects in that vector space, not on each camera.

3) Training neural networks for labeling objects.

4) Finding edge cases where the autocar failed (an example is when it loses track of the vehicle in front of it because its view is obscured by a flurry of snow knocked off that car's roof), and then querying the large fleet of cars on the road to get back thousands of similar situations to help with training.

5) Overlaying multiple views of the world from many cars to get a better vector space mapping of intersections, parking lots, etc

6) New custom-built hardware for high-speed training of neural nets.

7) Simulations to train on rarely encountered situations, like the one you describe, or situations that are very difficult to label (like a plaza with 100 people in it or a road in an Indian city).

8) Matching 3-D simulations to what the car's cameras would see, using many software techniques.
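A toy illustration of point (1), not Tesla's actual pipeline: detections from several cameras, each in its own camera frame, get mapped into one shared vehicle frame using each camera's known mounting rotation and translation, so downstream labeling and training can work in a single space. All extrinsics and detections below are made-up numbers.

  import numpy as np

  def to_vehicle_frame(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
      """Map (N, 3) points from a camera's frame into the shared vehicle frame.

      R (3x3) and t (3,) are the camera's mounting rotation and translation,
      i.e. its extrinsic calibration relative to the vehicle.
      """
      return points_cam @ R.T + t

  # Illustrative extrinsics for two cameras.
  front_R, front_t = np.eye(3), np.array([2.0, 0.0, 1.2])
  left_R = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])   # left camera rotated 90 degrees
  left_t = np.array([1.0, 0.8, 1.2])

  front_detections = np.array([[10.0, 0.5, 0.0]])  # something 10 m ahead of the front camera
  left_detections = np.array([[3.0, -0.2, 0.0]])   # something 3 m out from the left camera

  # One combined "vector space" that every camera's detections live in.
  fused = np.vstack([
      to_vehicle_frame(front_detections, front_R, front_t),
      to_vehicle_frame(left_detections, left_R, left_t),
  ])
  print(fused)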


They're cool about openly discussing it because this is all industry standard stuff. It's a lot of work and impressive, but table stakes for being a serious player in the AV space, which is why the cost of entry is in the billions of dollars.


> People at Tesla and other autonomous driving companies, of course are aware and worry about such situations.

Yeah, a Tesla couldn't possibly drive into a stationary, clearly visible fire engine or concrete barrier, on a dry day, in direct sunlight.


As awful a failure as that is, and as fun as it is to mock Tesla for it, the claim was that they're aware of edge cases and working on fixing them, not that they're already fixed. So your criticism doesn't really make sense.


A system dealing with 'edge cases' by special-casing them is not going to work for driving; driving is a continuous string of edge cases, and if you approach the problem that way you fix one problem but create the next.


I don't think anybody said anything about special casing them.

I dislike saying anything in defense of tesla's self-driving research, but let's be accurate.


Neither could a human, I'm sure.

At least, I never would...


If you never fail, you aren't moving fast enough.

A million people are killed globally each year by motor vehicles. Staggering amounts of pain and injuries. Massive amounts of property damage. Tesla's cars are not supposed to be left to drive themselves. The chance to prevent so much carnage seems worth letting some people driving Teslas, who fail to pay attention to the road, suffer the consequences of poor decisions.

Plus, these problems are likely to be mostly fixed due to the fact that they happened.


> If you never fail, you aren't moving fast enough.

Start-up religion doesn't really work when there are lives on the line. That's fine for your social media platform du jour but please don't bring that attitude to anything that has 'mission critical' in the description. That includes medicine, finance, machine control, traffic automation, utilities and so on.


But what about the million people who die every year now? Are the few thousand people who will die because of AI mishaps worth more than the million who die due to human mishaps?

Not to say that we shouldn't be cautious here, but over-caution kills people too



You described a lot of effort, but no results.


From what I've seen of Tesla's solution at least - even busy city centers and complex parking lots are very difficult for present-day autonomous driving technologies. The understanding level necessary just isn't there.

These things are excellent - undeniably better than humans at the boring stuff, highway driving, even major roads. They can rightfully claim massive mileage with high safety levels in those circumstances... but throw them into nastier conditions where you have to understand what objects actually are and things quickly seem to fall apart.


That is like trying to judge modern supercomputing by your experiences with a 6-year-old Dell desktop.

Waymo drove 29,944.69 miles between "disengagements" last year. That is an average California driver needing to touch the wheel once every 2.3 years.
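Back-of-the-envelope on that 2.3-year figure, assuming the commonly cited ~13,000 miles per year for an average California driver (my assumption, not from Waymo's report):

  miles_between_disengagements = 29_944.69   # figure cited above
  avg_annual_miles_ca = 13_000               # assumed average CA driver mileage
  years_per_disengagement = miles_between_disengagements / avg_annual_miles_ca
  print(round(years_per_disengagement, 1))   # ~2.3 years between "touches"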

Tesla by comparison is classed as a SAE Level 2 driver assist system and isn't even required to report metrics to the state. While they sell it to consumers as self-driving, they tell the state it is basically fancy cruise control.


"disengagements" is a disingenuous statistic - that'd be like a human driver just giving up and getting out of the car.

What you want is "interventions". Additionally, look at where those miles were driven. Most of them are some of the most simplistic road driving scenarios possible.


> That is an average California driver needing to touch the wheel once every 2.3 years

From my experience of California driving, that doesn't sound too bad. Compared to the entire Eastern seaboard, y'all have great roads and better drivers.


> Waymo drove 29,944.69 miles between "disengagements" last year.

You know better. If most of those miles were in sunny Mountain View suburbs, they don't count.


It's unclear to me why Tesla's solution is so discussed. They are definitely not on the same playing field as Waymo or even Cruise.


There's a lot of people on here who have invested in Tesla


Also a lot of people on here who have actually experienced Tesla's self-driving; certainly a lot more than have experienced any other self-driving product (at least above a "lane-keeping" system).


Are there a lot of people who have experienced tesla's self-driving?

As I understand it, if you pay for FSD, you don't actually get anything like self-driving, you just get lane-changes on the highway in addition to the lane-keeping. Effectively, you get lane-keeping, which you have if you don't pay too.

All the videos of "FSD driving" are from a small number of beta-testers, and there's no way to opt into the beta.

Because of that, my assumption would be very few people on here have experienced tesla's self-driving. It's only open to a small number of beta testers, whether you have purchased it or not.

On the other hand, waymo is available for the general public to use, though only in specific geographic areas.


Would you describe Tesla's tendency to crash full speed into stopped emergency vehicles during highway driving as "excellent"?

https://www.cnn.com/2021/08/16/business/tesla-autopilot-fede...


While controversial, we tolerate a great deal of casualties caused by human drivers without trying to illegalise those.

While we can (and should) hold autonomous vehicle developers to a much, much higher standard than we hold human drivers, it is precisely because we expect excellence from them.


We actually do "illegalise" casualties by human drivers.


I'm sure the grandparent poster meant banning human driving entirely in order to prevent human driving casualties.


The failure modes are going to be very strange and the technology is not strictly comparable to a human driver. It is going to fail in ways that a human never would. Not recognizing obstacles, misrecognizing things, sensors being obscured in a way humans would recognize and fix (you would never drive if you couldn't see out of your eyes!).

It is also possible that if it develops enough it will succeed in ways that a human cannot, such as extremely long monotonous cross-country driving (think 8-hour highway driving) punctuated by a sudden need to intervene within seconds or even milliseconds. Humans are not good at this but technology is. Autonomous cars don't get tired or fatigued. Code doesn't get angry or make otherwise arbitrary and capricious decisions. Autonomous cars can react in milliseconds, whereas humans take far longer.

There will undoubtedly be more accidents if the technology is allowed to develop (and I take no position on this).


That's Autopilot, not the FSD beta, though; at this point it's probably 10 generations old.


Ah yes, because "autopilot" is not autonomous.


Well yeah, it's like other autopilots:

An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems).


That's just devious marketing on Tesla's part. They can always excuse customer misunderstandings with the original meaning you explained, while normal people can safely be expected to interpret "autopilot" as full self-driving (and I'd be surprised if they hadn't actually tested this with focus groups beforehand). So not really lying (great for the lawsuits), but constructing misunderstanding on purpose (great for the brand image).


Except for the manual and all the warnings that pop up that say you need to pay attention.

3000 people die every day in automobile accidents, 10% of which are from people who are sleepy. Even standard autopilot is better than a tired driver


I would say it's better than humans' tendency to drive full speed into anything while impaired by a drug. Especially since the bug was fixed in Tesla's case, but the bug in the human case is probably un-fixable.


Drugs (or alcohol)? There are so many more failure modes that drugs are the least of my concerns. Especially of unspecified type. I'm not the least bit worried about drivers hopped up on tylenol. Humans get distracted while driving, by texting, or simply boredom and start daydreaming. Don't forget about driving while tired. Or emotionally disturbed (divorce or a death; road rage). Human vision systems are also pretty frail and have bad failure modes, eg the sun is close to the horizon and the driver is headed towards the sun.


Computer vision systems also have bad failure modes. The camera sensors typically used today have better light sensitivity but less dynamic range than the human eye.


They fixed driving into stationary things? That's news to me. What's your source?

It's not an easy problem to fix at high speed without false positives, and they seem to really hate false positives.


I live in north-central Idaho. 10 minutes from 2 state-universities, but in an otherwise relatively rural part of the county with a 1/4 mile long, somewhat steep driveway.

Every year, I'm amazed at how quickly our personal "veneer of civilization" collapses in the snow.

The prior owners of our home would just keep their kids home from school, and work from home an average of "about a week every winter."

We're a little more aggressive with snow removal, but there are still mornings every winter where I'm getting up at 5 to spend a couple hours plowing and blowing out drifts on our driveway (after typically doing the same thing the night before) just in order for my wife to make it down to our county road which might still have a foot or so of snow covering it.

Similarly, in windy snow-covered conditions, there are a couple spots between us and town where the snow regularly drifts back over the road in a matter of hours, causing a "well, I know the road goes about here, I think I can make it through these drifts if I floor it so here it goes" situation.

Even when the roads are well plowed and clear, there are plenty of situations where it's difficult for me, a human, to distinguish between the plowed-but-still-white-road and the white snow all around it in some lighting conditions.

And let's take snow out of it. Our fastest route into town involves gravel roads. And our paved route is chip-sealed every couple of years, and typically doesn't get a divider line drawn back on it for 6 months or so after.

Which is all to say, I think it's going to be quite a while before I have a car that can autonomously drive me into town in the summer, and global warming aside, I'll probably never have one that can get me there reliably in the winter.


Northern Canada here. We have all been down that road. I had a rental once that wouldn't let me back up as the sensor was frozen over. I doubt AI will ever handle winter situations without help.


> I doubt AI will ever handle winter situations without help.

Sure it will, at least eventually. However, I suspect the humans at the time won’t like the answer: that it’s not always safe to drive in these conditions, and then the car refusing to drive autonomously, even if it is technically capable of navigating the route. It may deem the risk of getting stuck, etc. to be too high. Or you may need to accept a report to your insurance company that you’ve opted to override the safety warnings, etc.


Lol. Good luck selling that in the north, the mountains, farm country or anywhere else more than 10 miles from a Starbucks. Sometimes lives depend on being able to move, and there isn't time to reprogram the robot to understand the risk dynamic. Malfunctioning sensors or broken high-beam circuits (Tesla) are no excuse for a car to remain stuck in a snowbank.


Why do you live in a place where you have to shovel snow from 5 am on a weekday? I mean, I appreciate building character, but at some point you're just deciding to live life on hard mode.


First, they are "plowing and blowing", not shoveling (or not shoveling much) - if you have a significant amount of snow, shoveling is just impractical as well as back-breaking. Second, even if you don't get snow overnight, you get the drifting they mention, which is where winds blow snow onto the nice clean driveway you had cleared previously. Drifting can be quite significant with lots of snow and big open areas!

Lastly, not the OP, but winter is my favorite season for the most part, and I love being around lots of snow!


A large band of the United States reliably gets heavy overnight snows. In my case we're talking an hour west of a major metro--Boston. These days, the inevitable travel snafus notwithstanding, I just stay home. But when I had to go into an office barring a state of emergency digging out in early am was a regular occurrence.


Jesus christ, HN. Not everyone is an IT guy with a comfortable salary. Some people have families or other roots they don't want to sever, or lack the money and useful skills to move...


Autonomous driving systems are set at various levels of autonomy.

Level 0 is no automation, level 1 is just a dumb cruise control, level 2 is radar adaptive cruise control plus lane keeping (which is where most production systems like Tesla Autopilot and GM Supercruise are currently at). Level 2 still requires full human supervision, if you engaged it on the road above it would either fail to engage or you'd crash and it would be your fault. Level 3 is the same plus an ability to handle some common driving tasks, like changing lanes to pass a slower vehicle.

Level 4 is where it gets really interesting, because it's supposed to handle everything involved in navigating from Point A to Point B. It's supposed to stop itself in the event of encountering something it can't handle, so you could theoretically take a nap while it drove.

However, an important limitation is that Level 4 autonomy is geofenced, it's only allowed in certain areas on certain roads. Also, it can disable itself in certain conditions like construction or weather that inhibit visibility. Waymo vehicles like these are ostensibly level 4, if you tell them to drive through a back road in the snow they'll simply refuse to do so. It's only useful in reasonably good conditions in a few big cities.

Level 5 is considered to be Point A to Point B, for any two navigable points, in any conditions that the vehicle can traverse. You could build a Level 5 vehicle without a driver's seat, much less an alert driver. I kind of think this will require something much closer to artificial general intelligence; level 4 is just really difficult conventional programming.


It's not obvious that Level 4 falls within what one would call really difficult conventional programming. That level entails something like "in the event of any exceptional situation, find a safe stopping location and safely bring the car to a stop there," and even that alone seems incredibly hard.


Actually it doesn't matter if your cruise control is dumb or adaptive. If you have only cruise control, of either kind, then it's level 1.

And if you have lane-keeping but not cruise control, that's also level 1.

The difference between 1 and 2 is weird.


I'd still buy a self-driving car that refuses to drive on that road.


In the back seat of the Waymo there's a "Pull Over" emergency lever.


You can't always "pull over."


Lots of roads like that in Britain as well, and the speed limit is 60 mph/100 km/h. Not uncommon for two cars on a single-track road to adjust speed to pass each other at a passing place without slowing down much, so at a closing speed of over 100 mph. Perfectly safe for human drivers who know the roads.


This sounds like the sort of "perfectly safe for human drivers who know the roads" that actually results in a fair number of road deaths.


If you look at the accident maps, there are almost none on single track roads and lots on twin track roads. My hypothesis is that driving on a single track road feels much more risky so people pay more attention and slow down more on blind corners. Also, it’s not possible to overtake and a lot of accidents are related to overtaking.


Believe it or not there are tons of two-way roads like that just 30 minutes from Silicon Valley that self-driving cars could practice on. Here's an example: https://goo.gl/maps/1CVb7Mpiwv1VL2sd7


There're also similar roads 30 minutes from Silicon Valley that have all that, plus residences, pedestrians, parked cars, sheer cliffs, unclear driveway splits, and porta-potties, eg. https://goo.gl/maps/57jzzK6fvtCqvu5w5

Strangely I've never seen Waymo vehicles practicing on that. They're all over Mountain View, but I have never once seen them in the mid-peninsula hills.


Just have them drive up to the Lick Observatory and back.


That’s just stunningly beautiful - Czech countryside is something else!

I’d gladly buy a self-driving car that requires some additional input on such a road and has additional aids to spot oncoming traffic I can’t see behind the tractor that’s a few hundred meters forward of the spot linked to. It would still be safer.

To really make things work, we need cars to be able to negotiate the way humans do on the right of way, etc. There is a lot of non-verbal (and when that fails, very verbal) communication while driving. Currently, cars can’t communicate with each other and the pedestrians, which limits possibilities a lot.


You can replicate that without going overseas. Send that autonomous vehicle over the Golden Gate bridge, take any of the next few exits, and turn right. The street I live on is a paved horse path from the 1910s. No snowdrifts, but a lot of aggressive drivers angrily refusing to back up, which will be fun to see software deal with!


As someone who learned to drive in the city, those roads make me sweat bullets.

My grandpa who drives on those roads primarily, sweats bullets in the city.

Maybe you’ll have different driving models to load in different scenarios …


My mother thinks nothing of driving on deserted roads in significant unplowed snow. She gets nervous on a dry, Texas highway at rush hour.


Yeah, that seems perfectly rational. There is nothing to hit on a deserted highway. Driving in traffic, on the other hand, is more stressful and has worse downsides.


> significant unplowed snow

Spinning out on a deserted highway and hitting a snowbank and getting trapped in your car kills a large number of people every year. Even with smartphones, calls for help can't always be responded to in time, resulting in death. (Have an emergency kit in your car if you live above the snow line!)

Driving in city traffic can be quite harrowing, but hitting another car at 20-30 mph isn't usually fatal. (Wear your seatbelts!)

The point that the GP post was trying to make is that humans have different preferences, and what seems dangerous to one doesn't seem (and possibly isn't) dangerous to another. Humans are also notoriously bad at judging danger, e.g. some people feel threatened by the idea of wearing paper masks.


The computer doesn't have to be perfect; it just has to be better than a human.


Adding to this to really drive the point home: it doesn’t even need to be better than a human that’s good at driving. It only needs to be better than the average human driver. Anecdotally speaking, that’s not such a high bar to pass (relative to the topic at hand).


For general acceptance I think it has to be better than how good the average human thinks they are at driving.

Secondly, its dumbest errors have to be better than what the average human thinks their dumbest errors would be. If there is an accident and everyone thinks they would never have made this error, it will crush the acceptance.

Looking at the general accident stats and telling people that, on average, there are fewer deaths on the road, but that they might die in a stupid accident they would never have been in had they been driving themselves, will be a very hard pill to swallow. Most people prefer to have the illusion of control even if, statistically, it means worse expectations.




