Welcoming our first riders in San Francisco (waymo.com)
620 points by EvgeniyZh on Aug 24, 2021 | 669 comments



“All rides in the program will have an autonomous specialist on board for now”

This tells me that we’re still a long way from full level 4 (and certainly level 5) autonomy in a busy city like San Francisco. The edge cases requiring immediate human attention are still too frequent for the human safety driver to be remote, as is the case in Phoenix.

Also, just a reminder that Waymo in Phoenix is nowhere close to being level 5, since it is still heavily geofenced and requires those remote safety monitors. I still think that true level 5 (i.e. the ability to drive autonomously everywhere, with zero human oversight, with a safety record equivalent to the median human driver) requires AGI. Would love to be proven wrong!


>“All rides in the program will have an autonomous specialist on board for now” This tells me that we’re still a long way

Did you expect something different? I can't really see a boardroom writing a roadmap that goes straight from

  test rides (no passengers) with a backup driver onboard
to

  actual rides with passengers - no backup driver onboard
with no in-between steps.


Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

Back in 1999 the Chinese government announced that airline execs would be airborne at the changeover, as reassurance that aircraft were safe from Y2K. Like them or hate them, the incentive logic was sound.

https://www.wired.com/1999/01/y2k-in-china-caught-in-midair/

https://www.cbc.ca/news/science/chinese-airlines-won-t-be-bi...


I think of that as a parachute-rigger solution (https://en.wikipedia.org/wiki/Parachute_rigger).

Historically, people packing parachutes could be randomly selected to jump with the parachute they had packed.


It remains true in the military. Refusal to jump on a chute you have packed will cause you to lose your rigger qualifications.


> Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

How do you know board members aren't already using the Waymo product heavily? It still doesn't mean they can go straight from board members using Waymo without backup drivers to arbitrary customers using Waymo without backup drivers.


Because if they did, they would surely tout that fact at every opportunity. Musk doesn't drive a Ferrari to work. He drives a Tesla to promote his company's product line.


Elon takes a private jet to work.


Possibly to drive between their various houses?


> Possibly to drive between their various houses?

I'm guessing this is the old joke about Eric Schmidt?


"I save money by using nest thermostats in my various houses"


In Phoenix, people are being driven without a safety driver. They are doing these in between steps.


Maybe open up Waymo rides to people with Alphabet shares? Then the owners and customers are the same group. Unfortunately multiple spouses aren't really allowed in America, or you could limit ridership to spouses.


The acid test will be (4) Board members use product on their children.


Followed by (5) board members claiming legal bills for child custody disputes after being caught leaving kids unsupervised in the custody of a 3000lb robot.


Arguably all modern vehicles are incredibly heavy robots. AVs are just supposed to be better robots.


No more so than a bicycle is a 20lb robot or an airplane is a 20,000lb robot. Clearly a self-driving vehicle is a different paradigm.


"The average car has 30 to 50 different computers, and high-end cars have as many as 100, and they're accompanied by 60 to 100 different electronic sensors." [1]

The median modern bicycle has 0 computers and sensors.

[1] https://www.ceinetwork.com/cei-blog/auto-computers-means-com...


Skin in the game. Nassim Taleb would agree with this measure!


Always makes me happy to see some good philosophy poking its head up. Great choice too, since the AV problem is fundamentally about Black Swans.


Waymo is already doing test rides with neither passengers nor backup drivers in CA, so they wouldn't be jumping from no passengers plus safety drivers to paid passengers without safety drivers if they offered paid, fully driverless rides.


The book Halting State by Charlie Stross (2007) [1] had an interesting self-driving car model, where the car was autonomous on simple roads like highways/motorways, and a human driver took over remotely for more complex city streets.

Of course the book showed some failure modes for that, but if network coverage and latency, as well as "backup driver response time", could be made good enough, perhaps this sort of model could have an acceptable risk trade-off.

[1] https://en.wikipedia.org/wiki/Halting_State


For the computer it doesn't make much difference whether there's a passenger or not.


For a company, having paying customers does matter a lot.

Having customers paying for the R&D will help make it sustainable.


I suspect the revenue from passengers in this case looks like a rounding error. But the PR and feedback from early adopters is very valuable.


Yeah, you can put the people with the good feedback in your ad videos and ignore the ones who went through the windshield.


I wonder where "a country road with no lanes which barely fits 1.5 cars in winter in the Czech Republic" is on your scale... Something like this, just imagine the snowdrifts around it https://www.google.com/maps/@49.080269,16.4569252,3a,75y,307...


Now add the completely blind switchback turns, where your "visibility" into whether another car is coming comes from a convex mirror nailed to a tree or post at the apex of the corner - if it hasn't fallen off or been knocked crooked...

Basically all of Italy.


Or an ambulance going in the opposite direction (because that's the only available choice) on a boulevard in a busy capital city like Bucharest. I saw that a couple of hours ago: the ambulance met a taxi which was going the right way, but of course the taxi had to stop and find a way for the ambulance to pass (by partly going onto the sidewalk). I said to myself that unless we get to AGI there's no way for an "autonomous" car to handle that situation correctly.


You don't even need to go that far. The other day I saw an ambulance going down Burrard Street in Vancouver, BC without lights or sirens; then I guess a call came in, and it put on both and turned around. It's a six-lane street where normal cars aren't allowed to just turn around. It was handled really well by everyone involved, mind you, it wasn't unsafe, but I doubt a computer could've handled it as well as the drivers did.


Very complex-looking behavior sometimes comes from very simple, easy-to-implement principles, like bird flocking behavior https://en.wikipedia.org/wiki/Flocking_(behavior)#Rules

I don't believe people are using their full AGI when driving (and the full "AGI" may well turn out to be a set of basic pattern-matching capabilities which we haven't discovered yet). After decades of driving, the behavior is pretty automatic, and when presented with a complex situation, following a simple rule, like just braking, is frequently the best response, or close to it.
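
Those flocking rules really do fit in a few lines. A minimal, purely illustrative sketch (function name and all constants invented): three local rules - separation, alignment, cohesion - produce flock-like behavior with no central controller.

  import numpy as np

  def boids_step(pos, vel, radius=2.0, dt=0.1):
      # One update step for N boids; pos and vel are (N, 2) float arrays.
      new_vel = vel.copy()
      for i in range(len(pos)):
          d = np.linalg.norm(pos - pos[i], axis=1)
          nbrs = (d < radius) & (d > 0)                   # neighbors in local radius
          if not nbrs.any():
              continue
          separation = (pos[i] - pos[nbrs]).mean(axis=0)  # steer away from crowding
          alignment = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
          cohesion = pos[nbrs].mean(axis=0) - pos[i]      # steer toward local center
          new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
      return pos + new_vel * dt, new_vel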


To me the solution to that is obvious and far better than the current status quo. The cars are all attached to a network and when an emergency service vehicle needs to get somewhere in a hurry there is a coordinated effort to move vehicles off the required route.

As things stand emergency vehicles have to cope with a reasonable minority of people who completely panic and actually impede their progress.


This has to work even if network reception is weak or absent. You can't be certain that 100% of cars will receive the signal and get themselves out of the way in time.


Right, so don't use the network: broadcast a signed message on a band reserved for emergency services.
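
A minimal sketch of that idea, assuming Ed25519 signatures via the Python "cryptography" package and an invented message format: the authority signs the broadcast, and cars verify it against a preinstalled public key, so no two-way connectivity is needed.

  from cryptography.hazmat.primitives.asymmetric import ed25519

  authority_key = ed25519.Ed25519PrivateKey.generate()  # held by emergency services
  public_key = authority_key.public_key()               # preinstalled in every car

  message = b"EV-route v1; corridor=Main St N; clear-lanes=all; ttl=120s"
  signature = authority_key.sign(message)

  # Receiving car: verify() raises InvalidSignature if the message is forged.
  public_key.verify(signature, message)
  print("verified: pulling over")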


> This has to work even if network reception is weak or absent.

Or hacked maliciously.


Oh you can have that in Bucharest even with regular cars. Lanes are pretty fluid there, as is the preferred direction of travel, I've lived there for only two years and I've seen more vehicles go in the opposite direction ('ghost riders' we call them here) than anywhere else over the rest of my life. Romanian traffic is super dangerous, especially if you are a pedestrian and you can just about forget cycling in traffic. It is also the only place where a car behind me honked to get me to move over when I was walking on the sidewalk.


That is 101 for autonomous driving. Solved years ago.


People at Tesla and other autonomous driving companies are, of course, aware of and worried about such situations. If you have a few hours and want to see many of the technologies and methods Tesla is using to solve them, check out Tesla's recent "AI day" presentation. Tesla is quite open about discussing the problems they have solved, the problems they still have, and how they are trying to solve them.

An incomplete list includes:

1) Integrating all the camera views into one 3-D vector space before training the neural network(s).

2) A large in-house group (~1000 people) doing manual labeling of objects in that vector space, rather than on each camera.

3) Training neural networks for labeling objects.

4) Finding edge cases where the autocar failed (an example is losing track of the vehicle in front when the autocar's view is obscured by a flurry of snow knocked off that vehicle's roof), and then querying the large fleet of cars on the road to get back thousands of similar situations to help training.

5) Overlaying multiple views of the world from many cars to get a better vector space mapping of intersections, parking lots, etc.

6) New custom-built hardware for high-speed training of neural nets.

7) Simulations to train on rarely encountered situations, like the one you describe, or situations that are very difficult to label (like a plaza with 100 people in it or a road in an Indian city).

8) Matching 3-D simulations to what the car's cameras would see, using many software techniques.


They're cool about openly discussing it because this is all industry standard stuff. It's a lot of work and impressive, but table stakes for being a serious player in the AV space, which is why the cost of entry is in the billions of dollars.


> People at Tesla and other autonomous driving companies, of course are aware and worry about such situations.

Yeah, a Tesla couldn't possibly drive into a stationary, clearly visible fire engine or concrete barrier, on a dry day, in direct sunlight.


As awful a failure as that is, and as fun as it is to mock Tesla for it, the claim was that they're aware of edge cases and working on fixing them, not that they're already fixed. So your criticism doesn't really make sense.


A system dealing with 'edge cases' by special-casing them is not going to work for driving. Driving is a continuous string of edge cases, and if you approach the problem that way you fix one problem but create the next.


I don't think anybody said anything about special casing them.

I dislike saying anything in defense of Tesla's self-driving research, but let's be accurate.


Neither could a human, I'm sure.

At least, I never would...


If you never fail, you aren't moving fast enough.

A million people are killed globally each year by motor vehicles. Staggering amounts of pain and injuries. Massive amounts of property damage. Tesla's cars are not supposed to be left to drive themselves. The chance to prevent so much carnage seems worth letting some people driving Teslas, who fail to pay attention to the road, suffer the consequences of poor decisions.

Plus these problems are likely to be mostly fixed due to the fact that they happened.


> If you never fail, you aren't moving fast enough.

Start-up religion doesn't really work when there are lives on the line. That's fine for your social media platform du jour but please don't bring that attitude to anything that has 'mission critical' in the description. That includes medicine, finance, machine control, traffic automation, utilities and so on.


But what about that million people who die every year now? Are the few thousand people who will die because of AI mishaps worth more than the million who die due to human mishaps?

Not to say that we shouldn't be cautious here, but over-caution kills people too



You described a lot of effort, but no results.


From what I've seen of Tesla's solution at least - even busy city centers and complex parking lots are very difficult for present-day autonomous driving technologies. The necessary level of understanding just isn't there.

These things are excellent - undeniably better than humans at the boring stuff: highway driving, even major roads. They can rightfully claim massive mileage with high safety levels in those circumstances... but throw them into nastier conditions where you have to understand what objects actually are, and things quickly seem to fall apart.


That is like trying to judge modern supercomputing by your experiences with a 6-year-old Dell desktop.

Waymo drove 29,944.69 miles between "disengagements" last year. At roughly 13,000 miles per year, that is an average California driver needing to touch the wheel once every 2.3 years.

Tesla by comparison is classed as an SAE Level 2 driver-assist system and isn't even required to report metrics to the state. While they sell it to consumers as self-driving, they tell the state it is basically fancy cruise control.


"disengagements" is a disingenuous statistic - that'd be like a human driver just giving up and getting out of the car.

What you want is "interventions". Additionally, look at where those miles were driven. Most of them are some of the most simplistic road driving scenarios possible.


> That is an average California driver needing to touch the wheel once every 2.3 years

From my experience of California driving, that doesn't sound too bad. Compared to the entire Eastern seaboard, y'all have great roads and better drivers.


> Waymo drove 29,944.69 miles between "disengagements" last year.

You know better. If most of those miles were in sunny Mountain View suburbs, they don't count.


It's unclear to me why Tesla's solution is so discussed. They are definitely not on the same playing field as Waymo or even Cruise.


There are a lot of people on here who have invested in Tesla.


Also a lot of people on here have actually experienced Tesla's self-driving. Certainly a lot more than have experienced any other self-driving product (at least beyond a "lane-keeping" system).


Are there a lot of people who have experienced Tesla's self-driving?

As I understand it, if you pay for FSD, you don't actually get anything like self-driving, you just get lane-changes on the highway in addition to the lane-keeping. Effectively, you get lane-keeping, which you have if you don't pay too.

All the videos of "FSD driving" are from a small number of beta-testers, and there's no way to opt into the beta.

Because of that, my assumption would be that very few people on here have experienced Tesla's self-driving. It's only open to a small number of beta testers, whether you have purchased it or not.

On the other hand, waymo is available for the general public to use, though only in specific geographic areas.


Would you describe Tesla's tendency to crash full speed into stopped emergency vehicles during highway driving as "excellent"?

https://www.cnn.com/2021/08/16/business/tesla-autopilot-fede...


While controversial, we tolerate a great deal of casualties caused by human drivers without trying to outlaw human driving.

We can (and should) hold autonomous vehicle developers to a much, much higher standard than we hold human drivers, precisely because excellence is achievable for them.


We actually do "illegalise" casualties by human drivers.


I'm sure the grand poster meant banning human driving entirely in order to prevent human driving casualties.


The failure modes are going to be very strange and the technology is not strictly comparable to a human driver. It is going to fail in ways that a human never would. Not recognizing obstacles, misrecognizing things, sensors being obscured in a way humans would recognize and fix (you would never drive if you couldn't see out of your eyes!).

It is also possible that if it develops enough it will succeed in ways that a human cannot, such as extremely long, monotonous cross-country driving (think 8-hour highway drives) punctuated by a sudden need to intervene within seconds or even milliseconds. Humans are not good at this but technology is. Autonomous cars don't get tired or fatigued. Code doesn't get angry or make otherwise arbitrary and capricious decisions. Autonomous cars can react in milliseconds, whereas humans take far longer.

There will undoubtedly be more accidents if the technology is allowed to develop (and I take no position on this).


That's Autopilot, not the FSD beta, though; at this point it's probably 10 generations old.


Ah yes, because "autopilot" is not autonomous.


Well yeah, it's like other autopilots:

An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems).


That's just devious marketing on Tesla's part. They can always excuse customer misunderstandings with the original meaning you explained, while normal people can safely be expected to interpret "autopilot" as full self-driving (and I'd be surprised if they hadn't actually tested this with focus groups beforehand). So not really lying (great for the lawsuits), but constructing misunderstanding on purpose (great for the brand image).


Except for the manual and all the warnings that pop up that say you need to pay attention.

3000 people die every day in automobile accidents, 10% of them from people who are sleepy. Even standard Autopilot is better than a tired driver.


I would say it's better than the human's tendency to drive full speed into anything while impaired by a drug. Especially since the bug was fixed in Tesla's case, but the bug in the human's case is probably un-fixable.


Drugs (or alcohol)? There are so many more failure modes that drugs are the least of my concerns. Especially of unspecified type. I'm not the least bit worried about drivers hopped up on Tylenol. Humans get distracted while driving, by texting, or simply by boredom and daydreaming. Don't forget about driving while tired. Or emotionally disturbed (divorce or a death; road rage). Human vision systems are also pretty frail and have bad failure modes, e.g. when the sun is close to the horizon and the driver is headed towards it.


Computer vision systems also have bad failure modes. The camera sensors typically used today have better light sensitivity but less dynamic range than the human eye.


They fixed driving into stationary things? That's news to me. What's your source?

It's not an easy problem to fix at high speed without false positives, and they seem to really hate false positives.


I live in north-central Idaho, 10 minutes from two state universities, but in an otherwise relatively rural part of the county, with a 1/4-mile-long, somewhat steep driveway.

Every year, I'm amazed at how quickly our personal "veneer of civilization" collapses in the snow.

The prior owners of our home would just keep their kids home from school, and work from home an average of "about a week every winter."

We're a little more aggressive with snow removal, but there are still mornings every winter where I'm getting up at 5 to spend a couple hours plowing and blowing out drifts on our driveway (after typically doing the same thing the night before) just in order for my wife to make it down to our county road which might still have a foot or so of snow covering it.

Similarly, in windy snow-covered conditions, there are a couple spots between us and town where the snow regularly drifts back over the road in a matter of hours, causing a "well, I know the road goes about here, I think I can make it through these drifts if I floor it so here it goes" situation.

Even when the roads are well plowed and clear, there are plenty of situations where it's difficult for me, a human, to distinguish between the plowed-but-still-white-road and the white snow all around it in some lighting conditions.

And let's take snow out of it. Our fastest route into town involves gravel roads. And our paved route is chip-sealed every couple of years, and typically doesn't get a divider line drawn back on it for 6 months or so after.

Which is all to say, I think it's going to be quite a while before I have a car that can autonomously drive me into town in the summer, and global warming aside, I'll probably never have one that can get me there reliably in the winter.


Northern Canada here. We have all been down that road. I had a rental once that wouldn't let me back up because the sensor was frozen over. I doubt AI will ever handle winter situations without help.


> I doubt AI will ever handle winter situations without help.

Sure it will, at least eventually. However, I suspect the humans at the time won’t like the answer: that it’s not always safe to drive in these conditions, and then the car refusing to drive autonomously, even if it is technically capable of navigating the route. It may deem the risk of getting stuck, etc. to be too high. Or you may need to accept a report to your insurance company that you’ve opted to override the safety warnings, etc.


Lol. Good luck selling that in the north, the mountains, farm country, or anywhere else more than 10 miles from a Starbucks. Sometimes lives depend on being able to move, and there isn't time to reprogram the robot to understand the risk dynamic. Malfunctioning sensors or broken high-beam circuits (Tesla) are no excuse for a car to remain stuck in a snowbank.


Why do you live in a place where you have to shovel snow from 5am on a weekday? I mean, I appreciate building character, but at some point you're just deciding to live life on hard mode.


First, they are "plowing and blowing", not shoveling (or not shoveling much) - if you have a significant amount of snow, shoveling is just impractical as well as back-breaking. Second, even if you don't get snow overnight, you get the drifting they mention, which is where winds blow snow onto the nice clean driveway you had cleared previously. Drifting can be quite significant with lots of snow and big open areas!

Lastly, not the OP, but winter is my favorite season for the most part, and I love being around lots of snow!


A large band of the United States reliably gets heavy overnight snows. In my case we're talking an hour west of a major metro--Boston. These days, the inevitable travel snafus notwithstanding, I just stay home. But when I had to go into an office barring a state of emergency digging out in early am was a regular occurrence.


Jesus christ HN. Not everyone is an IT guy with a comfortable salary. Some people have families or other roots they don't want to sever, or lack the money and useful skills to move...


Autonomous driving systems are set at various levels of autonomy.

Level 0 is no automation, level 1 is just dumb cruise control, and level 2 is radar adaptive cruise control plus lane keeping (which is where most production systems like Tesla Autopilot and GM Super Cruise currently are). Level 2 still requires full human supervision; if you engaged it on the road above, it would either fail to engage or you'd crash and it would be your fault. Level 3 is the same plus the ability to handle some common driving tasks, like changing lanes to pass a slower vehicle.

Level 4 is where it gets really interesting, because it's supposed to handle everything involved in navigating from Point A to Point B. It's supposed to stop itself in the event of encountering something it can't handle, so you could theoretically take a nap while it drove.

However, an important limitation is that Level 4 autonomy is geofenced; it's only allowed in certain areas on certain roads. Also, it can disable itself in conditions like construction or weather that inhibit visibility. Waymo vehicles like these are ostensibly level 4; if you tell them to drive through a back road in the snow, they'll simply refuse to do so. It's only useful in reasonably good conditions in a few big cities.

Level 5 is considered to be Point A to Point B, for any two navigable points, in any conditions that the vehicle can traverse. You could build a Level 5 vehicle without a driver's seat, much less an alert driver. I kind of think this will require something much closer to artificial general intelligence; level 4 is just really difficult conventional programming.


It's not obvious that Level 4 falls within what one would call really difficult conventional programming. That level entails something like "in the event of any exceptional situation, find a safe stopping location and safely bring the car to a stop there," and even that alone seems incredibly hard.


Actually it doesn't matter if your cruise control is dumb or adaptive. If you have only cruise control, of either kind, then it's level 1.

And if you have lane-keeping but not cruise control, that's also level 1.

The difference between 1 and 2 is weird.


I'd still buy a self-driving car that refuses to drive on that road.


In the back seat of the Waymo there's a "Pull Over" emergency lever.


You can't always "pull over."


Lots of roads like that in Britain as well, and the speed limit is 60mph/100kph. Not uncommon for two cars on a single-track road to adjust speed so as to pass each other at a passing place without slowing down much, so at a closing speed of over 100mph. Perfectly safe for human drivers who know the roads.


This sounds like the sort of "perfectly safe for human drivers who know the roads" that actually results in a fair number of road deaths.


If you look at the accident maps, there are almost none on single-track roads and lots on twin-track roads. My hypothesis is that driving on a single-track road feels much more risky, so people pay more attention and slow down more on blind corners. Also, it's not possible to overtake, and a lot of accidents are related to overtaking.


Believe it or not there are tons of two-way roads like that just 30 minutes from Silicon Valley that self-driving cars could practice on. Here's an example: https://goo.gl/maps/1CVb7Mpiwv1VL2sd7


There're also similar roads 30 minutes from Silicon Valley that have all that, plus residences, pedestrians, parked cars, sheer cliffs, unclear driveway splits, and porta-potties, eg. https://goo.gl/maps/57jzzK6fvtCqvu5w5

Strangely I've never seen Waymo vehicles practicing on that. They're all over Mountain View, but I have never once seen them in the mid-peninsula hills.


Just have them drive up to the Lick Observatory and back.


That's just stunningly beautiful - the Czech countryside is something else!

I'd gladly buy a self-driving car that required some additional input on such a road and had additional aids to spot oncoming traffic I can't see behind the tractor that's a few hundred meters forward of the spot linked to. It would still be safer.

To really make things work, we need cars to be able to negotiate the way humans do over right of way, etc. There is a lot of non-verbal (and when that fails, very verbal) communication while driving. Currently, cars can't communicate with each other or with pedestrians, which limits the possibilities a lot.


You can replicate that without going overseas. Send that autonomous vehicle over the Golden Gate bridge, take any of the next few exits, and turn right. The street I live on is a paved horse path from the 1910s. No snowdrifts, but a lot of aggressive drivers angrily refusing to back up, which will be fun to see software deal with!


As someone who learned to drive in the city, those roads make me sweat bullets.

My grandpa, who drives on those roads primarily, sweats bullets in the city.

Maybe you’ll have different driving models to load in different scenarios …


My mother thinks nothing of driving on deserted roads in significant unplowed snow. She gets nervous on a dry, Texas highway at rush hour.


Yeah, that seems perfectly rational. There is nothing to hit on a deserted highway. Driving in traffic, on the other hand, is more stressful and has worse downsides.


> significant unplowed snow

Spinning out on a deserted highway and hitting a snowbank and getting trapped in your car kills a large number of people every year. Even with smartphones, calls for help can't always be responded to in time, resulting in death. (Have an emergency kit in your car if you live above the snow line!)

Driving in city traffic can be quite harrowing, but hitting another car at 20-30 mph isn't usually fatal. (Wear your seatbelts!)

The point the GP post was trying to make is that humans have different preferences, and what seems dangerous to one doesn't seem (and possibly isn't) dangerous to another. Humans are also notoriously bad at judging danger, e.g. some people feel threatened by the idea of wearing paper masks.


The computer doesn't have to be perfect; it just has to be better than a human.


Adding to this to really drive the point home: it doesn’t even need to be better than a human that’s good at driving. It only needs to be better than the average human driver. Anecdotally speaking, that’s not such a high bar to pass (relative to the topic at hand).


For general acceptance I think it has to be better than how good the average human thinks they are at driving.

Secondly, its dumbest errors have to be better than what the average human thinks their dumbest errors would be. If there is an accident and everyone thinks they would never have made this error, it will crush the acceptance.

Looking at the general accident stats and telling people that, on average, there are fewer deaths on the road, but that they might die in a stupid accident they would never have been in had they been driving themselves, will be a very hard pill to swallow. Most people prefer to have the illusion of control even if statistically it means worse expectations.


Level 4 is where most of the value is. If a system could drive in all cities and on highways, that's more than 90% of the benefit.


Agreed 100%. There will be special exit/on ramps built along highways and the trucks will largely just stay in their lane even if slower. It would cut the number of truckers needed by probably 50+%.


For depot to depot runs, sure. Most runs aren't that though, and require direct delivery from manufacturer to purchaser. Plenty of deliveries, for example in Chicago, basically happen off a residential street. Alley docking and turning around in these environments is challenging even for a human.

Add one more thing to all this and we're further away than I think most people realize: weather. Show me an FSD doing better than a human in the snow, or we're not really anywhere yet.



More likely driving on most highways in decent weather which is a big win. I'd pay for that.


The more I travel, the more I consider myself an SAE level 4 driver :)


This is legally required today. It's not necessarily a reflection on Waymo. Whether their system is perfect or not, they're legally required to do this until the government changes its mind.


California DMV regulations do allow testing of autonomous vehicles without a safety driver if certain conditions are met, spelled out in Title 13, Division 1, Chapter 1, Article 3.7, Section 227.38 [1]. The most technically challenging of these is that the car must be capable of operating at SAE level 4, which goes back to the OP's comment. CPUC licensing for commercial passenger services also allows this [2].

That said, I agree with others that this is the natural progression of testing rollout and doesn't tell us anything about the pace at which the rollout will occur, in particular whether it will be faster or slower than Phoenix.

[1] https://www.dmv.ca.gov/portal/file/adopted-regulatory-text-p...

[2] https://www.cpuc.ca.gov/regulatory-services/licensing/transp...


Yeah, the commenter straight up assumes that a person is present because the L4 tech doesn't work, when in reality there are legal, liability, and even user comfort reasons to have someone on board with this new pilot.


L4 with ability to phone home for remote assistance is good enough.

By the time L5 arrives people will have been happily riding around in vehicles with no steering wheels for decades. L4 cars that phone home less and less every year.

Eventually someone will notice that no L4 car has phoned home for a whole year and almost nobody will care. Just a footnote to an era that already feels taken for granted.


Is remote assistance good enough though? It probably works fine when you have a fleet of 100 cars on the road, but 10,000? 1,000,000?

How many of those cars have caused a traffic jam at a given moment because they’ve encountered a traffic cone? How long does each issue take to resolve? It seems like, in addition to the technical hurdles, there are many more logistical hurdles before this can be rolled out at scale.


When manufacturers (after taxi companies) start competing for this, the manufacturer that can have 1 remote driver per 100 vehicles will beat the one that needs 1 remote driver per 10 vehicles. A manufacturer might require massive halls with thousands of workers to pilot their fleet of cars, but so long as customers pay the bills, that's no problem. And customers will need to pay the bill through subscriptions on FSD packages.


> I still think that true level 5 (i.e. ability to drive autonomously everywhere with zero human oversight with a safety record equivalent to the median human driver) requires AGI.

This might be true. Most of the time (95%) I am on complete human-brain autopilot when I'm driving, but the other 5% needs my full focus and attention. I shut off the radio and tell other passengers to be quiet (if I have the time for it).


This assumes that the challenges that are hard for a human are the same challenges that are hard for a self driving car - that might be the case, but self driving cars may have some theoretical advantages such as 360 cameras/lidar and an ability to follow satellite navigation without having to take its eyes off the road.

Put another way, the 5% of times I need to focus are usually the times where I am somewhere new and don’t necessarily understand the road layout - which something like Waymo may avoid through mapping for instance.

It might be true, but plenty of problems that have been thought to require true AGI have later been found to not require it after sufficient research - for example it’s not long ago that we thought good image recognition was entirely out of reach.


Anybody who rides with me on a regular basis has come to recognize the sudden stop halfway through a word when I switch from autopilot brain to active driving. There are times when you need more focus on everything than others.


It's only AGI until someone achieves it. Then it becomes statistical analysis.


> Also, just a reminder that Waymo in Phoenix is nowhere close to being level 5

Because they are not even trying to be level 5. They've made it very clear that they will only ever be a level 4 company and that level 5 is not feasible.


Honestly, anyone who says L5 is bullshitting.

L4 is enough to be viable and safe, and is all that is needed.

In fact this level crap is bullshit. It's the speak of MBAs at Bain and McKinsey who think they understand tech, not engineers.

Real engineers don't stare at their debugging screens going "check out this data, is it L3 or L4?"

Instead engineers look at things like safety-critical interventions per kilometer, non-critical interventions per kilometer, accidents per kilometer, etc.
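
As a toy illustration (log format entirely invented), those rates are just event counts over distance:

  drive_log = [  # invented per-segment log: distance plus event counts
      {"km": 120.0, "critical": 0, "noncritical": 2, "accidents": 0},
      {"km": 300.5, "critical": 1, "noncritical": 5, "accidents": 0},
  ]

  total_km = sum(seg["km"] for seg in drive_log)
  for metric in ("critical", "noncritical", "accidents"):
      rate = sum(seg[metric] for seg in drive_log) / total_km
      print(f"{metric} per km: {rate:.4f}")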


> In fact this level crap is bullshit. It's the speak of MBAs at Bain and McKinsey at who think they understand tech, not engineers.

So how do you explain, succinctly, the difference between a car that can hold 55 mph (but do nothing else automatically) without driver intervention and one that can go from 25 mph to 55 mph and change lanes without driver intervention?

The differences between levels are drastic in terms of implications on overall utility of the technology. There's a reason the terms are used.

> L4 is enough to be viable and safe, and is all that is needed.

Needed for what exactly? Each level has its own benefits. That's exactly the reason the levels were established. I want to be able to get into a car, put in an address, and fall asleep for the whole trip. When do I get that? L4? L5? If I can't do other tasks while the car is driving, then full autonomy is essentially pointless.

> not engineers.

Precisely. The levels have nothing to do with engineers. It's about understanding the benefits for the population. For example - can I fall asleep in the back of the Tesla autonomous car? No, because it's not L5. That's the point.


> When do I get that? L4? L5?

It doesn't really matter so long as there are fallback drivers in all situations. A lower-level autonomy car would remotely connect to the backup driver (in a third-world call center) several times per hour in order to behave like a higher-level one.

A higher-level one would connect more rarely. The only difference I notice as a customer is that I pay more out of the box for the higher-level car and less as a subscription, while the lower-level car costs less out of the box and has a higher subscription fee because the human costs are higher.


It's not an engineer's thing at all. The classifications are very specific differences in the overall system. From an engineer's point of view they are just creating a fully autonomous car. L4 -> L5 is more about how many scenarios that fully autonomous car has been tested through.

https://en.wikipedia.org/wiki/Self-driving_car#Classificatio...


I think the key thing people need to realize from the SAE definition [1] of the levels is that they represent designs of the system rather than abilities of the system. I could slap a camera on my dashboard, tell the car to go when it sees green pixels in the top half of its field of view and stop when it sees red pixels. Then I could get out of the car and turn it on, and for the 5 seconds it took for that car to kill a pedestrian and crash into a tree, that would be level 5 self-driving.

So when people talk about a particular company "achieving" level 4 or level 5, I don't know what they mean. Maybe they mean achieving it "safely", which is murky, since any system can crash. Maybe they mean achieving it legally on public roads, in which case it's a legal achievement (although depending on what regulatory hoops they had to go through, maybe they had to make technical achievements as well).

[1] : https://web.archive.org/web/20161120142825/http://www.sae.or...
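
A minimal sketch of that deliberately absurd controller (thresholds invented): it is level 5 in design scope - no driver, no geofence - while being hopelessly unsafe, which is the point.

  import numpy as np

  def decide(frame):
      # frame: (H, W, 3) RGB array from the dashboard camera.
      top = frame[: frame.shape[0] // 2]        # top half of the field of view
      red, green = top[..., 0], top[..., 1]
      if (green > 1.5 * red).mean() > 0.01:     # enough "green pixels": go
          return "go"
      if (red > 1.5 * green).mean() > 0.01:     # enough "red pixels": stop
          return "stop"
      return "coast"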


> L4 -> L5 is more about how many scenarios is that fully autonomous car been tested through.

Not really. L5 is impossible, period.

What I think will happen is L4 with 99.999% of cases covered, with the car coming to a safe stop for the remaining 0.001%, assuming there is a way to safely stop.

L5, which means 100.000% covered, will not happen, but the PR people will continue to use the term.


> Not really. L5 is impossible, period.

Agreed.

> L5 which means 100.000% covered, will not happen, but the PR people will continue to use the term.

Which is precisely why so many people are critical of the term "fully autonomous".

> 99.999% cases covered

What cases? The point is that edge cases are the issue with autonomous driving. I can fall asleep on a train or a plane because I know there is a human conductor who can handle the edge cases. This doesn't exist with L4. Everything else that doesn't let me fall asleep (read a book, look at my phone, etc.) is only marginally better.

> assuming there was a way to safely stop.

That's a pretty damn strong assumption.


I've always thought of L5 as a car that can operate via its sensors + onboard computing alone, at least as well as a median human driver.

No communicating with a server to download maps, no perfect performance, just a car that knows the state's traffic laws, driving a brand-new road in any reasonable weather, and getting into fewer crashes than a human would.


>It's the speak of MBAs at Bain and McKinsey who think they understand tech, not engineers.

Really? Because the L5 claims come more out of the Ubers and Teslas than the "MBAs".


It's usually the MBAs and PR people at those companies, not the engineers, that use that term.


My point is that "the MBAs" don't have a monopoly on hyperbole.


Level 5 isn't feasible as much for legal reasons as technical ones.

I don't think any company wants to sign off on the notion that their software will handle all classes of problem, even ones they have no data for at all.


Except Tesla, who are going to be "L5 by the end of the year" every year!


I'm pretty sure Tesla's whole strategy is to overpromise so much that it's as much a legal liability to not have L5 as to have L5.


Tesla brand sells a lifestyle at this point, not just a vehicle. They have to keep pumping it.


> Level 5 isn't feasible as much for legal reasons as technical ones.

That point is taken into account under J3016_202104 § 8.8:

“There are technical and practical considerations that mitigate the literal meaning of the stipulation that a Level 5 ADS must be capable of ‘operating the vehicle on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle,’ which might otherwise be impossible to achieve. For example, an ADS-equipped vehicle that is capable of operating a vehicle on all roads throughout the US, but, for legal or business reasons, cannot operate the vehicle across the borders in Canada or Mexico can still be considered Level 5, even if geo-fenced to operate only within the U.S.”.


> human safety driver to be remote, as is the case in Phoenix.

It's not a remote human safety driver; it's more like a remote human safety coach.

The difference is giving high level directions vs directly driving the car. They don't remotely drive the car because that would obviously be super dangerous w/r/t connection stability/latency.


Notably, SAE level 5 is actually well below the standard that you've laid out here. The vehicle simply has to be able to make itself safe in situations that it can't handle. This allows room for remote assistance or a human takeover in certain situations.


> I still think that true level 5 ... requires AGI.

In case anyone else was wondering what AGI means, it's Artificial General Intelligence. [1]

1: https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Definitionally, the achievement of self-driving vehicles does not require AGI. Doing one task very well requires only what's called weak AI.


Part of driving well requires a diverse array of abilities, right? You know what is litter and what is debris because you can make a guess about material properties based on some observations: looking at something that moves without being touched and is translucent, you probably conclude that it's a plastic bag of some sort and not a hazard. Similarly, you probably use a wealth of experience to judge that a small piece of tire is not a hazard, but a chunk is a hazard and a whole one definitely is.

Or, on seeing an anomalous <children's toy> enter the road you can probably guess that a child might follow shortly after.

I'm not suggesting that the problem cannot be solved without AGI, but you can see why some people might think that though, right?

My personal feeling is that we shouldn't be setting the bar at making a car that can handle any situation anywhere way better than any human at any time, but that we should also try to make roads that are more suitable for self driving vehicles. I'd rather we move to driving agents that don't get bored, frustrated, or angry.


I think the engineer's answer to the child entering the roadway would be: the car should never drive at such a speed that, if a child were to enter the visible zone, it could not swerve and slow enough to avoid hitting them, forget the toy. After that we can move the goalposts and say it's a FAST child on a bike - but then the reasonable response is that a human driver might also have hit the biking child. Then, of course, we get into the ethics of fault for the accident.
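
As a back-of-envelope version of that rule (reaction time and deceleration are assumed values, not from any real system): cap speed so that reaction distance plus braking distance fits within the visible zone.

  import math

  def max_safe_speed(sight_distance_m, reaction_s=0.5, decel_mps2=6.0):
      # Largest v (m/s) such that v*t + v^2/(2a) <= sight distance:
      # the positive root of v^2 + 2*a*t*v - 2*a*d = 0.
      a, t, d = decel_mps2, reaction_s, sight_distance_m
      return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

  print(round(max_safe_speed(30.0) * 3.6), "km/h at 30 m of visibility")  # ~58 km/h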


My agreement with you falls largely under my last paragraph. I'm trying to illustrate a couple of examples where driving as a human, on roads built for human drivers, requires perceptive powers and understanding beyond 'merely' driving safely - a sort of holistic understanding of the world. If your goal is to make a better-than-human substitute driver, then I don't think it is a completely unreasonable position to believe you'll need some level of AGI. Of course, as we figure out how to do concrete tasks and incorporate them into a system, they'll stop being considered traits that would require general intelligence, but I suppose that is a different discussion.

And your example isn't moving goalposts; it's just another legitimate example of a situation that's got to get figured out. If you think that understanding that some kid learning to skateboard nearby could fall a surprisingly far distance, and thus that you should exercise caution, or being aware of factors that imply fast-biking children (say, an adult and a child imply the potential for another fast-moving child on the same trajectory) - that this sort of situational and contextual awareness is critical for proper driving - then yeah, that would be a reasonable-sounding argument to support "I think self-driving cars will require some level of progress in AGI".

That's all I'm long-windedly getting at.


"Making roads more suitable for self-driving vehicles" will make roads much worse for pedestrians and cyclists if you're not very careful.


Yes, this is very important to keep in mind, thank you. I wonder what sort of things one could do to make roads easier for automation and still serve regular people trying to be outside.


Stacking shelves in a warehouse is one task. Driving is not one task. There are too many corner cases for a modern-day AI system to perform as well as a median driver in, say, 95% of environments and settings in North America and Europe. I think the argument is that such a system might as well be AGI.


The idea that an AI must have the ability to learn how to do anything in order to learn how to drive seems like an extremely pessimistic and misguided goalpost. That is also not how iterative development works.


I think ML is fantastic, and combined with LiDAR, inter-vehicle mesh networking, and geofenced areas where humans take over, we could quickly arrive at mostly automated driving without trying to reinvent the human brain. We should also be more focused on enforcing established legal limits in newly manufactured cars. Just preventing someone from exceeding the speed limit or driving the wrong way would start saving lives immediately. It would also allow traffic flow to be optimized, and eventually prioritize emergency traffic or allow metro areas to be evacuated efficiently for things like natural disasters.

It would be great to see the dawn of AGI, but I don't think it will ever happen with classical computation. GPT-3 spits out nonsense even when fed the largest and easiest-to-parse portion of reality, and I have not seen any ML approach replicate the abilities of something as simple as bacteria. ML requires constant validation from human operators, so the same is going to hold true for ML-powered vehicle navigation.


Driving is a set of tasks, but not AGI. AGI would be if it could drive and then also learn to write poetry without any code update.


There's no such thing as a remote safety driver.

Cell data connections aren't reliable enough, and having the car emergency stop (and potentially get rear-ended) when it loses signal wouldn't be acceptable.


> ability to drive autonomously everywhere with zero human oversight with a safety record equivalent to the median human driver

I think this statement is off the mark. Comparing to a human is hard. Not many accidents happen because people are bad at driving; driving is honestly pretty easy. They happen because people are distracted, tired, drunk, or perhaps just assholes driving recklessly for thrills or for speed.

A self driving car might be a lot "worse" than the average human driver but could still be a huge improvement in terms of expected safety record for driving overall.

They don't need to be better than humans, they just need to be not shit 100% of the time unlike humans.


Not sure about the AGI requirement. The current systems will always need the heavy involvement of human intelligence to rescue stuck cars, drive in new areas, or monitor for changing driving conditions and update the driving model. There does seem to be at least some hope these systems will be able to run a true driverless taxi service with minimal geofencing. On the other hand, a human can go to a new country with different road markings, signage, rules, and traffic flows and be able to drive safely pretty much immediately, or maybe after a quick Google search. That would truly require AGI.


Would it be easier to just build a "futuristic" test city designed around the idea of self-driving cars, to make it easier for them to work? If self-driving cars are so great, people will move there naturally due to improved quality of life. Trying to make self-driving cars work in current cities is like building around crippling technical debt.

Seems like Google and a few other tech companies could easily bootstrap a small city by planting some offices somewhere


If we're building a futuristic city from the ground up, wouldn't it be better to rely on mass transit? Self driving cars still have almost all the same problems that human-driven cars have.


A futuristic city would have public transport and bikes. 5,000 pounds of metal for transporting one person is horribly inefficient.


> This tells me that we’re still a long way from full level 4

The only thing it tells me is that regulations are more lax in Arizona than California.


> I still think that true level 5 ... requires AGI

I agree today's AI tech is a long way off from completely supplanting a human driver. I'm surprised the average consumer I talk to about this seems to think we're on the cusp.

But as vehicles with neural nets become more prevalent I expect we'll see the problem morph as it gets tackled from other angles as well. e.g. Self-driving corridors with road infrastructure aimed to improve AI safety (whether that be additional technology, modified marking standards, etc).

Once upon a time street signs with speed limits, curve warnings, and such didn't exist. After faster cars supplanted horse-drawn carriages, highways became a thing. Eventually when the only reason humans drive is for recreation (e.g. off-roading) the problem from the car's perspective will look somewhat different than it did during the transition.


Is this not a legal requirement?


I was at Yellowstone this last week. https://ridebeep.com/ had some shuttles there doing pretty straightforward navigation on one straight road, with human minders as well. The day I went to try them, it was mildly misting, the drops just barely visible on the glass. They were not running the machines because the shuttles kept stopping, initially seeing the water drops as obstacles. The humans had made the call that it just wasn't an acceptable experience.


> This tells me that we’re still a long way from full level 4 (and certainly level 5) autonomy in a busy city like San Francisco.

I think it's in part regulatory, relating to paid rides in autonomous vehicles in CA, which is why Cruise is dodging it with passenger-carrying but unpaid rides that are fully driverless. I can't find a good summary of the rules, but I infer from the coverage I've seen that the threshold for having no safety driver when offering paid passenger rides is different from that without paid passengers.


You are making a ton of assumptions about what is driving this decision.


> The edge cases requiring immediate human attention are still too frequent for the human safety driver to be remote, as is the case in Phoenix.

I think you can only determine that if you know how many times the human attendant takes over.

Just having a human behind the wheel doesn't tell you much, I don't see how to get full self-driving without an intermediate step of human supervision.


>I still think that true level 5 [...] requires AGI.

Oh, but of course! I'm still surprised by people who think otherwise. The number of corner cases that you have to solve while driving is pretty much limitless; you cannot "train" something on that, heck, some humans are not even fit for it. We are *FAR* from truly autonomous vehicles.


The key metric is probably "obscure incidents" per mile driven, probably classified manually into various levels of danger. Once the "incidents that lead to disaster" count statistically reaches 0, it will definitely roll out en masse without the need for safety drivers.

My guess is that they know how many miles they have to drive in order to reach that number, and it's a whole lot. Statistics and math stuff, but you can probably pin it down to the month or quarter based on trends. Either that, or it's about driving every road in the city in all sorts of weather / traffic / pedestrian conditions until there's no issue. This isn't generalized AI driving (L5), but it's a much more logical approach to getting autonomous driving coverage where it's the most valuable.
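
For a rough sense of the scale involved (all numbers invented), the statistical "rule of three" says that zero observed disasters in 3*N miles gives roughly 95% confidence that the true rate is below 1 per N miles:

  # Rule of three: zero failures observed in 3*N miles => ~95% confidence
  # that the true rate is below 1 per N miles. All numbers invented.
  target_miles_between_disasters = 100_000_000   # ballpark human fatal-crash rate
  required_miles = 3 * target_miles_between_disasters

  fleet_size = 1_000
  miles_per_car_per_year = 50_000
  years = required_miles / (fleet_size * miles_per_car_per_year)
  print(f"{required_miles:,} failure-free miles -> ~{years:.0f} years for this fleet")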

My guess is that each city will involve a safety-driver rollout until they have enough data to know the incident rate is zero. There might be a lot of variance between cities - map data, weather conditions, customs, etc. Then remove the safety drivers.

I'm sure they also are experimenting with disaster/safety protocols while they do the roll out.

My prediction is that waymo will be a mainstream option within the next 5 years.


It doesn't matter for the rider though, unlike a Tesla where you still have to keep your hands on the wheel.


I'm guessing that if all cars became Waymos right now, there would probably be a 99% reduction in vehicle fatalities.

But people have a hard time accepting the notion that an unmanned vehicle may be partly responsible for the remaining 1%.


This tells me that you can show progress and still draw magnifying-glass criticism from people.

You can only please some of the people some of the time.


I suspect the main "roadblocks" are about the environment for the SDC.

All traffic signs and signals need to be machine-readable at a distance. That is, a traffic light might beam out "I am light 'SFTL783', will be green for 8 more seconds". The location and other data for SFTL783 are in a preloaded database. Same for speed limits and other signage.

An updated 3-D map of all roads would also help a lot, as would car-to-car communication systems.
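
A sketch of what such a beacon might carry (field names invented for illustration): the light broadcasts only its identity and phase timing, and the car joins that against the preloaded database.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class SignalBeacon:
      signal_id: str       # e.g. "SFTL783", the key into the preloaded database
      phase: str           # "green" | "yellow" | "red"
      seconds_left: float  # time remaining in the current phase

  beacon = SignalBeacon(signal_id="SFTL783", phase="green", seconds_left=8.0)
  # The car looks up beacon.signal_id in its map for location and lane data,
  # then plans against phase + seconds_left.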


Ah yes, the SF elite will gladly shell out fortunes to ride around in a car just for the opportunity to witness the gross power of AI! Not to mention it's a nice Jag. I wonder if they are hiring models for the "autonomous specialist" role?


They are not legally allowed to not have a human driver onboard. It's a legal requirement that is not a relevant signal one way or another.


All that tells you is that they are cautious in famously litigious California, yet confident enough to actually launch their service there. In the unlikely event something goes wrong, even if it is not their fault, they had a person onboard. It's the difference between a minor incident and a class action lawsuit with millions/billions in damages. The law is by far the biggest obstacle to level 4 and 5 driving. So, launching in San Francisco is kind of a big step for them.

Once it has proven itself for a bit of time and they know how to set up their geofences and which streets to avoid, they can probably get rid of the person. That would be evident from that person never actually doing anything, long before that.

Remote safety monitoring is a smart feature, but it's not going to help much with the type of accident people worry about most, where something unexpected happens very rapidly. AIs are actually really good at dealing with those situations - arguably better than humans, for whom they are probably a leading cause of traffic fatalities. The way Waymo operates without hands on the wheel (i.e. levels 3 & 4) basically means they have safety nailed already in those kinds of situations.

There's no way a remote person would be quick enough to intervene. That person is there for other reasons. It's complex traffic situations that cause AIs to get stuck occasionally that require human intervention. Usually this is less of a safety concern and more of an annoyance.

Interventions per hundred thousand miles traveled is the key metric that Tesla, Waymo, and other companies use for this. Both companies boast some pretty impressive statistics, though of course those are hard to confirm independently.

It's interesting how Tesla and Waymo approach this problem in different ways. Waymo goes for level 4, but only in areas they've thoroughly vetted; it's taken them many years to start moving beyond relatively safe and easy Phoenix. Tesla, on the other hand, offers their AI features just about anywhere they can, positioned as level 2: really aspiring to level 4, but requiring hands on the wheel just in case (which makes it level 2). Level 2 is a legal tool to dodge a lot of legislation and bureaucracy. It basically means that if anything goes wrong, the driver is at fault. It will stay in place a long time for that reason. Liability, not safety, is the key concern here.

Arguably, Teslas would probably do pretty well in the areas Waymo has vetted as well. But they are not in the ride-sharing market and need to sell cars worldwide, not just in specific geofences in Phoenix and San Francisco. I wouldn't be surprised to see Tesla offer a similar ride-sharing service in similarly geofenced areas at some point to get to levels 4 & 5. I suspect that race is a lot closer than some people seem to think.


Beating humans doesn’t require AGI. Just not drinking, texting, or falling asleep will get you halfway there.


Waymo is currently the only SAE Level 4[0] self-driving car in use, right? Nobody touches the wheel, even though it's limited to two cities.

Their technology seems an order of magnitude above the rest, although factoring in its cost could make it look worse. (Can it turn a profit, when including the development costs?)

Tesla FSD is at best SAE level 3 (can need human fallback), and at worst level 2 (needs constant human monitoring).

[0]: https://blog.waymo.com/2020/10/revealing-our-approach-to-saf...


Most of the commercial value is achieved at Level 4.

I'm curious what all the folks who claimed this was decades away are thinking?


> I'm curious what all the folks who claimed this was decades away are thinking?

What is the "this" you're talking about here? I don't think there were many skeptics out there saying that it'll be decades before a taxi service will be able to launch a small, manned trial in a single municipality. Given that it's manned it still isn't quite at Level 4.

The cynicism (and I'd consider myself a mild cynic I guess) was and is around the notion that the vast majority of us would be sat in self-driving cars by now. That was always wildly optimistic. We're making progress and that's great! But there were a lot of breathless predictions years back that have not come to fruition.


This is the 2nd market and the other is unmanned.


This market is manned and a serious challenge. The trial in Arizona is in an extremely quiet, predictable suburb, as well as being small scale; unmanned, but not as serious a challenge.

Each of these is different, but neither's existence by itself proves unmanned driving in challenging locations is right around the corner. It's kind of an exercise in Bayesian statistics: how much this shifts one's belief that unmanned taxis in serious locations are coming depends on one's prior probability.


SF seems like a pretty serious location to me. Granted, it's still manned, but it makes sense that they'd start out manned in any new environment out of an abundance of caution. It seems likely that we could see a switch to unmanned in SF in the next 1-2 years if things go well. I have to think at least Waymo believes this is likely; otherwise they wouldn't be doing this.


Okay... let me know when their market is the contiguous United States.


Surely this is just moving the goal posts. If they can launch in Chandler and in San Francisco then a huge amount of the contiguous United States is on the table.


Nope, not even close. There's a very specific reason they've picked Chandler, and it's that there's relatively little weather and the roads are fairly simple (among many other things specific to Chandler).

The vast majority of the contiguous United States is decidedly not "on the table" as of now.

"The contiguous United States" is still a decade (or more) away. It may literally never happen.


> The vast majority of the contiguous United States is decidedly not "on the table" as of now.

So the vast majority of the US has either significantly worse weather than SF, or a significantly harder ODD than SF?

Could you elaborate on that?


It generally doesn’t snow in SF or Chandler. I personally like snow, so I wouldn’t describe that weather as significantly worse. But my understanding is that snow makes lane keeping hard, and then there are all sorts of edge cases like streets that aren’t plowed and require special driving techniques, people placing cones to reserve parking spaces, etc.


This is why I wouldn't be surprised if they choose Pittsburgh next.


I feel like San Francisco is a good progression - still generally good weather, but much more crowded, more traffic, more special cases, pedestrians and bikes, one-way roads, topography, limited visibility due to hills and no-setback buildings, construction, buses, etc. It's a significant leap in urban complexity, probably greater than 98% of the rest of the USA.

Regarding the weather, lots of people have mentioned snow, but much of the Midwest, Northeast, and especially the South can have sudden torrential rain. I haven't researched it but I would guess that a Florida rainstorm would be hard for both radar and visual guidance. I predict their next city will be Orlando, in partnership with Disney. Then somewhere like Boston (harder - older street pattern and more snow) or Philadelphia (easier). Each of those 3 would "unlock" new territory they can cover. I predict that NYC will be one of the last areas "unlocked".


No sorry, the vast majority of the US has significantly worse weather and/or more complex traffic than Chandler, AZ.

I didn't make any statements about SF traffic or weather.


I completely agree. I was pretty critical of them when they were just a tiny slice of Arizona, but going from n to n+1 is really major progress.

The fact that the +1 is a city as complex as SF is a really good sign as well. If they can handle SF, they can handle Charlotte, Atlanta, LA, and many other major metros just as "easily".

I'm a lot more optimistic for their rollout now.


Many places have fairly strong summer storms.

SF does not. Also, it looks like it almost never rains in Chandler.

We won't have all-weather, reliable level 4/5 until an onboard computer has the equivalent power of a human brain.


That's the point of this comment chain. They don't have to make it work everywhere in the contiguous United States to have a useful product. The (robo)taxi market is concentrated in big metro areas and that's what their focus is.


The thing is, with apps providing start and end point, they don't even need to be able to handle a full city to get a benefit. They can geofence really aggressively, and the moment they can ditch the safety drivers from a sufficient subset of journeys based on routes requested, they have an advantage.
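
A minimal sketch of that dispatch check (using shapely, with a placeholder rectangle standing in for a real vetted geofence):

  # Only dispatch a driverless car if the whole planned route stays
  # inside the vetted geofence; otherwise fall back to a safety driver.
  from shapely.geometry import LineString, Polygon

  geofence = Polygon([(-122.45, 37.75), (-122.40, 37.75),
                      (-122.40, 37.80), (-122.45, 37.80)])  # placeholder box

  def can_go_driverless(route_points):  # route_points: [(lon, lat), ...]
      return geofence.contains(LineString(route_points))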


They can cherry-pick the easiest routes. Discussing the entire "contiguous USA" is pointless.


Exactly. They can outright ignore really hard markets, and be present in markets that have a decent percentage of routes they can handle, falling back to dispatching regular cars on routes their full automation can't handle.


Flimflam man Musk stated in 2015 that their robotaxis would be up and running by the end of 2015...

and he made similar claims each year after that.

You can thank him for the hype and cynicism.


We look right on track for my prediction of 2050


AFAIK no one recently claimed level 4 with a safety driver was decades away, and I was working on self-driving until a few months ago, so I was kinda "plugged" into the news. A handful of serious companies have plans and funding to deploy L4 taxi fleets or L4 features to consumer cars in 4-5 years.

There's no commercial value at L4 whatsoever if you still pay a safety driver. It's usually even more expensive per hour than a normal human driver and the cars don't work under adverse weather conditions.


When someone says 'in about 5 years' it means they have no idea when.


Don't have to pay safety drivers with L4.


The program was first publicly revealed nearly 11 years ago. People saying this project would take decades aren’t wrong.


> Most of the commercial value is achieved at Level 4.

There are two markets:

- Level 4 driverless taxis (~$500B market)

- Level 5 driverless cars for every human being (~$10T+ market)

So, most of the commercial value for Uber, Lyft, etc. is from L4, but L5 is where the most overall economic value is.

But L4 has to be achieved in all locations, otherwise that TAM is highly variable.


What does L5 get you that L4 does not (commercially)?


Did anyone claim _this_ was decades away? The claim is usually about when self-driving cars will actually be viable for anyone to use in arbitrary areas, not when experimental pilot programs are launched in individual cities.


That things are progressing about as expected. More specifically, Waymo is doing a bit better than my general expectations, Cruise is about on par, and the rest slower.

Waymo has been at it for a bit over 12 years now. We'll have to see if Waymo's progress ramps up after San Francisco, but to me this is still looking like an 80/20 problem, with the added twist that each new region they expand to will have its own unique 20% to learn that adds another 80% to the schedule. But that just validates Waymo's approach even more in my eyes.


AFAIK most of the cynicism around self-driving cars is not that it is impossible, but rather that it doesn’t solve the problems it claims to solve. E.g. self-driving cars won’t solve traffic (buses do), they won’t be safer (trains are), they won’t be convenient (unless you are rich enough to afford one), and at the end of the day, they are still massive things that take tonnes of space and infrastructure and pollute a bunch. Just like human-driven cars.


I would think most commercial value comes when you eliminate the human labor from driving, which is the dominant cost of transport when there are fewer than ~10 passengers. What makes you think Level 4 is the most economically impactful threshold?


I think almost no commercial value is at level 4. In fact, it could be negative value, because you need the expense of a "digital driver" plus the expense of a human driver.

It's basically level 5 or bust.


Level 4 (e.g. highways in good weather) is something that people buying cars will pay a significant amount for as an option. It does nothing for those who want robotaxis. But the former don't care about the latter.

You have multiple groups of drivers, including those who appreciate self-driving assistance on long, boring highway stretches (and may not even drive in cities that much), and those who never want to get behind a wheel or perhaps ever own a car.


Level 4 doesn't necessarily mean just highways - it could also easily include suburbs with well-designed roads that have been mapped in excruciating detail.

You could force the car to avoid certain intersections with lots of close calls, or give the car advance info about unexpected objects detected in the road by cars up ahead, etc. which would make the task a little easier.


A selection of highways in good weather is just one example that seems particularly easy (relatively speaking), well-defined, and genuinely useful.


Current California regulations for remote operators are identical regardless of whether the car is considered SAE level 4 or 5. I'm not sure what constitutes "continuous monitoring" there (I could interpret that as anything from eyes on the road at all times down to just confirming that positive state-of-health messages arrive).

Apart from regulations, level 4 doesn't require a remote operator monitoring at all times. It just requires them to be available when the car recognizes it can't handle the situation. Depending on how frequently interventions are needed, that could mean a remote operator for every car, or one for every 10 thousand.
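
The arithmetic behind that spread is simple. A back-of-envelope sketch with invented numbers:

  # Average number of remote operators kept busy by a fleet.
  # All inputs below are invented for illustration.
  def operators_busy(fleet_size, interventions_per_car_hour, minutes_each):
      return fleet_size * interventions_per_car_hour * minutes_each / 60.0

  # 1,000 cars, one intervention per 5 driving hours, 2 minutes each:
  print(operators_busy(1000, 1 / 5, 2))  # ~6.7 busy on average

You'd staff above that average to absorb queueing variance, but the point stands: the operator-to-car ratio is driven almost entirely by the intervention rate.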

The important part about Level 4 is that the car recognizes its own limits and doesn't act outside of them. How practical those limits are is completely undefined by SAE. Level 4 is a continuum whose limit is level 5. We will definitely be seeing practical commercial value as we approach level 5 but haven't yet gotten there.


How is there commercial value if you still need a driver in the car?


You don't need a driver in the car for level 4. That is level 3 and below.


Most commercial value is at Level 2 for quite a while.


Level 2 is becoming a commodity. Its commercial value will fall to zero very quickly.


Level 2 varies dramatically by car maker. On the low end you have companies like Ford and Honda doing lane keep cruise control, and on the high end you have Tesla which does automatic lane changes, takes on/off ramps, and stops for traffic lights. Tesla also charges a high premium for their software making it basically the opposite of a commodity.


I disagree. Only the most basic highway lane keeping is a commodity, and that is only a small part of overall driving.


Those SAE levels apparently don’t specifically mention the geographic range the vehicle can operate in [0], but at some point that’s pretty important. Having “full automation” on a very tiny section of roads is hardly what I would call “orders of magnitude above the rest.”

[0] edit: that's incorrect; L4 and L5 are primarily distinguished by geographic range.


The difference between SAE level 4 and 5 is explicitly whether the autonomy is geographically limited or not. https://en.wikipedia.org/wiki/Self-driving_car#SAE_Classific...


Am I the only one who finds the distinction between L4 and L5 abysmal? Also, the L5 description, at least in the Wikipedia article, is much vaguer than the rest.


In terms of convenience, it can be irrelevant.

However, in terms of technology, level 5 is a big step up.

Consider for instance that a level 4 system can assume that the ground is purely flat (an assumption that Tesla’s FSD vector space makes), while a level 5 system needs 3D information to navigate some vertically-diverse terrain.


There is a full document available on the SAE website with full descriptions and technical details. It's much better than the common summaries.


Still, Level 3 can be more "advanced" than Level 4.


How? Level 3 requires a driver who needs to take over when alerted. Level 4 is fully driverless.


No. It's fully driverless in geographically limited areas.


I'm aware. It's really between "maybe works, maybe doesn't, everywhere" (L3) and "works with no driver in a defined area" (L4). I consider the latter more advanced, as they are taking full responsibility for your safety.


It depends. If I could choose one car for personal use, I'd take any modern adaptive cruise control + lane-keep assist system over an L4 that only worked in one city (even if it's a major city where I live). I'm not really sure how you determine which one is "more advanced," but I would consider the "level of automation" to be the portion of my normal driving habits that can be automated.


You're talking about personal driving, which is a different use case than robotaxis. For that, yes, you're better served with an ADAS system. It will take a while for L4 systems like Waymo to trickle down to passenger cars.


The point is that a "level 4 system" like Waymos is never going to trickle down to personally owned vehicles, because nobody wants to pay the hardware premium required for Waymo's autonomous system if it will be locked down to a specific city with specific routes.

Which is exactly GC's point – they don't want a car that can drive itself in Phoenix and SF and that's it. Most people either want a car that can make their life easier for a huge portion of driving but still require them to be in the driver's seat (e.g. Tesla's Autopilot) or a car that can actually drive itself basically anywhere (e.g. Tesla's FSD). Waymo's Level 4 just isn't that appealing for a personal vehicle.

This is why the SAE level system is poorly thought out and not particularly useful for anything except arguing over minutiae.


Any L4 autonomous system is capable of adaptive cruise control anywhere.


Expanding the geographic range for Waymo is straightforward: do the same thing in a new place (plus new hurdles like snow, but that isn't currently the limitation to growth).

It isn't clear how Tesla goes from "FSD" to "full automation". They are working on the "draw the rest of the owl" step.


I'd say the opposite. It is clear how Tesla goes from where they are to "full automation" – they just improve their models. Easier said than done, sure, but there is no conceptual leap required.

Meanwhile, for Waymo to jump from level 4 to level 5 requires handling edge cases which it is actually not clear can be handled by the system Waymo has built. In huge swathes of the world, weather conditions are not conducive to a system that relies heavily on LIDAR / roads change too much to be effectively mapped like they've done in very stable metropolitan areas like Phoenix and SF / etc.

Said another way: our existing roads are built for entirely visual agents (humans). Getting a system that heavily relies on non-visual sensing (e.g. LIDAR/Radar) to work well on a specific subset of those roads is clearly doable, but that doesn't mean it can be generalized to our entire automobile infrastructure.


At some point "doing the same thing in a new place" is not going to be economically possible, unless the company is somehow able to continue getting money to burn (or unless they get much much better at bringing new places online).


FSD is still technically level 2. In Tesla's eyes and the law's eyes, you are driving and are liable for what happens.


It's also (and this is important for Tesla owners to keep in mind) not-italicized-technically level 2.

As in, "This system is known to degrade in ways that require immediate human manual intervention under risk of serious injury or death. Do not operate it without constant driver supervision and preparation for takeover."


In the automotive industry we called this "level 2.999." I've moved on from automotive now, but in my experience nearly all of the new vehicle features in development, big or small, are developed at that level.


SAE levels are pretty garbage.


It's fantastic to see Waymo's progress. SF is a real nightmare to drive. If they can nail it there, that's 2/2 for busy, urban street driving (SF) and "boring" suburban driving (Chandler, AZ). They've been quietly very confident of their tech, but this is a real test.

Tangentially, I've also noticed that Waymo has picked up pace ever since the recent leadership changes. They are publishing more blog posts, offering more insights into their tech and generally seem to have increased their PR game. I wonder if that was a mandate from Alphabet leadership to show some urgency.


SF is hardly a nightmare to drive. Try Boston during a winter storm.


I think people vastly overestimate the challenges of weather conditions for self driving. With modern car tech (traction monitoring, ability to redirect torque to a specific tire, ABS, radar) an automated car is going to have an easier time navigating snow/ice/rain than a human driver.

The real challenges when navigating city streets are the human ones – delivery vehicles blocking lanes, a municipal worker fixing a manhole with a single cone to redirect traffic, pedestrians/bicyclists appearing out of nowhere, no one following traffic signs. This is the kind of stuff that tests "intelligence".


> I think people vastly overestimate the challenges of weather conditions for self driving.

This remark makes me wonder if you've ever lived in an area that actually experiences winter.

Around here, dead of winter, there are no lines visible on the streets. Heck, after a good snow storm the lanes are basically a function of group consensus.


This is a situation where automation has the advantage. With detailed position information and detailed maps, the fact that the lines on the street can't be seen is irrelevant. The car didn't need them anyway.

(noting that Waymo requires full detail maps to be able to drive an area, including all signage).


So, couple things.

First, it's optimistic to assume that data is accurate and up-to-date.

Second, I can't emphasize my "group consensus" point enough.

Anyone who's driven in real world winter conditions has seen a day where three lane roads turn into two. Or lanes form in the shoulder. It'd be actively dangerous to insist on driving according to the underlying lane markings during those types of road conditions.

Maybe in a world where all cars are autonomous and using map data that could work. In reality it really doesn't.


Unless you're Tesla and you're using computer vision to determine where you are on the road.


Tesla is using maps too. Source: the recent Tesla AI day https://m.youtube.com/watch?v=j0z4FweCy4M


> With modern car tech (traction monitoring, ability to redirect torque to a specific tire, ABS, radar) an automated car is going to have an easier time navigating snow/ice/rain than a human driver.

Huh, what? Human drivers can already take advantage of all of those, and they still find snowstorms and torrential rain challenging.

The challenge is understanding what you see (and hear), and dealing with very noisy and limited--sometimes actively misleading--inputs.


Usually because they are driving far too fast for the road conditions.


How well do sensors and vision systems handle winter conditions like snow and lack of lane markers?


I suspect it will know where the lane markings are better than human drivers. They are mapped ahead of time and the car can likely localize itself via other landmarks to determine where they are without being able to see them.

The harder part is driving like a human and detecting that a path has been made in the middle of two lanes in heavy snow and not obeying the lines at all.
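
The projection itself is plain 2D geometry once localization gives you a pose. A toy sketch (assuming a simple x-east/y-north map frame with the car's x-axis pointing forward):

  # Transform a pre-mapped lane point from map frame into the car frame,
  # so the car "knows" where the lane is even when snow hides the paint.
  import math

  def map_to_car_frame(px, py, car_x, car_y, heading_rad):
      dx, dy = px - car_x, py - car_y
      c, s = math.cos(-heading_rad), math.sin(-heading_rad)
      return (c * dx - s * dy, s * dx + c * dy)

  # A lane point 10 m north of a north-facing car is 10 m straight ahead:
  print(map_to_car_frame(0.0, 10.0, 0.0, 0.0, math.pi / 2))  # ~(10.0, 0.0)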


The first idea seems like it would require a lot of data stored in the car. Is it feasible? And even so, being that dependent on matching against pre-mapped data suggests a system that would be quite slow to roll out across a country.


Easy, my dumb level-0 car can tell me when it's icy. And finding lane markers is one of the easiest tasks in self driving (the hard part is knowing when to ignore them).


You're being downvoted for the flippant and dismissive tone of your comment, but I do wonder how computer-driven cars will determine when it is acceptable to violate lane markings and road signs. Boston in winter is more than just traction control. There are snow piles that might be icy, ridges left from a plow, shifting conditions, and bad visibility. I suspect it IS a hard problem.


> And finding lane markers is one of the easiest tasks in self driving

It's not a matter of "finding" lane markers. There are no lane markers visible after it snows.


Lane markings are a fraction of the triangulation.

We ourselves identify and confirm other urban waymarks via captcha, which feeds the nav data -- bridges, signs, hills, hydrants, chimneys, lights. There is mass live verification from Android Auto in vehicles. There are many yearly layers of Street View images and scans.


Right, and what about rural waymarks? A highway in the middle of nowhere at night during a snowstorm?

I don't think we'll see a system that can handle that in my lifetime.


If a pedestrian slips in deep snow while crossing a street and is no longer visible because the snow obstructs them, does the car see a clear path and kill someone or not?


Does a human see a clear path and kill someone or not?


So I’ve encountered this in real life.

The human driver detects the pedestrian, laughs at the fall, then gets worried and waits for them to get up, because a human knows someone fell and didn’t magically disappear.


Doesn't seem like it would require AGI for a self driving system to "know" that someone fell and didn't magically disappear.


Challenging weather conditions mean human drivers become even more unpredictable.


Waymo et al will have to install snow tires, or else no amount of traction control or even all-wheel drive is going to help when your tires cannot find grip.

Source: grew up watching subarus do 360s on the freeway.


Lived in Boston for quite a while, and grew up driving in a city.

I find SF much more challenging to drive in, at least wrt the other drivers. In Boston, drivers are aggressive and take calculated, dangerous risks to meet their goal.

In SF, a lot of people on the road act like it's their first time driving in like 5 years and they're still figuring it out. There's no rational risk-taking towards a goal, but more people bumbling around unpredictably while unsure of what their goal even is.


That's a great description of driving for literally everywhere in the country that's not the Northeast. It's also why I think driving in the Northeast is the safest. Driving on highways in the South is scary.


Try a southern state that has never seen snow before but got half an inch and is losing its collective mind.


What if the car/service just doesn't work or is planned to be offline based on the weather? It would suck to be stranded because it started raining, but would it still be valuable to have an automated taxi service on good weather days in big cities? I am inclined to say yes, and also delighted that this is the question I'm asking, but I'm an optimist.


Honestly, I think Boston is harder even without a winter storm.


These cars don't even work in a light drizzle.



That’s a big point if true. What’s the source for that info?


Not sure if I can tell so I'll have to pass on answering that. You can take a look at this though: https://youtu.be/0oyjYH6v0b8?t=434


Those are their previous generation vehicles. The I-Pace in SF are the newest generation with upgraded sensors.


In my experience with snow driving, a well maintained car (including good snow tires) goes a long way, and a fleet of commercial vehicles (like these) will have that as an edge over the average driver.


> SF is a real nightmare to drive.

Yes and no. The weather conditions are probably close to perfect for a project like this. A city that spends months with icy roads... _that_ would be the real nightmare.


Yeah, fair enough. The only adverse weather they'd encounter in SF is fog and maybe light rain. But I still consider driving in SF pretty challenging, especially for an autonomous vehicle – narrow roads, people not following rules, pedestrians everywhere, cable cars in the middle of roads. Certainly more complex than Chandler, AZ.


> I wonder if that was a mandate from Alphabet leadership to show some urgency.

Surely Alphabet has noticed that their competitors are nipping at Waymo's heels. If they don't pick up the pace, all sorts of business books will be written about how Waymo squandered a decade-long lead in the industry.


It may not scale to rural areas though. There are some roads where you don't need to look at the road in front of you: it is there and nothing else is. Instead you need to watch the ditches in a wide area around you, because that is where wildlife will jump out in front of you.


Having driven many rural roads, that is something I would be much more comfortable with an automatic system on: staring around for deer at night is the classic attention task where humans tend to fail.

What I wouldn't be as comfortable with is the "random sheet of ice" or "oh look, rocks" or "suddenly washboard dirt road".


Is there a market for taxis in the rural areas? They have little incentive to expand there if there's no money to be made.


Most drunk driving accidents/deaths happen in rural areas because there is really no other alternative for transportation. Because of low population density and long distances taxis are basically impossible to find. Self driving cars could definitely fill a niche there should they ever become cost effective.


If the price is right, maybe. I live in a semi-rural area (about a house per acre, but unevenly distributed) and we have one Uber driver and a handful of taxi companies. Competition is tough, though: my PHEV costs very little to operate, there's always parking, and the bus system does on-demand rides for $2 on weekdays between the morning and evening peaks.


I can't speak for the US, but in Europe (experiences from Sweden, Norway, Russia) rural areas usually have a handful of taxi drivers and you "use their services" by calling their numbers which you can get from locals.


In Finland we had a law that required the taxi monopoly to provide services even in rural areas, so disabled and elderly people could get transportation to the services they need. It worked well in my town of 7,000 people, except sometimes on weeknights the only driver could be in the next city 50 km away.

(Had, as in they changed the law a few years back. Not sure how it is now.)

Robotaxis could be quite a good solution to the problem - the drivers were often pissed if you called them for a single ride when they were at home or far away.


There is typically an acute need, and it is a market that is chronically underserved, but it is also typically unattractive from an operator’s standpoint.


Absolutely! I live on the outskirts and I would LOVE to be able to get a taxi to the pub and back! Unfortunately they don't service me here.


> It may not scale to rural areas though.

Most products, including this one, don't need to do everything to be both useful and profitable.


So long as it is only urban areas it is a band-aid for the lack of good transit options.

Not that you are wrong, just that you should be wrong because if cities actually had useful transit rural areas would be a much larger share of demand despite not having many people.


Very true, but retrofitting good transit into a city that didn't plan for it is extremely expensive and disruptive. I see these kinds of services being a great complement to public transit in cities that have struggled to make them attractive.

For example, I am way more likely to take Cal Train into SF if I can use a point-to-point service like Uber/Lyft/Waymo to get me the rest of the way there. Without that missing link, I'm much more likely to just give up and drive instead.


I’ll grant you that the Muni buses are really terrible, but they should get you (almost) from point to point. The Muni system covers the whole city of San Francisco (even Treasure Island) and runs frequently.

The only problem is that they are painfully slow. If Muni had more dedicated bus lanes (like, a lot more) it might very well be the best bus network in the world.


The best time to build good transit was 20 years ago; the second best is today. SF needs to quit making excuses and make transit good. What they have is not good, even if it is better than everyone else in the US.


So they’d need more training data. It doesn’t sound difficult to get.


The superiority of a blended computer-vision system for this task over human performance is almost impossible to overstate. The computer is not going to overlook even one deer.


>The computer is not going to overlook even one deer.

Oh it will. Animals have evolved amazing camouflage. Computer Vision will easily miss a deer hidden in a dark treeline. And radar/lidar even more so because the forest is going to have a pretty irregular geometry.

Even identifying a bicycle on a regular city street is something we have not convincingly solved yet. Reliably spotting animals at the side of a forest road is pretty far beyond that.


It isn't possible to never overlook a deer, because they are often positioned such that you cannot spot them. Unless you mean the computer won't fail to see a deer 2 meters in front of the car - but by then it is too late to do anything about it.


SF is unpleasant to drive as a human, but slow, dense traffic seems like a near ideal scenario for autonomous. SF needs lots of social calculations if you don't want to get honked at, which is mentally taxing, but they're far from necessary for safety.

Even fairly simple autonomous tech will have better peripheral vision at near-to-mid range than one human can manage, so for all those bikers, crazy walkers, and chaotic 15mph cars you shouldn't hit, it stands a pretty good chance of being better. And when it's not, come to a stop and you're fine (barring some honks) - few are moving fast enough to hit you dangerously hard in those human-complex areas, and you don't need to stop instantly, just fast enough.

---

Honestly, I'd put SF at dramatically easier than either residential or highway roads. Residential (and adjacent) has fast-moving cars ignoring signs with obstructed vision, and inattentive humans at relatively high speed (bikers used to low traffic and swerving, kids and animals running literally through bushes adjacent to roads, general lack of care around vision-blockers like fences due to perceived low risk, etc). Who's-at-fault doesn't matter - in car vs human, humans lose, and people rightfully get upset.

Highways also seem harder, if highly specialized: accurate decisions 100+ feet in advance are absolutely critical due to the speeds involved, computer vision at that range has fairly low detail compared to humans, and lidar is practically braille for "car". Radar has trouble distinguishing stopped cars from the road because neither are moving. Ultrasonics as an ultimate backup really only work up to around 10m (and that's about the distance to stop a car at 25MPH, which you'll regularly encounter in dense city traffic).

I'll also point out that more people have died due to Tesla's autopilot on highways than Waymo, Uber, Cruise, heck all self-driving companies I'm aware of at any size combined. They're riding all the terminology lines they can to get away with it, and they may very well have an order of magnitude or two more miles, but I believe the point still stands - highways are hard.


> SF is a real nightmare to drive. If they can nail it there, that's 2/2 for busy, urban street driving (SF) and "boring" suburban driving (Chandler, AZ)

Both are low-hanging fruit. When it can navigate a snow-covered road at night while it's still snowing, get back to me.


But the bar to be better than a human in this case is also wickedly low.

I basically can't do this. If it was absolutely necessary I would go out there and drive like 5 mph and be terrified the whole time, but otherwise I would just treat whatever I needed to drive to in a snowstorm at night as temporarily inaccessible. I have lived in places where it snowed in winter before.


But the use case is not the middle of the night but perhaps during evening rush hour where it gets dark after 5 PM in the winter, and a storm began in the afternoon.


Doing that better than the average human is a pretty low bar.


> SF is a real nightmare to drive.

'cause they decided not to build most of the roads: https://www.cahighways.org/maps/1955trafficways.jpg


Cities that did decide to tear up urban areas for freeways aren't really any better. Consider places like Los Angeles, Dallas, or Houston.

What makes SF difficult to drive in (from my perspective of only ever being a pedestrian there) is a) extremely hilly terrain, b) the general difficulty of a dense urban environment anywhere, and only a distant third is c) traffic, which is merely an added stressor to the complex choreography that is an urban street.


For SF, not rebuilding the 480 after the '89 earthquake made the Bay side of San Francisco really pleasant and enjoyable place to be. The Embarcadero from Giant's stadium to the Wharf and around to Fort Mason is such a beautiful place to walk/jog/ride, I can't imagine the area with the double-decker highway it used to have.


> I can't imagine the area with the double-decker highway it used to have.

How about with the freight railroad it used to have for 75 years before the state donated the ring of land to the city and paid to build the highway?

https://en.wikipedia.org/wiki/San_Francisco_Belt_Railroad

http://sanfranciscotrains.org/sbrr_history.html


Los Angeles also didn't build all its planned freeways, and today LA has fewer freeway miles per area and per capita than most American cities.


> extremely hilly terrain

Yes, I agree, but they decided it was better to go over every hill instead of through them: https://www.flickr.com/photos/walkingsf/4182283392/


Wait, what do you mean? As in, the highways proposed in 1955 weren’t built?

I’m not sure how highways going through SF would make it easier to drive in SF (outside of the highways): wouldn’t that generally increase traffic and conflicts?


> wouldn’t that generally increase traffic and conflicts?

When coupled to our additional refusal to build housing, sadly, yeah. What two things do people usually commute between?


So just to clarify, you're thinking that traffic would increase because people would live outside SF and commute in? But don't people who live in SF need to get to work too? In that case, it seems like having giant highways carving up the city is going to make walking / biking to work harder which would cause more people to drive to work. That's what we see in "car-oriented" cities and it leads to an increase in traffic congestion that makes it miserable to drive, in addition to an environment that makes it miserable to do anything else but drive.


Commute through, e.g. Mill Valley to South San Francisco, or similar. Right now, because there's no freeway that goes through the city, that commute is technically possible (I'm sure there are people who do it), but it's not a fun or easy commute, so people try to live in San Francisco and commute to one end or the other. If there were a freeway from the Golden Gate Bridge through the city, instead of Lombard and then Gough, then the Mill Valley - South San Francisco commute would be (more) viable, at the cost of increased traffic that is in San Francisco but not doing anything there other than transiting. That's the so-called "extra" traffic GP refers to.

See also: Boston's Big Dig.


There's a second Oakland/SF bridge that doesn't exist. Well, I assume it would go to Oakland. It's marked with "???" and just says "Crossing". I presume that hypothetical bridge wouldn't have Yerba Buena Island to connect through, so would be really impressive and long (compared to the Golden Gate and Bay bridges).


That's the "Southern Crossing"! https://en.wikipedia.org/wiki/Southern_Crossing_(California)

It would have probably been the continuation of I-980 had that bridge been built: https://en.wikipedia.org/wiki/Interstate_980#History

At one point it was also planned to be just north of SFO. Have you ever taken I-380 east instead of one of the exits to 101N/S? There's a huge multi-lane road that dwindles to basically an airport access road exit.


Oh man, thanks for those links. TIL. A causeway or something that extends off of Alameda? It's wild to think about what that would have done to the area.


On the north side there was also a plan to bridge San Francisco / Angel Island / Tiburon! Part of it still exists as Route 131.

https://www.flickr.com/photos/walkingsf/4047626058/

https://www.flickr.com/photos/walkingsf/4047626054/

https://www.cahighways.org/ROUTE131.html

Here you can see an idea of doubling-up the Bay Bridge, plus a view of the Southern Crossing / I-980 alignment: https://www.flickr.com/photos/walkingsf/4247129432/


This is a bizarre thing to say when all of those roads exist, more or less, except for the Embarcadero, which was removed after it collapsed in the 1989 earthquake (and was a crazy eyesore).

It is certainly true that the taste for elevated highways through cities has waned given the pollution and dust and general unsightliness that it produces. In the 1950s, when cars were all the rage, people were very excited by these things.


> This is a bizarre thing to say when all of those roads exist, more or less

Personally, as an SF resident I would much prefer all the cars to be tunneled or elevated instead of idling in front of my house or blowing loudly through my block. It's a safety issue.

When there's only so much surface area where else are people supposed to build except up and down? That's why we have skyscrapers, and those don't seem to provoke the same vitriol as the roads.

Even the famously-hated Embarcadero Fwy wouldn't have been visible if the plans for the World Trade Center (lol) at Market/Embarcadero hadn't also been canceled:

https://archive.org/details/ferrybuildingcom2919sanf

https://www.sfgate.com/opinion/article/Ferry-Building-what-m...


> Personally, as an SF resident I would much prefer all the cars to be tunneled or elevated instead of idling in front of my house or blowing loudly through my block. It's a safety issue.

This is a false choice though. It would be better to design the city in such a way that we don't need personal automobiles for most trips. Building high-speed roads (elevated or not) through a city tends to have the opposite effect. If you live in SF, just think of the parts of the city that do have high-speed roads. Is it pleasant to walk along Division Street? People choose their mode of transit based on what feels safe and convenient to them.

In the space and money taken up by an elevated highway, we could have low-speed mixed-use streets and an entire separate highway for bikes, and it would be safer and quieter.


Supposedly more road construction doesn’t alleviate traffic; it only induces more demand (which is moderated by high traffic levels).

Source (a great read if you're interested in the subject): https://www.amazon.com/Traffic-Drive-What-Says-About/dp/0307...


"Induced demand" applies to literally every public resource from subways to parks. If you build it and it's not totally out of place they will come.


Yes, inducing some demand is the point. People have to live somewhere, work somewhere, and until recently generally had to commute between the two. When this happens in your circulatory system it's called a stroke :p


As someone who lives in Bath, an old European city with roads (and other drivers) that can give experienced human drivers a nervous breakdown, I'm rather more interested in how they do in SF than in their current deployment in Phoenix.

From what I've heard repeatedly, SF sounds much more irregular and messy than Phoenix, so it should be something of a stepping stone to making them usable more widely if they can crack it.

I've been expecting it to happen for literally decades, but always been disappointed.


I recently lived in Oxford where they have a few self-driving car trials (notably Oxbotica) that essentially just went round and round the station because of how awkward some of the junctions were.

If we want to really test self-driving cars, introduce them to Milton Keynes 'Magic Roundabout'

https://www.google.com/search?q=milton+keynes+magic+roundabo...


Apparently Brits hate the Magic Roundabout, but it seems so shiny.. just the idea of a roundabout of roundabouts where the inner flow of traffic is reversed is really pretty. I'm sure it's one of those things that looks great in traffic flow simulations but falls apart when panicked drivers are trying to deal with each other and the unconventional traffic patterns.


I don't think most people in the UK who express an opinion on it have ever actually used it.

I used to live round the corner from the magic roundabout. It's actually fine. The best analogy I have for it is juggling, if I concentrate too much and overthink it I drop the balls. If you over-think the magic roundabout it can seem intimidating, but when you're actually there it makes much more sense and you just go with the flow. You're not dealing with the whole system in one go, most people just take it one roundabout at a time.

One of the reasons it seems to work is that people take it easy, everyone is paying attention to what they are doing, and most people take it at a sensible speed.


What I don't understand is what advantage it has over just a single big roundabout. A single car may be able to save a few seconds by going around in a different direction, but I can't picture any actual throughput advantage.

Needless complexity is all I see in it.


I'm not sure the Magic Roundabout would pose any problem for any sort of automated driving. It's basically just a nested roundabout, where you essentially have to give right-of-way to traffic already on the roundabout.

If you can navigate a roundabout, you can navigate the magic roundabout - you just apply the same rules.

If anything, this is the sort of thing an automated system would excel over humans at - where the automation won't get confused by an uncommon application of a familiar ruleset.


100%. Phoenix is basically a big grid with relatively flat elevation. SF is quite contrary to that, with elevation changes, denser traffic, and irregular roads being common.


Re>> "that can give experienced human drivers a nervous breakdown"

A couple of years ago I took a wrong turn and got lost in the Long Beach harbor - and had neither GPS nor phone. Holy Cow man.... holy cow... (admittedly, low quality comment)


The streets of pre-pandemic San Francisco were an unmitigated clusterfuck, arguably the worst in the nation at its 2019 peak. I imagine it's much easier these days for an autonomous vehicle.


> arguably the worst in the nation at its 2019 peak

Not even close - I've lived in the DC metro, Boston, and LA and all three are certainly worse than SF, and I think that is also backed up by evidence.


Other than reduced volume has San Francisco done something to unfuck its clusters?


Absolutely not.


Same with Coimbra, Portugal. Would love to see a Waymo car trying to navigate those old intricate roads.


Just checked it out on google. What a beautiful place.

My scepticism around SD cars in places like that and Bath is not even the roads themselves, but the traffic.

There are quite a lot of roads around me that are quite narrow and frequently drop to single lanes. I feel like any self-driving system will need to develop quite an advanced theory of mind to perform the weird silent negotiation that happens when I'm driving along, with three cars behind me, and come face-to-face with a bus coming the other way, and somehow we have to either back up en masse or perform the squeeze dance as we edge past each other with inches to spare.


I wonder if Waymo can use street mirrors.


I hadn't even considered those. It would be interesting to see how confused Tesla FSD got when confronted by something like that.


Yeah, SF is very messy configuration-wise and is more "European" in that way.


Partly due to challenges with the landscape right?


It's a thumb-shaped peninsula seven miles across with a 922' peak in the middle. Market St runs diagonally through downtown separating two grid systems, the south side of which is offset by about 40 degrees. Columbus St runs diagonally in the opposite direction through Little Italy and Chinatown, itself a maze of one-way streets, ancient buildings, triangular parks, and steep hills. The whole thing is cut up by trolley tracks, makeshift bike lanes, and more pedestrians per square foot than most places in the US. Be prepared for your two-lane road to suddenly turn into a one-way street running towards you, seven-way intersections, pedestrian traffic that never stops, and lane markers that completely disappear in the rain. On top of everything, intersections are only ever labeled in one corner, but never the same corner, and at least half of the stop signs are behind trees that should have been trimmed five years ago. Also it seems like 10% of drivers are drunk or stoned, and there's a moving van blocking 60% of the road everywhere, all the time.


Kind of interesting to consider how this adjusts priors on these outcomes:

- Self driving is not possible

- Self driving is not possible anytime soon

- Self driving is possible, but requires LIDAR

- Self driving is possible, and can be done with normal cameras

I've engaged with a lot of people who presume self-driving reduces onto AGI, therefore it will not be achieved anytime soon if ever. I wonder which of their assumptions is wrong, if this ends up being successful.


I assume that they just don't solve the really wacky edge cases which might require AGI.

Many people assume that a self driving car must be 100% successful. To me it is sufficient to beat the top 20%/bottom 80% of drivers.

So the car will crash if a seagull flies in front of the camera and there is a deer just beyond it. The human would probably crash too.


> To me it is sufficient to beat the top 20% of drivers.

Computerized safety systems can, should, and already are raising the safety bar for the top 20% of drivers. "Self-driving cars" aren't competing in a static environment; similar tools are raising the safety bar they have to compete against. This means it will get increasingly difficult to beat non-self-driving vehicles.


Shouldn't this rapidly lead to a convergence between self driving cars and human cars though? I assume a lot of the technology is similar.


Don't let silly little empirics around traffic fatalities in recent decades get in the way of a good narrative like this one!


> similar tools are raising the safety bar

And what is the story on driving becoming safer?


> To me it is sufficient to beat the top 20% of drivers.

It's interesting that you set the threshold there (I presume you meant 'beat the bottom 80% of drivers'). Rationally, we should be happy if their driving performance is above average. I fear that the public (and the courts and the insurers, etc) will require airline levels of per-mile safety and still be wary.


I would be fine with beating average but as soon as this thing kills someone, I want there to be a significant difference in accident rate so it does not end up in regulatory hell.


I think from a statistical standpoint, beating average is great. But the big difference is that we have this odd and fundamental requirement for justice/punishment in our society that gets lost with self-driving cars.

If a human driver kills someone / injures someone / damages property, people get satisfaction or resolution when the human is punished - insurance increases, license points, jail, etc.

But when a self-driving car - even if it's better-than-average - does something wrong, there's nobody to punish and no retribution to exact, so people will be left feeling unsatisfied. That's why the bar is going to need to be so much higher.


There's also the bias (accurate or not) of "most people are bad drivers, but I'm better". Beating the average only beats the average. It improves things across the board (and that's awesome), but people who are particularly careful or skilled will balk until it's perceived as better than they are.

Pick something you pay extra attention to. Now imagine being informed you can't do that any more, you have to take what [this robot] does for you. Also remember all the terrible UI changes in software that have been done for a majority instead of your use-case.

Reluctance until it's substantially better seems reasonable to me. Somewhat unfortunate on a species level, but not for many individuals.


I think self-driving can work with an AGI that matches a horse. It doesn't need to understand with the depth and clarity of a human.

That is still a pretty high ask for current software.


You forgot a fifth one: self-driving is possible but follows the 80/20 rule, which states that the last 20% of the work requires 80% of the effort. We probably haven't even started on that 20%.


That's the "not possible anytime soon" answer, which is probably the most common choice.


Waymo already "self-drives" today, it just is extremely slow at making turns when the road is busy, and if there is any kind of unusual obstacle on the road (such as construction cones that require you to drive partially in another lane) it just stops completely and has the rider wait for twenty minutes for a manual operator to come and take over.

Those two obstacles won't be overcome until AGI. At best their frequency will be brought down a bit but not by nearly enough orders of magnitude.


Completely disagree. This doesn't require AGI.


Why would faster turns require AGI? Assuming the position/velocity of all nearby cars is known, it's not a hard problem.

Also even Tesla handles the obstacle in the road situation well. Their FSD beta will happily enter the left lane to go around obstacles.
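
For illustration, with known positions and velocities the unprotected-turn decision reduces to a simple gap-acceptance check. A toy sketch (the maneuver time and margin are invented numbers):

  # Accept the gap only if every oncoming car is far enough away, in
  # time, to clear the turn plus a safety margin.
  def gap_is_safe(oncoming, maneuver_s=4.0, margin_s=2.0):
      # oncoming: list of (distance_m, speed_mps) for approaching cars
      for distance_m, speed_mps in oncoming:
          if distance_m / max(speed_mps, 0.1) < maneuver_s + margin_s:
              return False
      return True

  print(gap_is_safe([(80.0, 15.0), (200.0, 20.0)]))  # 80/15 = 5.3s < 6s -> False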


The big concern is that driving as a human skill is reactive and adaptive, whereas ML (and software in general) models are pre-baked. If something happens outside the car's model, it will react unpredictably, and strange circumstances can arise while driving. AGI, as based on human cognition, would have the capability to adapt to as-yet-unseen circumstances.


I would suggest visiting r/idiotsincars sometime. Humans are terrible at reacting to situations outside of their experience, and essentially do random things all the time.

If an AI system's default response is "come to a stop safely" then it's going to be way ahead of a lot of human "unexpected situation" handling in cars.


There are many situations where "come to a stop safely" is the worst possible thing you could do.

Yes, people are bad at driving because they don't pay attention, panic, make mistakes, etc. But ML models tend to freak out at slight variations on mundane circumstances; a cyclist crossing the road at just the right angle and the wrong colour of bike, that sort of thing. The thing self-driving cars need to avoid is killing people in broad daylight for no discernible reason, and that seems like the kind of thing you'd need a mind for. It's the same issue as with adversarial image manipulation fooling image recognition: if changing 3 pixels can turn a frog into a toaster, you aren't really "seeing" the frog at all in a symbolic way, and not seeing the road symbolically seems like a recipe for disaster.
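
For reference, the classic example of that fragility is the gradient-sign attack (FGSM). A minimal PyTorch sketch, assuming `model` is any differentiable image classifier; this illustrates the general technique, not any particular car's stack:

  # FGSM: nudge every pixel slightly in the direction that increases
  # the classifier's loss; a tiny eps can flip the predicted label.
  import torch
  import torch.nn.functional as F

  def fgsm(model, x, label, eps=0.03):
      x = x.clone().detach().requires_grad_(True)
      loss = F.cross_entropy(model(x), label)
      loss.backward()
      return (x + eps * x.grad.sign()).clamp(0, 1).detach()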


> The thing self driving cars need to avoid is killing people in broad daylight for no discernable reason

This, I think, is the thing that people miss when they say "self-driving cars don't need to be perfect, they just need to be better than human-drivers, who aren't actually all that great".

From a public confidence perspective, it doesn't matter if a self-driving car crashes one tenth, one one-hundredth as often as human drivers; as soon as you see a self-driving car kill someone in a situation that a human driver obviously would have avoided (like in the adversarial image kind of scenario), you've totally destroyed any and all confidence in this car's driving ability, because "I would never, ever have crashed there."


I think the main thing they miss is that human drivers are actually amazingly good.


Yeah it's been really odd to see the take that self-driving must require strong AI. It needs to be done carefully, but it's clearly a manageable engineering problem if you have good sensors.


If there's a person at an intersection directing traffic, it will be very hard to have the car itself communicate with them as easily as a human can. Edge cases like that is where AGI would be needed it seems.


So in this hypothetical, is the person directing traffic completely oblivious to the existence of self-driving cars? Pretty sure we can assume traffic cops in the future will be trained to deal with self-driving cars and use only gestures from a predefined list.


People directing traffic use only a handful of signals.


I would believe that people directing traffic usually use only a handful of signals, but it's certainly not a universal truth. This is one more case of the 80/20 problem that self driving tech keeps running into.

Sure it's probably feasible for cars to handle hand signals in the happy path, but anything outside of that will be disastrous. How will the car understand and communicate with a person who doesn't use the standard signals, aside from having some level of intelligence?


> People directing traffic use only a handful of signals.

Sometimes. Other times they confusingly gesticulate or just shout out things, or even give conflicting signals. Humans can interpret these without much effort but it's a hard AI problem.


Self-driving is more of a language problem than a technology problem. In 2005, grad students had cars driving themselves on a course. Tesla and Waymo both have cars that drive very well on many roads. Self-driving is here; now all that is left is for people to argue about what "L5" means as the systems improve. And they will improve, as there is a clear path to improvement.

I don't expect that we will ever see the day where everyone is OK with self-driving cars on the roads regardless of the safety statistics, because there will always be edge cases of crashes and personal preferences around driving styles.


I think from the original DARPA challenges most researchers knew that self driving was possible (or, more correctly, "a promising direction"), but that its application to the real world, and its robustness, was a far larger barrier back then.

We have the advantage of retrospect when looking at these claims, which now may seem more possible than before due to our better understanding of the difficulties and of the technologies to address them.

I think many people confuse their excitement for the promise of having self driving cars and the actual technical and political barriers that still need to be addressed to bring this to reality (robustness, a near-infinite number of edge cases, perception, insurance, public policy).


> I think many people confuse their excitement for the promise of having self driving cars and the actual technical and political barriers that still need to be addressed to bring this to reality

I think most of the confusion is due to deceptive marketing. Sales people like Elon Musk have been saying that the big dream of true self driving is just around the corner for years now, even though they were nowhere close.


Self driving for taxis is a very different problem than for personal vehicles. For the latter you can always rely on the human driver to handle the last 1% or 0.1% edge cases and still provide a ton of value. Taxis don't have that option, so it really is perfect level 5 automation or bust. "Good enough" doesn't cut it.


Taxis do have that option. Cruise remote operators "guide" taxis once every 5-10 miles during peak hours, and they expect to keep doing so after public launch: https://youtu.be/sliYTyRpRB8?t=212


> - Self driving is not possible anytime soon

Define “soon”. For me it means 50 years. That’s an infinitesimally small amount of time. If we can have level 5 autonomy in 50 years I will say that progress was rapid and we really excelled.


I'd put 50 years in the "not anytime soon" camp. 50 years is an eternity nowadays with regards to tech.


To put it in perspective, 50 years is still less time than has passed since the transistor was invented.


Here in 2021 it has been well over 50 years since the first transistor was built.

> The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs. The three shared the 1956 Nobel Prize in Physics for their achievement.

https://en.wikipedia.org/wiki/Transistor


The transistor was invented in 1947/48, over 70 years ago.

Source: https://www.sjsu.edu/faculty/watkins/transist.htm


Yup. Hundreds of years from now, this whole era from WW2 onwards (the 3rd/4th Industrial Revolution?) will be looked at as covering roughly 1950 to the mid-21st century.


Self driving was never impossible, just infeasible with tech at the time. Surely it'll be standard in a few decades, despite that being quite far into the future.


Most important one you missed:

Given sufficient qualitative and quantitative investment, self driving may be technologically feasible at some point in the future, but not price-competitive with human driving.

Self driving proponents seem to assume the tech is free. Just keeping sub-meter-scale 3D street maps up to date has a massive cost.


There are enough people in the developed world who are physically unable to drive or use public transportation in their area and who also don't want to be homebound that the economics could still work out even if they were the only market, which they aren't.

> Just keeping sub-meter-scale 3D street maps up to date has a massive cost.

True, but probably not actually necessary.


You seem to assume there is unmet demand that you can meet more cheaply with automation than with human drivers. I agree there may be unmet demand, but I don't see how you get automation to be cheaper than a $10 an hour cab driver. I mean, if automation was cheap and easy then our factories would all be automated before something quixotic like a car. It is frankly absurd.


> a $10 an hour cab driver

Well, for one thing, I don't want anyone in populated parts of the US to live on $10 an hour, not even a cab driver. What happens when we double that? Cost of living goes up over time, but cost of technology goes down. When do we reach the tipping point? Have we already?


No. Look into supply and demand.

If wages go up to $20 an hour, then maybe there is general inflation and robotics goes up to $200 an hour. In the world today we have cheap labor and expensive energy; that is why automation is not replacing humans. Look into the history of the British industrial revolution.


>if automation was cheap and easy then our factories would all be automated

Please describe the last factory you were in.


I have probably been in far more American factories than you. Obviously I am not going to answer your question directly, because I have an interest in protecting my career and privacy, but in my extensive experience in manufacturing, many things that you might think could be automated are not, because it is still cheaper or more effective to pay a worker $15-25 an hour than to spend $15,000 on automation that doesn't work right half the time.

Here is the reality that self driving will confront:

Cost-effective, reliable, competitive automation is hard, even for seemingly closed, mundane tasks like palletizing products.


I make these kinds of maps for a living.

Yes, it's expensive; however, as we refine the algorithms and processing systems, it gets cheaper, especially amortized over more cars, where the per-car cost becomes affordable.

ML doesn't work well enough for offline map generation either, and all the high-quality maps require human editors for the final touches.

All this work is currently done because realtime perception doesn't work well enough, and you can have a much more reliable system with the aid of the maps. Having a 3D base map of the world makes the realtime perception problem far simpler, and it makes fancy sensors less critical.

In the US, where labor is very expensive, self driving will make sense even if it's costly; but someplace like China or India, where a middle-class person can afford a driver, it probably makes less sense, though the push for it in China is probably the strongest I've seen anywhere in the world.
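
To illustrate that base-map point with a toy example (my own sketch, nothing like production code; the voxel size and points are made up): with a static 3D prior map, a live scan can be split into background the map already explains and residue worth tracking as dynamic.

  import numpy as np

  VOXEL = 0.5  # meters; resolution of the (assumed) prior map

  def voxelize(points):
      # Map Nx3 points to integer voxel coordinates.
      return {tuple(v) for v in np.floor(points / VOXEL).astype(int)}

  # Prior map: voxels known to hold static structure (walls, poles, ...)
  static_map = voxelize(np.array([[10.0, 2.0, 0.0], [10.0, 2.5, 0.0]]))

  def split_scan(scan):
      # Separate a live scan into map-explained background and
      # unexplained (likely dynamic) points.
      static, dynamic = [], []
      for p in scan:
          voxel = tuple(np.floor(p / VOXEL).astype(int))
          (static if voxel in static_map else dynamic).append(p)
      return static, dynamic

  scan = np.array([[10.1, 2.2, 0.1],   # matches the prior -> background
                   [4.0, -1.0, 0.3]])  # unexplained -> track it
  bg, dyn = split_scan(scan)
  print(len(bg), len(dyn))  # 1 1

Everything the map explains can be ignored; realtime perception only has to reason about the residue.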


So you think self driving cars will be able to profitably offer a 10-mile ride for $20 in the suburbs of second-tier cities like Springfield, MA? If your opinion is that cab drivers make much more than $10 an hour today, then I suggest you look at things outside the Bay Area.

Self driving does not scale at all right now, because, as you said, these special maps need a lot of human labor to make, and the cars need a lot of sensors on top of the auto platform. And labor is not even the main cost driver in passenger transportation.


I am in the AGI camp, but have wondered and still wonder,

Why are efforts not focused primarily on interstate/highway travel, specifically on collaboration with the DoT for mesh/distributed/coordinated long-haul travel?

I don't need a self-driving taxi. I would pay $20K for a car that participated in a federally regulated framework which let me let go of the wheel when I get on a highway and let the emergent cloud determine how best to move my car, and all the others, in coordinated "trains" on dedicated/reserved lanes for hours.

The wins here seem like no-brainers. Sidestep all jurisdictional nonsense; optimize commerce and personal travel; automatically handle emergency vehicles and other unusual conditions; etc ad infinitum.

All my car needs to be able to do, other than existing lower-tier self-driving/driver-assist basics, is join the borg.

Coordination for traffic flow management seems like an unbelievable win.

But no, all we seem to be getting is cyclist-terminating taxis which cost $250K each and are, IMO, doomed in target-rich environments like SF, with no foreseeable way to adequately handle the last 8% of anomalous, novel cases.

Just don't get it.


If you just want a car that drives itself on freeways, get a Tesla. Probably 80% of my car's 15,000 miles have been on autopilot. It automatically changes lanes to pass and automatically gets out of the passing lane afterwards. It takes offramps and interchanges. It automatically brakes for obstacles. It aborts lane changes if someone else gets in the way. It even works in rain and light snow.

Other car companies are a few years behind, but even something as simple as adaptive cruise control + lane centering is a huge help on freeways.


It has killed people who let it self-drive on the freeways. That was a while ago, and maybe it has gotten better since, but I don't think taking your eyes off the road is advisable.


The latest update has eye tracking and warns you if you take your eyes off the road for more than a second or two.

Nobody is saying that self-driving cars are perfectly safe. Considering how many Teslas are on the road and how many miles are driven on autopilot, it would be surprising if there weren't any deaths. As long as it's safer than unaided human drivers (which it is), it's a net win.


I agree. I think the problem has been the ride hailing companies shifting all the attention to being self driving taxis.

For 95% of people I think the value is in letting them do other stuff while driving long distances instead of being stuck behind the wheel. This seems many orders of magnitude easier than trying to tackle inner-city driving.


GM Super Cruise already allows you to take your hands off the wheel while driving on many freeways.

https://www.motortrend.com/reviews/cadillac-super-cruise-is-...


Highways and big cities first. I predict non-self driving cars will be made illegal in areas like Manhattan, NYC.


Disclosure: I work at Waymo.

This is big news, and I encourage you to apply if you're in San Francisco. For the HN crowd, I'd also recommend last week's blog post [1] which includes some more technical material on "how" we're driving.

[1] https://blog.waymo.com/2021/08/MostExperiencedUrbanDriver.ht...


I'll never forget I almost got murdered terminator style by a Cruise while riding my bicycle around SF... hopefully they figured that out.


> I almost got murdered terminator style by a Cruise while riding my bicycle around SF... hopefully they figured that out.

I had to google "Cruise". To save others a moment or two, Cruise is a self-driving startup unrelated to Alphabet. In the quoted sentence, "they" != "Waymo" (I wasn't sure).


Yeah, it is terrifying. I was wearing a jacket with a reflective vertical zipper while out for a run, and saw a Cruise swerve towards me after it crested a small hill. I guess I looked like the new lane.


Don't worry, the later terminator models are much more effective.


Can you elaborate more on how?

HN is getting to reddit's status where people make comments and everyone just upvotes because it sounds good to them.


People have been saying “HN is turning into Reddit” for over ten years. There’s a bit of an Easter egg at the bottom of the HN guidelines about it:

https://news.ycombinator.com/newsguidelines.html


I don't know. I've been here for nearly a decade (I switch accounts every once in a while), and in the last six months or so, it sure feels like something's different. Specifically, I'm noticing more joke replies that aren't downvoted. Maybe I'm just getting old.


I said the same thing a year or two ago. There's probably some truth to it, but overall it's probably just selection bias.

I became a HN regular because I saw some insightful comments backed by quality moderation. I was escaping the dumpster fire of reddit so it seemed like paradise by comparison. Over time, my usage of HN grew, and I became more exposed to the reddit-like mentality that pops up on certain topics or at certain times(i.e. weekends).

I think the quality of HN varies depending on where the more serious users are in their development schedules. It would be an interesting thing to analyze I suppose.


That's awful. I am almost murdered daily and I live in a city without self driving cars.


A Cruise car nearly ran down me and my baby in a stroller (a pretty standard-model stroller for the neighborhood, too) in the crosswalk by the Caltrain station. The rest of the cars have not been so overzealous about running down pedestrians. I've given Cruise an extra-wide margin of error since then.


He really does do all his own stunts.


I discovered a few years ago that a Cruise will stop dead if it hears a horn. My bicycle happens to have an electric horn from a motorcycle. This provided some occasional amusement. I wonder if they still do that.


That would not be a very efficient feature to have enabled here in Mexico


Reminds me of the 'honk more wait more' video from the Mumbai police dept.

https://twitter.com/MumbaiPolice/status/1223090017397960705


I just signed up and actually skimmed the privacy policy (because hey, it's Alphabet). They record video of you during the ride, and didn't mention anything that I caught about ever deleting it. I get that they need video in case I do bad stuff in the back, but it's really disappointing that they didn't bother reassuring you with something like "we delete it after a month."

I'm hoping I just missed that line. But I signed up anyway, so I guess that says more.


Everything to do with more tech in cars is a privacy nightmare.


Kudos to the Waymo team.

I was a self driving skeptic 2-3 years back. Now given the advances in both hardware and NNs, I do see possible solutions in the next decade.


> We can’t wait to hear from more San Franciscans as they experience the Waymo Driver themselves. Beginning later today, San Franciscans can sign up for the Trusted Tester program and help us shape the future of mobility in this city. Just download the Waymo One app to get involved.

Anyone else got this to work? I only see a message that says "It looks like you're not in our service area..."


I was able to complete the signup process: requires a gmail account, email link to a qualtrics survey which asks things like name, age, where you live (if SF proper, neighborhood?). (I do live in SF and gave it location access)


Perhaps they really do mean, "Beginning later today" and it will be available for your area later today?


I skimmed past that detail. Thanks!


Deleting the Waymo One app and reinstalling worked for me when I saw this bug.


"beginning later today"


I feel like people here are missing the other factor that will drive adoption of self driving vehicles: economics.

Safety is obviously important, but we already accept decreased safety in exchange for money and convenience (otherwise, we would have a 30 kph speed limit everywhere).

But most people use their car for commuting, groceries, dropping the kids to sports, etc. They may make one or two road trips a year for vacations or business.

To facilitate this, a middle class family will often own two vehicles, pay tax, insurance and maintenance on said vehicles, pay for parking, etc.

If that suddenly wasn't necessary, that would be a massive saving for most people. You no longer need the capital investment in something that loses value.

Even better, you don't need to bring little Suzy to football; you can put her in a Waymo instead (once she's old enough, obviously).

And when you want to go drive to see grandma, you rent a car for the longer trip.

All of that depends on affordable self driving transport of course.


I'm confused why many people automatically assume robot taxis will be cheaper.

Financial Times (Alphaville Blog): The questionable economics of autonomous taxi fleets: https://www.ft.com/content/aa05823d-e58f-33ee-a99c-d50f2fd0c...

... quotes this Harvard/MIT study: Autonomous Taxis and Public Health: High Cost or High Opportunity Cost?: https://psyarxiv.com/6e94h

<<<Drawing on a wealth of publicly available data, Ashley Nunes and his colleague Kristen Hernandez suggest that the price for taking an autonomous taxi will be between $1.58 to $6.01 on a per-mile basis, versus the $0.72 cost of owning a car. Using San Francisco’s taxi market as its test area, the academics examined a vast array of costs such as licensing, maintenance, fuel and insurance for their calculations.>>>

I can imagine that as the world grows more urbanised, and wealth concentrates in urban areas, that people will want to pay for the convenience and safety of a robot taxi.


Notably, your second paper puts the cost of a traditional taxi at $3.50 per mile. At the low end of the autonomous price range, that is still a half-price taxi.

For the autonomous taxi calculation they also include a $250K medallion, plus interest, paid off over 5 years, but don't include this for regular taxis. For the high end ($6/mile) they also include $200K/year to supervise the autonomous taxi.

With all that said, autonomous taxis are unlikely to be a cost-effective replacement for car ownership among distance commuters. Even at lower cost than traditional taxis, a taxi is a horribly cost-ineffective way to get to work and back over any significant distance.
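
To make the supervision point concrete, here's a toy per-mile model (every number below is an illustrative assumption, not a figure from the study):

  MILES_PER_YEAR = 65_000  # assumed annual utilization of one taxi

  def per_mile(annual_fixed, variable_per_mile):
      # Amortize fixed annual costs over the miles actually driven.
      return annual_fixed / MILES_PER_YEAR + variable_per_mile

  hardware = 250_000 / 5  # sensor/vehicle premium amortized over 5 years
  unsupervised = per_mile(hardware, 0.50)
  supervised = per_mile(hardware + 200_000, 0.50)  # + full-time supervisor

  print(f"unsupervised: ${unsupervised:.2f}/mi")  # ~$1.27/mi
  print(f"supervised:   ${supervised:.2f}/mi")    # ~$4.35/mi

Which is roughly why the study's range is so wide: the assumption about human supervision dominates everything else in the model.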


This is a great reply. You are right: Too much analysis is focused on the suburban car commuter. More analysis should focus on urbanites looking for a "human" taxi that will be more expensive than an equivalent robot taxi! Personally, I still think the future of taxi drivers is they become the maintenance workers for robot taxis at the same wages...


Thanks! It's always nice to get some feedback after putting thought into a post.


Economics has also already given us lots of information on how much money people are prepared to sink into transportation - the owners of the self-driving cars/rental companies will attempt to extract that value.

I don't doubt there will be a shift in the overall economic picture, but it may not be as large for the car _user_ as we would hope. It will need to be better/cheaper in some dimension for it to get adopted, but the rent-seeking vultures are always circling, spoiling the utopia :-(


Many comments in this thread are variations on a theme of "self-driving cars don't need to be perfect, they just need to be better than human drivers, who aren't actually all that great." I think it would be nice if this were true, and I suppose it is from an actuarial perspective, but it's also an extremely flawed point.

From a public confidence perspective, it doesn't matter if a self-driving car crashes one-tenth, even one-hundredth, as often as human drivers.

If you see a self-driving car cause an accident, particularly a lethal one, in a situation that almost any human driver would have avoided, you've totally destroyed any and all confidence in this car's driving ability, because "I would never, ever have crashed there."

As we've seen, there are lots of scenarios like this. The Tesla crash from last year, where the car simply didn't see a white truck against a light background. Or imagine an adversarial image attack, where some tiny insignificant detail is placed onto a stop sign or a "do-not-enter" that turns it into nothing from the perspective of the AI driver.

These kinds of scenarios obliterate public confidence in self-driving cars, because intuitively, you immediately realize that you're "a much better driver" than this car! Even if that's untrue 99/100 times, it only takes one visceral example to drive this kind of wedge.

Self-driving cars don't just have to be better than human drivers. They have to be as close to perfect as is possible, because that's what people will expect.


>human drivers, who aren't actually all that great.

A large fraction of human drivers are actually all that great. The majority of accidents/deaths are caused by a minority of terrible drivers, or good drivers who found themselves in terrible but rare circumstances. The majority of drivers drive hundreds of thousands of miles without any accidents that were their fault, or even any accidents at all.

In other words, it's probably easy to beat the mean human driver, which is greatly dragged down by a minority of terrible drivers. It's probably very difficult to beat the median human driver, and near impossible to beat the top 20% of human drivers.


I don't think it's easy to beat the mean human driver and to demonstrate with solid data that you've done so.

In 2019 in California, there were 1.06 deaths per 100 million vehicle miles traveled. Any self-driving automobile technology that doesn't have at least 1 billion vehicle miles of data is in no position to claim that it is safer than human drivers and less likely to kill people.

Self driving cars don't make the same kinds of mistakes as human drivers do, but they make different kinds of mistakes. Some of these can be fatal.
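
To put a rough number on that billion-mile claim (a back-of-the-envelope sketch using the statistical "rule of three", which assumes fatalities are rare, independent events):

  HUMAN_RATE = 1.06 / 100_000_000  # fatalities per mile (CA 2019, above)

  # Rule of three: with zero observed events in n trials, the 95% upper
  # confidence bound on the event rate is approximately 3 / n.
  miles_needed = 3 / HUMAN_RATE
  print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # ~283 million

And that only shows you're no worse than the human mean, and only if you log zero fatalities; demonstrating that you're meaningfully safer, or absorbing even one fatality in the data, pushes the requirement into the billions of miles.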


>I don't think it's easy to beat the mean human driver and to demonstrate with solid data that you've done so.

Agreed. I should have written "relatively easy."

> Any self-driving automobile technology that doesn't have at least 1 billion vehicle miles of data is in no position to claim that it is safer than human drivers and less likely to kill people.

The circumstances under which those miles are driven (e.g. road type, location, weather, time of day, etc.) also have to be consistent with circumstances under which humans are driving. 10 billion autonomous vehicle miles driven only on highways in broad daylight is a worthless point of comparison, whereas 500 million miles driven across a variety of conditions representative of the full human driving population is worth a lot more.


> but they make different kinds of mistakes.

This is key: there's an expectation, and some wiggle room, because humans fuck up in predictable ways, and experienced drivers (usually) know how to avoid getting into incidents when it happens.

Self-driving cars are weird to drive around. They will absolutely stop in situations where no human would think to stop. I think about this as a motorcycle rider: what if I'm committed to a corner I can't see around, and the software in a self-driving car ahead decides to stop dead in the middle of the road just past the apex? A human driver could do this too, but many will know it's a dangerous place to stop and will try to put the car on the shoulder or minimize the time spent stuck there.

I don't know if this is something we need to tolerate, a temporarily increased incident rate as people get used to them being on the road, or if we need to make the software drive more like humans (accepting that this may mean making its behavior sloppier than it can actually handle, so that its faster software reaction times don't cause humans with slower reactions to slam into it).


> I don't think it's easy to beat the mean human driver and to demonstrate with solid data that you've done so.

It is. The mean is dragged down by alcoholics who drive drunk every single day. If you never drive drunk, that gives you a significant advantage.


We were talking about self-driving cars.

The mean is given by the number I posted, about 1 death per 100 million miles traveled. That number includes drunk drivers, distracted drivers trying to text, everything.


The point is that "1 death per 100 million miles traveled" is the mean average, but most drivers do better than the mean. Mean, median, and mode are not the same and the mean crash rate is not relevant to most drivers.


I'd be curious to see a source for your number


* The 2019 Mileage Death Rate (MDR) – fatalities per 100 million miles traveled – is 1.06.

https://www.ots.ca.gov/ots-and-traffic-safety/score-card/

California Numbers:

* 3,606 traffic fatalities in 2019.

* 1,066 Alcohol-impaired driving fatalities (fatalities in crashes involving a driver or motorcycle rider with a blood alcohol concentration, or BAC, of 0.08 or higher) in 2019.

* 620 Unrestrained passenger vehicle occupant fatalities in all seating positions in 2019.

* 164 Teen motor vehicle fatalities (age 16-19) in 2019.

* 972 Pedestrian fatalities in 2019.

* 133 Bicycle fatalities in 2019.

Assuming the above (alcohol-impaired, unrestrained-passenger, teen, pedestrian, and bicycle fatalities) are all poor-driver related, that leaves 651 traffic fatalities.
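
Quick check of that subtraction (keeping in mind the categories overlap, e.g. an alcohol-impaired teen driver, so 651 is rough at best):

  total_2019 = 3606
  categories = [1066, 620, 164, 972, 133]  # alcohol, unrestrained, teen, ped, bike
  print(total_2019 - sum(categories))  # 651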


Not really answering your question, but CDC says 28% of all traffic-related deaths in 2016 involved alcohol. Excluding these would immediately improve the mean performance.


I see why LMGTFY had its day in the sun: you can literally paste the first sentence of parent’s post into DDG, and the first link answers your question. Hell, the preview answers your question, you don’t even need to click it.


The writer has the responsibility to give a source, not the reader.


Not sure about that. Of the people who are very close to me (friends & family), I wouldn't want to be a passenger with half of them. AI is /so much better/; I can't wait for it to be mainstream. And it's not just about the AI driving, it's about the AI reacting 100X faster and having eyes all around the car to avoid accidents before they could even happen.

I wonder what humans will actually do better than AI in 50 years. I have a personal theory but I'm a bit off topic here


Do you have a source for that?


I don't know where the op got the statistic from. But this claims 1.02 per 100 million. https://worldpopulationreview.com/state-rankings/fatal-car-a...


Exactly right. Furthermore, most risky behavior is a choice. Crashes aren't random "acts of god" that strike anybody with equal likelihood. If you choose not to drive drunk, drive in bad weather, or drive for many hours without rest, then you can greatly improve your odds and almost certainly reduce your risk below the average. In these discussions I often see far too much fatalism; "everybody thinks they're above average, but half of you aren't" ignores both the fact that crashes aren't distributed like that and the fact that the riskiest behavior is a choice. The mean number of miles driven drunk is greater than zero, but the number of miles I drive drunk is zero.


Here's a particularly spicy viewpoint: It doesn't matter if you obliterate public confidence in self-driving cars, or if there are lethal accidents that would've been avoided.

As t approaches infinity, what are the chances that self-driving cars won't take over the world?


I imagine Waymo's investors want a return before t reaches infinity...


pretty high if people won't use them because they don't trust them.


If you think governments can't simply legislate away people's freedom to drive in the service of corporate profits or what's "best for them", you haven't read the news lately.


"If you think governments can't simply legislate away people's freedom to drive"

Less drunk, distracted, angry, and/or sleepy drivers on public roads, with overall significantly less crashes? Sign me up!


That would be an ideal outcome, but plenty of people would need to be economically or legally forced. Cross-reference to vaccines.


Freedom clearly does not work in a self-centered society. We wouldn't need such drastic actions if enough people voluntarily did the right things. But not enough do, so here we are.

I'm really beginning to get sick of seeing people use the word "freedom" when they clearly mean "no personal responsibility while living in a society among other people".


You must be reading different news. Governments (especially the US) can't even get people to stay at home for a while to avoid a deadly disease.

It's borderline impossible that they will "legislate away people's freedom to drive" anytime soon.

Even if a startup appeared tomorrow that could unequivocally show that they have a perfect self-driving car that runs on fairy dust and cleans up cities as it goes, people would still demand their right to drive ICEs.


At least here in Germany, even "let's maybe have a speed limit on all the Autobahns" is highly contested. (And one half-joking suggestion has been to introduce a speed limit only for ICE cars as a motivation to go electric)


It's funny how Americans seem to have accepted the idea that corporations control their political life


I wouldn't say agreed with, but mostly accepted a reality they have no power to change.


Counterpoint - nobody actually cares about traffic fatalities. Nearly 40,000 deaths a year in the US, and the majority of people get in their cars every day without ever thinking about this risk and go about their lives (or to put that another way, the risks are already so low as to be negligible to most people, and anything else within the ballpark of negligible is still negligible). Normalcy bias is incredibly strong and as soon as self-driving cars are "normal" people will get on board without thinking twice. Tesla is slowly acclimating people to self-driving, basically everyone is familiar with the idea at this point, and as soon as it's available and someone tells you it's "just as safe as driving yourself", most people will just go with it. Especially given how big the upside is - you don't have to deal with the stress of driving anymore, you can just relax in your car. Or in terms of getting a ride, maybe it's 1/4 the price of a taxi driven by a human. Sounds good, people will roll with it.

Of course the more it starts taking off, there will always be a vocal subset of the population that is strongly opposed to it, just like there are vocal anti-vaxxer groups and there were anti-seatbelt protests back in the 80's. But I can't imagine the naysayers having a very big impact on the progression of the technology, the upsides are just too enormous.


I'm a bit surprised by how negative a view HN has towards human adoption of technology.

What technology is perfect? What code is perfect and doesn't have bugs in it? And yet we adopt automated systems anyway. Yes, sometimes it is painful and an entire airline grounds all planes for half a day... But that doesn't stop the unending march towards efficiency and technological progress.


I think the point is that the general public is fickle and their trust in self driving cars is tentative. If the public loses confidence in the technology it will make it very difficult for them to roll it out. So this isn't really about what HN thinks, it's about what HN predicts the general public will think.


> Self-driving cars don't just have to be better than human drivers. They have to be as close to perfect as is possible, because that's what people will expect.

I disagree, though, because people expect a lot of things, and alternative outcomes happen when there are incentives (current cars, airplanes, and elevators come to mind). Waymo seems to be aware of this, going by this recent video. [1]

1. https://youtu.be/yjztvddhZmI


This is arguing that because humans are stupid and biased, they will believe they are better drivers despite all the evidence to the contrary, and therefore, a solution needs to be close to perfect so that humans stop being fearful.

We have seen this before with autopilots in aircraft and ships, elevator operators, and so on.

All it needs is time and adoption, not perfection, and as adoption increases, the roads get safer and safer, bringing us closer to the ideal, and as time increases, the closer we get to adoption.


The world doesn't work in idealized ways. Yes, perhaps ideally self-driving cars would need to be nearly perfect in order to win wide public acceptance. At the same time there was an article on the front page today investigating literal gulag labor camps in the USA, and I doubt those are popular or going away any time soon. Once this is working technology that can turn a profit, it will depend on who stands to gain, how deep their pockets are, who gets bribed, which talking points get pushed, and which votes get bought. Whether the public ends up particularly happy about the outcome is at best, secondary, and at worst, irrelevant. Don't think I've run into too many people happy about our healthcare system, yet that remains stubbornly broken. No one I've met really wanted to lose our lower-middle class to China, yet there it went.


Well a large difference is that consumer goods companies like Tesla are reliant on their public image for sales.


People don't usually pay too much attention to what the government relations department has been saying to the NHTSA. It doesn't have the be the same thing Elon is tweeting.


I was a doubter for a while.

Watching the Waymo video just changed my viewpoint.

They have a "Pull Over" lever in the back seat.

Would I trust a self-driving vehicle without Lidar--probably not.

Would I use a self-driving vehicle commuting, and around the city--yes. Two driving chores I hate. I hate them to the point that the philosophical argument over dying by my own hands, or a computer's, is put in the trunk.

On a personal note, the thought of a computer driving me off a cliff on the way to Stinson Beach is not something I would chance, even if computers are statistically better drivers than most humans.

I can't imagine tumbling down a cliff, and thinking if only I drove today.

I still foresee most trucking jobs, and most driving jobs, completely gone in a few years.

Yes--it's time for a Basic Income.


I agree that these incidents are concerning, but you mention Tesla's crashes, and yet people are still buying the cars? I think you are underestimating people's laziness.


Even if public confidence was low, so what? People will still buy self-driving cars and fall asleep in them. Laziness triumphs over safety.


People get on passenger planes every day and there are loads of instances of those crashing for extremely unorthodox reasons.


What's the biggest technical challenge that they have right now? And how much time and effort will it take to overcome?

I feel like there is a lot of disconnect between the different media pieces and opinions I come across specifically when it comes to Waymo's self-driving - from "we're already there" to "it won't happen in the next 10 years".


If Waymo has a 10-year lead, the people who get paid by the word to write about the horse race don't get paid. That's why you'll often see articles on self-driving cars that don't even mention Waymo.


Last week Waymo said they are driving 100,000 miles per week in San Francisco. That figure is just bonkers. SF MTA only operates about 450,000 miles per week, and that was before COVID-19 shut them down. When one is on the streets of San Francisco does a Waymo Jaguar just drive by every couple of minutes? I honestly haven't been over there in a few months.


Long story short, yes, you do indeed see the white Jaguar crossover with the spinning lidar sensors every few minutes while you are out and about. Always a different person sitting in the driver's seat; yes, I've checked, because I didn't believe the number of Waymo cars at first either.


Anecdotally I've seen many more of them in Chinatown in recent weeks, it's not particularly rare to see multiple Waymo Jaguars at the same intersection waiting for the lights to turn green.

They seem to have taken over a previously public parking lot at Pacific and Sansome in the city. There are often 20-30 of these cars parked up there.

I've never seen one without a driver, but occasionally when I look at the steering wheel it is turning independently and just being monitored by the driver.


> When one is on the streets of San Francisco does a Waymo Jaguar just drive by every couple of minutes?

I live on Dolores Park. They drive by multiple times an hour.


They are everywhere. I'm fairly certain that I encounter them more often than SFMTA vehicles.

My car was broken into over the weekend and I couldn't help but think about whether a Waymo captured the vandalism... Not like it would help, but like all mass data collection campaigns, the potential implications are strange.


I see them in my neighborhood at least twice a day. They don't bother me, to be honest. Just like any other car.


They’re also in my neighborhood - I generally see one every time I do an errand. They tend to drive 5-10 mph under the rest of traffic which can be frustrating. They also tend to be slow when making lane changes and turns to the point where it holds up other vehicles.


5-10 mph under, within city limits, on streets where things pop out at you all the time, sounds really nice actually. Things popping out is an ever-present possibility that drivers routinely ignore.


I'm in glen park neighborhood of sf, it's the suburbs basically. I see them anytime I walk outside, but I don't think I keep track of which company I see driving by. It could be waymo, cruise, etc.


I saw a few of them when I went to visit my friend in SF a few weeks ago.


They’re all over Potrero Hill. I see at least one daily.


I wonder if the cars will always have a human chaperone. Not as a safety driver, but to prevent riders from trashing the car.


They don't in Phoenix; the article links https://www.youtube.com/watch?v=AdrV9wqXyH0 as an example.

(Disclosure: I work for Google, speaking only for myself.)


It's interesting the number of comments I've seen since it launched many months ago that completely ignore that this has been deployed somewhere without safety drivers. However unimpressive you find the geofence/HD maps/lidar etc., you'd think more people would know about it


Yep, there are multiple even on this thread.


Thanks! I'd love to see their data on rider behavior. AIUI, they have hand-selected a group of riders into their pilot program there. I suspect those people probably behave better than your average taxi passengers. Either way, fascinating to see.


I think initially they opened it up to handpicked riders that were willing to sign NDAs, but now anyone can sign up in the Waymo app


This problem has been solved when it comes to hotel rooms, or rental cars. What would be fundamentally different when it comes to autonomous cars?


It's not exactly solved - your examples are close but not equivalent.

Hotels have many people who work there and cleaners who enter the room daily (no Hendrix hotel a la Altered Carbon yet). Rental cars are secured with your credit card so there's a massive disincentive to trash the vehicle.

Perhaps pervasive recording of the vehicle interior would suffice.


>Rental cars are secured with your credit card so there's a massive disincentive to trash the vehicle.

Yeah, I was thinking along those lines. Trash the vehicle, get a cleaning fee charged to your card.


I think they should record passengers boarding and unboarding, with motorized camera shutters that close while in transit for privacy.


If a human has to clean the car between rides that seems like a pretty big problem


It's also been solved (or isn't a problem) with car-share apps like Car2Go or Zipcar. Those cars don't get cleaned between rides, but the system asks how clean the car is when you start a trip.


There are plenty of rail-based transports that have no driver (Vancouver's SkyTrain is the largest, iirc). They seem to get by without being ripped apart by passengers.


The other passengers serve as the social discouragement in that instance. Most human-driven rail transit isn't monitored by rail employees.


I imagine truly autonomous rides without a specialist on board will have plenty of cabin cameras with a clear view to identify the passengers. With consequences in mind, does that work similarly to the social discouragement of other passengers?


What about elevators? Sure it's sometimes shared, but you can also ride alone.


I'm not saying I agree it will be a big problem - but graffiti, littering, urination definitely happen in public lifts.

I imagine though that it'll just be something you can report when it comes along, then it gets sent off for cleaning, they pull up its recent rides, and send you a different car.


If you can teach a car to drive, you can teach a car to detect when it's being trashed.


Unless it comes with the ability to eject participants, how would that help?


You automatically charge $500 to the passenger's credit card, and the car self-drives back to the depot to be cleaned.


The cost to clean (and repair damage) and opportunity cost of not earning money in the meantime would be far more than $500, and far more than how much most people could afford to pay.


> Unless it comes with the ability to eject participants, how would that help?

System locks all doors and calls police?


I highly doubt police would care. People trash hotel rooms all the time, and they might escort people out of the hotel, but recouping any costs is a civil matter for the business. And good luck breaking even via that avenue.


And if it turns out the trashing of said car was an in-cabin fire? You just locked them in.


You wouldn't achieve any scaling benefits there. But also - trashing an autonomous car (aka a car full of cameras and sensors, which you charged to your credit card) seems like one of those self-limiting problems as the people who do it go bankrupt or to jail.


There's a neverending stream of way-too-drunk people who need a taxi (or at least there were, in the beforetimes). I don't see that happening.

Also, I don't particularly want to ride in a car with a bunch of sensors and cameras monitoring my every move.


Agreed on both points. However... while I don't want to ride in a car full of sensors, I think the popularity of CCTV and Facebook demonstrates that a lot of other people are much more OK with that. Just look at buses, trains, and even gas stations -- I'd never want to have ads thrown in my face the way all of those systems push them on you, but a lot of people just seem to be... OK with it.


Aside from vandalizing the gas pump TV, what do you suggest doing to say that I'm not okay with that? Whining about it on Twitter seems equally ineffective.


A self-driving / remotely piloted vehicle doesn't need a functional interior, which means the interior can basically be made fluid-tight and rapidly replaceable.

The current situation with taxis is that you enter before confirming your identity via payment, and the problem is put onto the taxi driver (an individual) when something happens.

The self-driving corporation's situation is distinctly different: you trash the car, the issue is forwarded to corporate debt recovery, who then work a 9-to-5 slowly pushing it through the relevant channels. Meanwhile, the car is returned to base, where maintenance rips out the absorbent materials and power-washes the interior.

Trashing it becomes a line-item cost to a very large organization, not a problem which "isn't worth it" for an individual operator.


> Also, I don't particularly want to ride in a car with a bunch of sensors and cameras monitoring my every move.

Many (most?) taxis already have both dashcams and cabin cams. This has already been normalized.


This is both incredibly exciting and a little terrifying. As a pedestrian, I'm still a little weary around these cars.


Disclosure: I work at Waymo.

At least the current fleets of AVs (Waymo, Cruise, etc.) are "obviously" potentially autonomous. I'm honestly more cautious now as a pedestrian when I see a Tesla coming after the FSD videos. I wish I could know "Is that Tesla owner using the FSD mode?"...


> I wish I could know "Is that Tesla owner using the FSD mode?"...

I wish I could know a lot more about the car and the driver in it too, such as if the car has pedestrian airbags[0] or if the driver is having a heated argument with their spouse. It's all risk management and ultimately the person behind the wheel is responsible for the exoskeleton on wheels they pilot, as the existence of cars at all is a net negative for pedestrian safety.

0: https://www-esv.nhtsa.dot.gov/Proceedings/23/files/23ESV-000...


> "obviously" potentially

I count three distinct hedges there :)

It's cool tho, congrats on the expansion!


I'm really bad at hedging and parenthetical remarks! (I guess I should add "quotes", too!)


What is it about FSD that makes you fear for your safety? It seems a lot more capable than Waymo's solution.

Edit: can’t reply because “rate limiting”

That video is almost half a year old. Here’s a video from the same channel that’s 6 days old rather than 6 months old. Tesla pushes updates very frequently…

https://youtu.be/y85oGY02gtg

Here’s a video where the title says it all.

https://youtu.be/tdHlKhKKOgQ

I read one of the latest Waymo blog posts and was surprised to see how directly you guys reference Tesla. It felt like the whole thing was written for people at Tesla… You differ in choice of hardware, but could you tell me more about how you differ on software? Is there any meaningful difference?


In SF, Waymo/Cruise/etc have safety drivers behind the wheel who are being paid to ensure the car doesn't crash. Tesla drivers, on the other hand, are often misled about the capabilities of "FSD" and may not be paying as close attention.


Nah, actually not. It's labeled as a beta; it is always explicit about the fact that it must be watched at all times and that your hands have to remain on the wheel. There's no ambiguity about it.


Maybe I should have linked to [1] as one of the videos from a couple of guys testing it out in Oakland.

[1] https://m.youtube.com/watch?v=antLneVlxcs&feature=youtu.be


Language tangent: I have seen this a lot recently from many different people, and I don't know if it's caused by typing on a phone or what, but I think you mean "wary" (cautious), not "weary" (tired).


Indeed I do! I am not tired of self driving cars at all and in fact look forward to more of them. Thanks for pointing out.


Malcolm Gladwell did a podcast[1] on Waymo and their self-driving cars.

He speculates that something very different will happen. He thinks that self-driving cars that follow the laws and unerringly yield to pedestrians will transfer ownership of the streets from cars to pedestrians. Perhaps travel through cities in self-driving cars will be tedious and slow because pedestrians will jaywalk fearlessly and the cars will always yield. It's an outcome I never really considered before.

[1]: https://www.pushkin.fm/episode/i-love-you-waymo/


This is already mostly the case in San Francisco. You always have to assume someone is going to jump out from behind a parked car to cross the street in SF.


As a pedestrian, I’m terrified of all cars.


The bar to being better than human drivers is not that high. It is a hard problem, but there is plenty of room to be better than humans while still being dangerous to pedestrians.


As a cyclist in SF, I see the Waymo vehicles every day in my neighborhood. They ALWAYS see me coming and slow down or stop. I've had a few close calls with regular drivers not paying attention and missing stop signs so I'm looking forward to Waymo deploying more vehicles.


I'm curious how Waymo deals with the human aspects of cab hailing. With a human driver, you can say "hey looks like traffic is bad between where the car currently is and the pick up spot, can I meet you at X instead and save us both 10 minutes?"

To "good morning dear, do you think you could help me load my bag in the trunk"

All the way up to completely degenerate scenarios like "my friend was drunk and tried to take over command of the car to see if it would work" or "I had to call customer support because when my ride arrived to pick me up, there was a homeless guy hogging the backseat and cursing at me".

Surely there's more to commercializing the tech than just not killing pedestrians. The absence of a driver should create some new weird dynamics.

IIRC Uber et al spend a considerable amount on customer support. I wonder if Waymo can break free from Google's bad reputation on that front.


There are different problems with no driver, not necessarily worse problems. For example, some problems with drivers that don't exist, or exist to a lesser extent, with no driver:

1. How much should I tip?

2. The driver tried to rape me.

3. The driver seems drunk/high, and is driving horribly.

4. The driver didn't pick me up because, pick your reason: I'm black, had a bunch of kids, etc...

5. My friend was drunk and tried to pick a fight with the driver.


Yeah, definitely goes both ways, and I can foresee many anecdotes about how not having to deal with a driver is nicer in some way or another. Hence why I said I'm curious about human-factor issues. I don't believe there's any precedent anywhere for what to expect once the tech rolls out at scale, so it's going to be interesting to see what kinds of operational issues they end up running into.


For the first one, I'm guessing you would just change the pickup location, the same way you would tell the driver to meet you somewhere else.

If there's no driver, then there's nobody to ask to help with bags.

For various degenerate situations I'm not sure being driverless really changes the situation all that much. A drunk passenger can grab the wheel of a manned vehicle as well. With a self-driving car you might even be able to avoid that situation entirely by preventing the passengers from using any controls without authorization. They can grab the physical steering wheel but can't turn it until they punch in an authorization code (which they don't have). Likewise you prevent unauthorized passengers from entering the car by keeping the doors locked until the authorized passenger gets there. And if someone does manage to sneak in you just shut down the vehicle and call the cops (just like if someone jumps in a cab without permission).

But humans are great at flummoxing countermeasures like that so it will be an interesting thing to watch for sure.


> For the first one, I'm guessing you would just change the pickup location, the same way you would tell the driver to meet you somewhere else.

To add on, maybe even Waymo tells/asks you if you'd like to change pickup locations to save X minutes off your trip.

Feels somewhat in line with how Google Maps will ask you if you want to save X minutes by taking a new route that was previously not as good as when you started using directions.


Why does Waymo use a niche British luxury car for their platform? It must be very expensive, and what are the benefits?


I'm guessing they want to start with a premium feel to give people good first impressions. That's why their other car is a $60k Pacifica minivan and not a $25k Honda Civic.

I'm also guessing that they want an electric car. At the time they made the deal to purchase those cars (quite a while ago), the Jaguar i-Pace was one of the few premium electric cars available (if you exclude Teslas, which Waymo obviously wouldn't want, as Tesla is a competitor).

At this stage cost is not a big problem.


- They likely were able to get a commercial volume purchase discount

- if that's the car their devs are testing on, it's going to be the fleet car, at least at first

- Alphabet has $135B cash on hand[0]

0: https://abc.xyz/investor/static/pdf/2021Q2_alphabet_earnings...


Electric and manufactured by the same company that does upfitting for Waymo vehicles (and also a Waymo investor) – Magna International.


Is Jaguar still a niche brand?

Most taxis in Denmark and Germany are Mercedes-Benz so it's not uncommon for the taxi industry to use luxury brands.


Niche in that they only sell a hundred thousand or so a year, yes. If Waymo is planning to seriously scale up, they'll be using every Jaguar the factory makes!


Why would they need all Jaguars to scale up?


Why would you want to have a hodge-podge of random vehicle makes to service?


What if self-driving cars develop personalities? Like, let's say that to be able to drive you need x level of intelligence or whatever, and that just happens to be the level where consciousness emerges.

I'm sure I got all the science wrong but do we really want cars with souls? Do they become non-human humans? Do they get the right to vote? Are they considered slaves?

Even if it's just horse-level smarts, that's still enough to develop consciousness, maybe. Let's just say sure. But then, what kind of relationship do you have with your car?

Another thing is that, now that everyone's working from home, there's a lot less incentive to buy cars. Who are they going to sell these cars to? Maybe they'll all be self-driving taxis. And that's a whole can of proverbial worms on its own. Politically, self-driving taxis are dead ducks.


Given the number of people here saying self-driving cars require AGI (a.k.a. sand that thinks a.k.a. fairy magic), I would say we have definitely hit the bottom of the trough of disillusionment. But since Waymo is launching in a second city, this is perhaps the very start of the slope of enlightenment.


>start of the slope of enlightenment

that would be more like staircase of enlightenment. or up the down escalator to enlightenment


You cannot predict the future with patterns from the past, regardless of how many patterns you crunch - the world is always changing. ML can just never solve FSD; conceptual understanding and causality are needed to achieve human-level driving capability. We instantly classify things conceptually from our perception, instantly understand how those things may behave, and then learn the specifics of what a thing is doing, to instantly predict what might happen and how it is interacting with other causally connected things in the environment.

If ML could predict the future, there would be some very rich stock brokers right now.

Autonomous driving efforts need a major pivot to start working with concepts, causality, and the integrated space time model of the world we humans use.


I think you have some major misconceptions about how this stuff works. Most vision models output the current state of the world, to be integrated with other sensor data, like lidar, that is then used for planning. There are models that try to guess things like the crossing intent of pedestrians, but even those just output a confidence that the pedestrian will cross. I work for a different self-driving company, but this high-level stuff is pretty much the same everywhere.
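
A stripped-down sketch of the kind of interface I mean (names and numbers invented; not any company's actual code): perception emits estimated state plus confidences, planning consumes them, and nothing tries to "predict the future" end to end.

  from dataclasses import dataclass

  @dataclass
  class PedestrianTrack:
      position_m: tuple    # (x, y) in the vehicle frame, fused camera+lidar
      velocity_mps: tuple  # estimated from successive detections
      p_crossing: float    # model confidence that the pedestrian will cross

  def plan_speed(tracks, cruise_mps=11.0):
      # Toy planner: slow in proportion to the most worrying track.
      if not tracks:
          return cruise_mps
      worst = max(t.p_crossing for t in tracks)
      return cruise_mps * (1.0 - worst)

  tracks = [PedestrianTrack((12.0, 2.5), (-0.4, -1.1), p_crossing=0.7)]
  print(f"{plan_speed(tracks):.1f} m/s")  # 3.3 m/s: strong slowdown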


Exactly, that is how it works. Past patterns in the ML neural net, generated from massive data and training, try to figure out the label of things from various sensors, and then, given that label, try to figure out other things that might happen based on the label and the event stream. These patterns can recognize similar patterns with a certain probability, but they have no way to cope with brand-new patterns. Those brand-new patterns may only occur in 1 out of 100 experiences - say, 3 days a year when you're driving - but something fully novel will kill you if you are relying on FSD. Until these systems try to think like humans, they simply won't.


> Autonomous driving efforts need a major pivot to start working with concepts, causality, and the integrated space time model of the world we humans use.

Dear god, no. I don't know what you think AV companies are doing past the perception layer, but they would be pivoting away from controls and planning -- you know, stuff that actually works outside of neurips/iclr/&c fantasy land.


I can’t wait for truly autonomous cars. I live in the countryside and have to drive to collect and deliver kids, or abstain as a designated driver and so on. Self driving will change this.

The aspect I dislike is the move towards hiring cars on demand instead of owning them, though.


Hiring cars on demand makes sense in dense cities where most trips would not be by car even if you owned one (walk, bike, transit...). When most trips are by car anyway, you are better off owning your own car.


Actually, if the cost of hiring is less than insurance, maintenance, and fuel, I'm all for it: no need to stay sober or find parking. I don't know if that will happen in rural areas, though.


Are there any thought pieces on how cities can adapt roads to accommodate SAE level 4 cars, to facilitate rolling out a wider deployment more quickly? I'm thinking of something like cities could designate specific routes and lanes for L4 cars with designated drop-off locations, and only allow the service to operate during clement weather. It seems it would be beneficial even today to have fully automated routes to and from specific spots, for example an airport to a downtown location, to help many cities reduce transportation costs and safety issues compared with Uber/Lyft.


> I'm thinking of something like cities could designate specific routes and lanes for L4 cars with designated drop-off locations

I always find it interesting that the further down the road of self-driving we go, the more it seems like a techy version of a known but more low-tech paradigm. Taken without the self-driving part, this just sounds like a bus lane with bus stops. In fact, driving a car would be a less efficient mode of transportation, considering that cars usually carry only one person at a time.

I used to be very pro self-driving cars, but the more I explored urbanism and transportation, the more it became apparent that self-driving is over-optimization of an already bad paradigm: car-centric development. You could cut out the middleman of needing a self-driving car if you didn't live far from the things you needed, or if public transit were efficient and reliable. That seems way easier than trying to figure out how to make cars drive themselves!


The optimistic vision of self-driving vehicles is as a publicly run transit system, where you have a fleet of automated vehicles of varying sizes able to dynamically, collectively route based on demand. A hyper efficient, fully electric bus system.
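In the simplest case, "dynamically route based on demand" could be little more than a greedy nearest-vehicle dispatcher. A toy sketch, with everything in it made up for illustration:

  def distance(a, b):
      # Euclidean stand-in; a real dispatcher would use road-network travel time.
      return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

  def dispatch(requests, fleet):
      # Greedy baseline: send the nearest idle vehicle to each request.
      # A real system would batch requests, pool riders into larger
      # vehicles, and rebalance the fleet ahead of predicted demand.
      idle, assignments = set(fleet), {}
      for pickup in requests:
          if not idle:
              break
          best = min(idle, key=lambda v: distance(v, pickup))
          assignments[pickup] = best
          idle.remove(best)
      return assignments

  print(dispatch([(0, 0), (5, 5)], [(1, 1), (9, 9)]))
  # -> {(0, 0): (1, 1), (5, 5): (9, 9)}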

That would be a pretty exciting development even in, eg, European cities with decent public transit.


> A hyper efficient, fully electric bus system.

Cars aren't efficient. The zoning should be such that things are close enough. Then with proper cycling infrastructure you can just go by bike. As a result you'll need way less road. As cycling infrastructure is cheap and road maintenance is costly, it'll reduce expenses for e.g. cities.

> European cities with decent public transit.

I'd rather have the mix of public transport, bicycles and roads. Combined with proper zoning. Loads of electric cars just increases demand for a highly inefficient transportation method.


I guess all that driving around the avenues in the middle of the night paid off.


Yes, and the marketing copy that says "grab a bite in Sunset or visit Golden Gate Park" kind of implies the Avenues will be the service area. I don't think it's going to be downtown and SoMa, yet.


Interesting timing given this article which was just posted a day or so ago on HN

https://www.autoblog.com/2021/08/22/waymo-is-99-of-the-way-t...


Would getting hit by one of these things basically mean you won the lottery?


I'm surprised the numbers for Phoenix are so low (tens of thousands of rides). I'm not sure if that fleet is just very small or if my sense of how many rides per day a major metro can generate is off.


They operate in only a small area of Phoenix metro – Chandler and some parts of Tempe. That's why the numbers are small.

My guess is Chandler is purely a testing ground for them to learn to operate a robotaxi service. Things like remote assistance, customer support, emergency protocols, fleet maintenance etc.


The drivable area isn't huge[0] so it seems like it's not a choice in many situations, especially not for commuting from a further-out home.

0: https://i.redd.it/4rsg9pui55531.jpg


Does the Waymo One app require a Google account? If Google arbitrarily freezes my Google account, will I be unable to use Waymo and unable to talk to any human about it?


I'll know that we've achieved full self-driving when it can brave the streets of a South American city :)



This is encouraging. But it's only for "Trusted Testers".

Do you have to have a Google account?


This is a clever reuse of a little-known thing Google used to have called "Trusted Testers" for friends and family of Google employees to test new features. However, I don't think it requires a Google account. The TT program would just NDA you heavily and require an invite from a Google employee.

This is likely to be more open, but again, requiring NDA.


The entire leadership team just quit and then they announce this?

That makes me worried this was rushed


What do you mean? It was just 1 person, albeit the CEO, who left


Why did they pick Jaguar for the platform? That seems like an obscure choice.


Most likely Jaguar is involved financially and it's not a straight purchase. From Wired:

"Jaguar, for its part, gets a large chunk of guaranteed sales and a chance to look like it’s at the forefront of this emerging technology."

- Waymo Expands Its Robo-Fleet with Electric Jaguar SUVs, https://www.wired.com/story/waymo-buys-jaguar-suvs/


Probably got them for free/really cheap. Waymo has regularly looked for partnership deals with car manufacturers, driven by the push to make Waymo look good on the budget sheets.

Also, companies pushing their own self-driving or ADAS solutions may be hesitant to partner with Waymo, excluding some of the large players.


Well, as long as the tech isn't ready to sell to consumers, 'partnering' with Waymo doesn't have all that much upside for automakers. It's not like Waymo is handing over the secret sauce or promising them exclusivity. I also suspect over the past decade Waymo has found fewer and fewer automakers fighting to make a deal with them.

Jaguar doesn't have a high-profile competing autonomy effort of its own that would create a conflict of interest and/or invite spying.

The I-Pace is electric, which makes integration easier and won't hurt for PR purposes.

And it's luxurious enough that no matter who you're showing the tech off to, it won't be a big step down from their personal car.


Does anyone know what city official is responsible for approving this?


Can't wait for these to be launched in my city! Love from Dhaka.


Considering the density of Dhaka, it would be much better served with public transportation/metro.

While the US has a lot of urban sprawl, dense Asian cities have the option to invest in and reap the benefits of public transportation instead. E.g., I'm incredibly excited about all the metro lines in India, HSR in China, etc.


How do I get an invite!? Do you just have to know someone at Waymo?


Why not have a remote driver? Like call centers but for driving?


Are there any plans for a Mountain View trial? I live there.


Does anyone know what the cost of Waymo rides is?


> All rides in the program will have an autonomous specialist on board for now

If you drive a Tesla w/ FSD, is it chic to refer to yourself as an Autonomous Specialist?


Perhaps Crash Test Dummy is the preferred term?


'headless driving enthusiast' https://www.youtube.com/watch?v=9BgV-YnHZeE


Does Waymo plan to chisel its drivers like Uber and Lyft did with Prop 22?


Cue the haters who said Waymo is a money dump


Expanding to new markets and being a bottomless money dump are not mutually exclusive?


Waymo has raised $5.5B so far. How much does it cost to add a market, and how long will it take to recoup those costs?

How much are Waymo vehicle operational costs vs rider revenue? How long do Waymo vehicles last?

Even with this announcement, we do not know if Waymo has a clear path to profitability.


If Google can't do it with its massive resources and sensors, don't ever expect a Tesla to perform better than this.

It's time for Elon to pay the piper and refund all users who were hoodwinked into purchasing FSD, especially people whose cars are nearing end of life.


While I do get your point and have my own doubts, I fail to see how this is a fair argument:

> If Google can't do it with its massive resources

The companies behind the SLS had and have an enormous amount of resources. Yet, they have failed to even launch it.

Arianespace has a huge amount of resources and experience too. Yet, when asked, they said landing a rocket couldn’t be done. Then, back-pedalling, that it was pointless and would never make any economic sense.

Yet, SpaceX has developed three generations of rockets (Falcon 1, 9 and Heavy), builds them, launches them, iterates on them, lands them, reuses them, and is now working on a fourth rocket (Super Heavy).

They have also created 4 different rocket engines and are working on two others.

Honestly, where are Boeing and Arianespace?

Another example? Microsoft, their incredible wealth of resources and superior position falling flat on their face with smartphones.

Funnily enough, Google getting the greatest share of the global smartphone OS market, despite having had no real experience with operating systems, consumer hardware, or licensing operating systems before.

Kodak, which had some of the first patents on digital camera sensors, going pretty much extinct despite their massive resources and market position.

Xerox, failing to ever enter the personal computer market despite designing what everyone else would copy for the next 10 or 20 years.

Sure, resources and experience seem to help, a lot, but they certainly don't reliably predict any outcome.


The problem with your comparison is that the company that is "the SpaceX of FSD" is Waymo. And even they cannot get FSD right.


Who would be Boeing then?

Because almost no company has more serious experience with self-driving cars, let alone FSD.


Isn't comma.ai the SpaceX of fsd?


> Isn't comma.ai the SpaceX of fsd?

Astra, maybe?

Difficult analogy because the barrier to entry's a lot lower for car mods vs orbital rocketry kits :)


> Yet, SpaceX has developed three generations of rockets (Falcon 1, 9 and Heavy), builds them, launches them, iterates on them, lands them, reuses them, and is now working on a fourth rocket (Super Heavy).

What does SpaceX have to do with Tesla? They're owned by a billionaire with his attention split, but otherwise are structured differently, are in different industries, and most likely have different cultures.

The cult of worshipping a single person has got to stop.


The parent's point is less about Elon and more about the fact that all the resources in the world mean nothing if you do not have the right team, culture, and incentives to pull it off.


You will notice that I expand my point way beyond SpaceX, and even start by stating

> While I do get your point and have my own doubts […]


Google took a completely different approach and relies heavily on pre-mapped environments. Tesla has several orders of magnitude more vehicles, more data, and a more diverse set of data given their cars are in many different countries. It's not implausible to think that Tesla could arrive at FSD faster than Google.


The idea that data will somehow magically translate into FSD is laughable. It relies on the delusion that we just need to train neural networks with the proper data and then we can all go to sleep.

There are many issues with Tesla's autopilot that are completely unrelated to the amount of data they have; they will not be fixed with more data, and more data will not make them easier to fix. At this point, I would argue that the discussion about who owns more millions of miles of data is completely irrelevant.


It isn't laughable at all. The real problem of FSD is the ridiculously long tail of scenarios in the real world that you simply cannot account for or manage well. At this point, Tesla has a huge upper hand because every vehicle in their fleet can constantly collect and provide new semi-labelled training data every time there is a user disengagement or an unforeseen action taken by the driver.

Tesla has built out amazing infrastructure to capture extensive amounts of "hard" examples from their fleet, turn them into labeled training data very efficiently, and then use simulations to further broaden the distribution of such quirky long-tail events in their training set. In the absence of AGI, this is a very effective "brute-force" approach, and they have a huge upper hand over every other player in this space.
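A minimal sketch of what one iteration of that kind of fleet-driven loop might look like; the event fields and function names are my own invention, not Tesla's actual pipeline:

  def data_engine_step(fleet_events, model, labeler, simulator):
      # 1. Mine hard examples: clips where the driver disengaged or took
      #    an action the model didn't predict.
      hard_clips = [e["clip"] for e in fleet_events if e["disengaged"]]
      # 2. Turn them into labeled training data (semi-automatic labeling,
      #    with human review for the tricky ones).
      labeled = [labeler(clip) for clip in hard_clips]
      # 3. Broaden the long tail: simulate perturbed variants of each
      #    case (different weather, speeds, occlusions).
      synthetic = [simulator(example) for example in labeled]
      # 4. Retrain on the enriched set and redeploy to the fleet.
      model.train(labeled + synthetic)
      return model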

I say all of this even though I am very skeptical that anyone will achieve L5 self-driving given where the state of things is today. But Karpathy and team are very pragmatic and are making lots of good decisions, coupled with excellent engineering and infrastructure development.


I'd contend Waymo has more data, but data isn't really the hurdle. It's the AI/ML and simulation needed to get the final 1% right.

Good thread on the ground Tesla would need to make up to catch the AV leaders here: https://twitter.com/Christiano92/status/1428671634131628033


I can't see how you can contend Waymo has more data; they had around 600 cars on the road last year. Sure, they have some more sensors, but Waymo only operates in select cities on select routes, so the diversity of the data is very limited. They won't have seen scenarios like a snowstorm, for example, since they've only been testing in CA and AZ. Meanwhile, Tesla has a robust framework for collecting clips of specific scenarios as needed by shipping a small model to roughly match the scenario.

Simulation is important, for sure, and Tesla has talked about that as much. But you can only simulate the scenarios you've thought of; it can't replace real-world data, which, again, Tesla has several orders of magnitude more of than Waymo.

As for the comments about Dojo, I think they're unfounded. Tesla has already proven that they can create their own custom chip and have shipped it in hundreds of thousands of cars. I don't see any reason to think they can't get the system running in the next year or so. But even if they can't do that in time, they still have NVIDIA to fall back on. Elon even said in the talk that it might not work out, and if that's the case they can always buy a solution.


I can't really see how Tesla would have more data than Google, who have been mapping the entire world for decades.


Similarly, Waymo stopped the bulk of their live testing and switched to simulation because they had so much data.


> pre-mapped environments

this raises the question: what happens when those environments change?


Tesla relies on geographic whitelists to suppress problematic signals from their radar. Not what I'd want from an FSD machine.


> It's not implausible to think that Tesla could arrive at FSD faster than Google.

It absolutely is implausible.


As I understand it, they just started selling the "fsd capable" stuff a few years ago. Are any of those cars nearing end of life? That would be an even greater cause for concern IMO.


It's not "end of life" vehicles that should be concerned, but instead "end of lease" vehicles.

People who bought a 3-year or 5-year lease from 2016 with the $5k full self-driving package probably should feel pissed about wasting money.


To be fair, a lot of people would willingly pay $5k for the features that are available now, especially if they spend a lot of time on long highway trips.


The price keeps going up, though: if you bought it right now, it wouldn't be $5K, it'd be $10K. It's possible Tesla's driver assistance systems are better than anything else on the market in their current incarnation, but that's getting to be a pretty big ask.


Also, I don't think we actually know what the end-of-life of Tesla vehicles is. They've not been mass produced for long enough.


I'm guessing end-of-life for a Tesla is whenever they decide they will not replace your battery.


I'd be surprised. I've heard (and cannot immediately substantiate) that they are generally very reliable vehicles. I guess part of it is that they just don't have as many mechanical points of failure.


>> who were hoodwinked into purchasing FSD.

Fools and their money. Like most every expensive car, Teslas are luxury vehicles and fashion statements. People buy them because they are cool. That product was delivered. As for future features, nobody should believe Tesla's advertising any more than we do Microsoft's advertising about how the next version of Windows will solve all our computing problems. The hamburger never looks as good as it does in the commercial. Demanding such things post-purchase, after the test drive demonstrates the deficiencies of the real product, is simply buyer's remorse.


I'm pretty sure the USA has consumer protection laws against deceptive advertising, so a company can't just say whatever they want when selling a product in order to convince someone to buy it. Maybe Tesla protected themselves with clauses in the purchase contracts that absolves them from having to pay the purchaser back in the event of their own failure to deliver, but I think a FSD purchaser might have a case just based on Elon's verbal promises that have failed to come true over and over again.


I believe the person you're responding to was referring specifically to the add-on pre-order for full self-driving, not the entire car.


As was I. Consumers had an opportunity to test drive the wanted feature and find it lacking, or read reviews of the same. Tesla hid nothing. If customers want to spend money on hope for the future then that is their right. They paid for hope and that is what they got.


> Tesla hid nothing.

They deceptively claimed that for additional money the customers' cars would be enhanced in the future with FSD. They then made that claim again each time they failed to deliver. They continue to make this claim and people are trusting it.

A lie hides the truth.


> Tesla hid nothing.

But they lied a lot though, right? "Cross-country autonomous summon in 2017," "Coast-to-coast autonomous drive in 2018," "Tesla driverless taxis in 2019" are a few examples. One may hold the view that Elon Musk can make grandiose statements about solving AGI and nuclear fusion in an hour and make futuristic statements about that, but when does it go from projection to lying?


They haven't even released the feature yet.


That hamburger sure TASTES better than it does in the commercial.


But then, the hamburgers used to make food ads are plastic and most definitely fake, since real food goes soggy and off under the lamps used in studios.


While it is disappointing that true FSD may never materialize on cars whose owners paid for it and which are nearing EOL, FSD Beta 9 looks pretty impressive, and in some videos I've seen it make some very difficult driving decisions.


Actually, FSD works better than Waymo. It hasn't fulfilled the promise of robotaxis, but it's the best and most advanced self-driving system in the world, not to mention the most widely deployed, so accusations of blatant dishonesty don't really hold water.

https://youtu.be/SuZYACWhYSI


"If Google can't do it with its massive resources and sensors don't ever expect a Tesla to perform better than this."

10 years ago this might have read:

"If General Motors can't do it with its massive resources and sensors don't ever expect a Tesla to perform better than this."


Of course this ignores that third-party analysts consider Super Cruise safer and better than Tesla FSD on the same tasks it's allowed to perform. The difference is that GM is less willing to enable it for dangerous beta driving, unlike Tesla, which clearly doesn't care about consistency or safety.


I meant that 10 years ago people might have assumed that if GM can't make the electric car work, then Tesla can't either. And yet here we are.


The EV1 was a pretty good car for its time. It had a lot of fans. Battery tech got better. An electric car is theoretically much simpler than an ICE car.


And when you look into those reports, the only reason they rank Super Cruise higher is that it can only be used on a very limited number of roads, and it uses a camera to detect driver awareness.

FSD uses a camera and works on all roads, and it is dramatically better than Super Cruise.

Source: test drove both; bought the Tesla because of how impressed I was with Super Cruise.


Don't know why you're downvoted; this is 100% accurate.


General Motors is offering unpaid but fully driverless rides in California now. They, like Waymo, are ahead of Tesla, and likely to remain there.

Tesla’s unique strength, when it comes to autonomy as opposed to battery-electric vehicles, is selling hype.


Google is struggling to get hardware integrated with their self driving solution. Retrofitting cars with self-driving sensors is not easy, and Google does not have the scale yet to convince OEM partners to retool assembly lines for the Waymo fleet.


Source for them struggling? All the hardware upfitting is done by Magna, one of the biggest names in auto manufacturing, who are also a Waymo investor. There’s a reason they chose the Jaguar I-Pace which is also manufactured by Magna.


FSD v2 hw with DOJO - we definitely got it right this time.


If they don't have FSD, then they are just an electric car company. If they are just a car company, their stock should be at 50 dollars right now. Hence why he'd never admit it and will instead double down with his robot that will never materialize.


Nonsense. Go look at some of the models by different analysts. Even if you include no software revenue at all, you can justify a much higher stock price than you suggest. Some never include anything for FSD in the first place.

And even if they don't reach FSD, they could still generate a lot of software revenue from Autopilot features.

And beyond that, Tesla does more than final assembly of EVs and selling them to dealerships.


Their current assist packages are already incredible, even though they fall short of full self-driving.

EDIT: In case my point isn't clear: if your position is predicated on the idea that anything less than full self-driving is worthless garbage, you should know that is a pretty hot take. If you are instead having an emotional reaction and feel deceived, I don't know what to tell you; you might have a very valid point, but that doesn't figure into evaluating whether a stock is under- or overpriced.


[flagged]


If buses were a better alternative, people would take the bus. Unfortunately MUNI does not seem willing or able to compete on convenience


I agree with smaller and more expensive. But less flexible? How is a fixed bus route that needs to run on a fixed schedule, reasonably invariant to current demand, more flexible than a car that can take you point-to-point on-demand?

I see this as the future of public transport, where you'd have a combo of smaller and bigger vehicles (including self-driving minibuses and buses) running autonomously, with options to pay more in order to walk less, but only running where people actually need them. Similar to Uber and Lyft Pool before the pandemic, where you could pay less to be picked up at a main street intersection instead of your front door.


Point-to-point goes where you want, when you want. Buses can be better for taking more people at once and are cheap for riders (if subsidized), but here in SF the service is so erratic that it is difficult to use for anything on a schedule; and with Covid-19, people want more space than mass transit provides.


Cities don't have the room for everyone to be driving a single occupancy vehicle point-to-point. There just isn't room.


elon in shambles


Waymo and Tesla have very different approaches to autonomy. I think both are valid avenues, but it's fascinating to watch them develop in parallel. You don't have to make it into a spiteful contest.


Definitely both valid business approaches, Tesla more interesting from a startup / leverage approach. As a consumer, I'm less excited about Tesla putting out beta software in situations where mistakes can be deadly https://www.nytimes.com/2021/07/05/business/tesla-autopilot-...


As someone who has seen and driven their "pro" pilot competitor, I am not bothered by that. The non-beta competitors are often worse and sometimes more dangerous (sigh).


I'm fine with Tesla putting out dangerous betas, as long as the betas are opt-in. Shipping beta software by default is a bad thing.


Most people in the Tesla camp (like George Hotz) believe Waymo is destined to fail.


I think Tesla is taking the correct approach, treating it as an AGI problem, and training their system to be antifragile.

I think Waymo will make money off of their system sooner.


Why is that? And what are the two approaches?


Waymo is trying to solve the environment. They're HD-mapping every centimeter of the areas they operate in, so that their vision systems can focus on deviations from that single source of truth. It's a reasonable approach, but it has some significant limitations, especially in terms of infrastructure.

Tesla, on the other hand, is trying to make their cars work anywhere. It's a larger problem space, but the end goal is a far more robust product, with much lower long-term infrastructure spending.
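Caricaturing the two philosophies in a few lines (purely illustrative; neither company's real pipeline looks like this):

  def perceive_with_map(detections, map_prior):
      # Map-heavy approach: the HD map supplies the static world as a
      # prior, so perception mostly has to flag deviations from it (a
      # parked truck, a fresh construction zone), not rebuild the scene.
      static_world = set(map_prior)
      deviations = set(detections) - static_world
      return static_world, deviations

  # Mapless approach: no prior at all, so lanes, signs, and agents must
  # all be inferred from the sensors alone, every frame, everywhere.

  print(perceive_with_map({"lane_line", "stop_sign", "parked_truck"},
                          {"lane_line", "stop_sign"}))
  # -> ({'lane_line', 'stop_sign'}, {'parked_truck'})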

I personally like the robust approach, but I think both approaches are long-term viable, albeit for different use-cases.


> but the end goal is a far more robust product

What about Tesla's approach makes you think it is inherently more robust?


Tesla does not want HD maps. They want to be able to place the car somewhere it's never seen and have it drive itself competently. That's by definition more robust than requiring HD maps to navigate anywhere.


What you're referring to is overall coverage, not robustness. In fact, placing your car somewhere it's never been before and operating it seems less robust than one that already has HD maps.


ro·bust | rōˈbəst, ˈrōˌbəst | adjective (robuster, robustest) • (of a process, system, organization, etc.) able to withstand or overcome adverse conditions: California's robust property market.


You think this is clever, but you actually come across as though you have no clue what you're talking about. We are operating under the same definition of robustness; I can't believe your first reaction is to act like I don't understand what robustness means.

You, on the other hand, have no explanation whatsoever as to why you think a non-HD-map system is more robust everywhere than an HD-map system. If you claim that HD maps are less robust by definition, you clearly do not understand deep learning or computer vision.


Thanks!


Lol. Out of all the vehicles they could have chosen. Why the Jaguar?


Probably cut a good deal, since Jaguar could use the marketing.


SF doesn't seem like the best place to pilot this from a risk management perspective (e.g., homeless people messing with the cars).


It says they will have a human onboard at all times.


I remember a bit from a video on self-driving cars a few years ago where someone basically said that self-driving cars would not work well in cities, because people will always want to mess with the cars just because they can.

Consider how in the early 2000s people would mess around with AIM's SmarterChild chatbot and give it outlandish scenarios just to see how it would react. I have to imagine you might see something similar here with these cars, but you're right that having a human onboard would probably curtail these interactions.


Is everyone forgetting that they have a ton of cameras, sensors, lidar, etc on their cars?

Messing with the cars means they have a full 3D video of you doing it.


You just need to wear a mask and basic clothing. Sensors totally useless.


A lot of people in SF have nothing to lose.


I'm happy for their efforts and hope my kids never have to drive unless they want to, but I still think we're 20+ years out from level 4+ autonomy. Maybe when Waymo tests in a city that has weather -- like anywhere in the Northeast -- I'll be more enthusiastic.


It is heartwarming to see that Google is doing its part to solve San Francisco's homelessness problem.



That announcement was from 2019. Honest question, how can I see the progress made from this investment? Did they build any, some or all of these homes yet? It does sound great though, I hope they did.


Skimmed, this is probably the San Jose page on it: https://www.sanjoseca.gov/your-government/departments-office...


Thank you for that. Looks like no ground has been broken and Google is still cutting through bureaucracy.


Whoosh.


Yes, haha. To be fair it's more of a government role and they do support the government with "2.1 billion in state income taxes" https://www.mercurynews.com/2019/04/15/which-bay-area-compan...


When it's cheaper and easier to summon a Waymo than to find a public toilet in San Francisco, expect that need to find a solution.


I can think of a few milestones in the future for AVs:

  * the first accident between two fully automated AVs.
  * the first such accident between AVs from the same leading company.
  * the first fatality in a fully automated AV.
  * when Uber or Lyft or any similar company starts to use AVs more than human drivers in a metropolitan area.
  * when a city gets half its driving hours from AVs.
  * when human drivers no longer get insurance coverage from a major insurance company.
I wish for them in the reverse order.


Self-driving is not the solution and will show its cracks in the future. The solution is probably a cloud service administered by the city that feeds directions and rules to the cars (reducing or increasing their degrees of freedom of choice), does optimization, and handles many other things. Cars will probably still need cameras to make small decisions, but even that could be made optional.
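For what it's worth, a sketch of what a single directive from that kind of city-administered service might look like; the schema is entirely speculative, my own invention:

  from dataclasses import dataclass

  @dataclass
  class CityDirective:
      # Speculative message a city-run routing service might push to a car.
      vehicle_id: str
      waypoints: list                # route computed centrally, not by the car
      speed_limit_mps: float
      freedom: str = "follow_route"  # how much local decision-making is allowed
      # Onboard cameras would still handle small, local decisions (say, a
      # ball rolling into the street) within that allowed degree of freedom.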


Wow, 12 years in, they've spent about $6 billion and achieved a limited-ridership Uber competitor in 2 fair-weather cities. I'm sure the full solution they're pursuing is right around the corner, though.


Maybe I'm just out of the loop on corporate presentation overload (I don't watch TV at all, and it's mildly jarring watching advertisements and whatnot), but this video feels so formula-driven, and so lacking in the depth and interest that would give me room to explore the concept and form my own view about the service's safety, efficiency, and relevance, that it just feels boring. I love tech and self-driving, but the way this is presented, I'm completely turned off the idea.

And then there's the fact that there's a shot of a driver in the front seat shown for like 400 milliseconds. What's up with that? Why is a token driver required to be in the front seat? I thought the foundation of the service was automated self-driving that obviates that...? If for whatever reason it is required, alright, then own up to it and properly integrate it into the video so it doesn't feel so awkward.

Overall this presentation feels like it's hiding something, simply because it's so poorly put together.


Waymo helps widen the gap between the haves and have-nots. Let's see the next test in St. Louis, Detroit, or Baltimore - places where accessibility is orders of magnitude less than in SF or PHX.


They don’t have Uber in STL, Detroit or Baltimore?


Actually, getting an Uber in some parts of STL is very difficult. If you ever visited, you'd find out firsthand.



