Waymo and Daimler Trucks partner on the development of fully autonomous trucks (waymo.com)
142 points by recov on Oct 27, 2020 | 163 comments



As the AV space continues to mature, it is becoming increasingly clear to me that Waymo made some good choices:

1. Prioritizing safety over everything has kept regulators far away and there haven't been any mishaps that I'm aware of.

2. Using multiple high-resolution sensor modalities (LIDAR, camera) has led to very accurate localization and environment understanding, evident in on-board videos [1].

3. LIDAR costs continue to drop precipitously; it now seems quite possible that next-gen Teslas may sneak a unit on board to improve localization and depth perception.

[1] https://www.youtube.com/watch?v=ZsfUmU30kWY, https://www.youtube.com/watch?v=qAZ6tJSj9T4&t


Agreed. There are options for low-res / narrower-FOV lidar as well, with fewer moving parts and lower cost. At some point, for safety, you end up with a question: is it worth a few hundred dollars to have definitive depth data in some directions?

As for Teslas, they have in the past struggled with stopped objects, so you get a Tesla going straight into something at 70 mph on Autopilot (highway dividers, stopped trucks, crossing trucks). They also suffer from phantom braking, which is very disruptive, and the fix may have basically been to disable emergency braking in those areas, which increases risk if something is stopped there. Lidar, even lower-res, would really help with all of this, plus pedestrian detection for emergency braking (see the videos from China of Teslas crushing mocked-up pedestrians).

Google is also building absolutely nuts training data for wide stereo vision recognition if they wanted it. They are driving these vehicles generating very high-def point clouds with synchronized video. In terms of a training data set for a vision-based system, this is basically a dream situation. I.e., if Google wanted to reduce lidar reliance, they may be well positioned for that as well.
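
A minimal sketch (not Waymo's actual pipeline) of how synchronized lidar + camera logs could be turned into supervision for a vision-only depth model: project each lidar return into the image using calibrated intrinsics and extrinsics and keep the resulting sparse depth map as the training target. All matrices and values below are placeholders.

    # Sketch: turning a synchronized lidar scan + camera image into a sparse
    # depth target for training a vision-only depth network.
    import numpy as np

    def lidar_to_sparse_depth(points_lidar, K, T_cam_from_lidar, img_h, img_w):
        """Project Nx3 lidar points into the image; return an HxW depth map
        (0 where no lidar return lands)."""
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]      # into camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0.1]               # keep points ahead of camera
        uv = (K @ pts_cam.T).T                               # pinhole projection
        uv = uv[:, :2] / uv[:, 2:3]
        depth = np.zeros((img_h, img_w), dtype=np.float32)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        valid = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
        depth[v[valid], u[valid]] = pts_cam[valid, 2]        # z in camera frame
        return depth  # pair this with the RGB frame as (input, sparse target)

    # toy usage with made-up calibration, purely illustrative
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    pts = np.array([[0.0, 0.0, 10.0], [2.0, -1.0, 25.0]])
    print(np.count_nonzero(lidar_to_sparse_depth(pts, K, np.eye(4), 480, 640)))  # 2 hits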

Finally, what people want is level 5 autonomous driving. What trucking companies want is level 5. So they are skipping some of the earlier levels but getting to where I think people want this to go. Could a trucking company put a lot right near the freeway and have Waymo get the truck from lot to lot (thousands of miles), with the last mile driven by hand? The savings would be huge there.


Tesla has lidar-equipped vehicles that presumably provide training data for depth perception as you're describing, although given the rarity of sightings, probably just one or two.


And Google runs Street View cameras everywhere with lidar now, I believe, so their HD maps are getting pretty darn HD.

They are going to be in a great position when this all shakes out.


>Prioritizing safety over everything has kept regulators far away and there haven't been any mishaps that I'm aware of.

Maybe I have missed it, but has there been any substantial use of Waymo outside of the Phoenix area? There is a reason why both Uber and Google were targeting that area for testing. It is an area with lots of sun, little precipitation, lots of relatively new, big, and straight roads, no real wildlife that might jump out into the road, and that is notoriously hostile to pedestrians. Is demonstrating that a self-driving vehicle is safe there truly indicative that it will be safe in the back alleys of Boston's North End when there is a foot of snow on the ground?


A family member drives for one of the largest trucking companies in North America. They have specialized drivers that take the loads into NYC, Boston, etc. He drops the trailer at a yard outside the city, and a city truck and driver take it in.

It just works out better for the company since these drivers know their city and that is all they do. It keeps the company from having accident insurance claims.

I doubt there will ever be self-driving tractor-trailer type vehicles in any city. It requires some serious skill and intuition about your rig and roads, like where to take the truck wide to complete a right turn without taking down the street light or pedestrian waiting to cross.


This mirrors how ships worked historically, as well. You have ocean navigators, who are great at getting reliably from one continent to another (even without knowing their longitude reliably), and then pilots who know every rock in a particular harbor for the last mile.


> serious skill and intuition about your rig and roads, like where to take the truck wide to complete a right turn without taking down the street light or pedestrian waiting to cross

I'm actually convinced that maneuvering a large, articulated vehicle is the comparatively easy part.

These two videos convinced me AIs can be pretty good at it: https://www.youtube.com/watch?v=HR4MEh5-paA https://www.youtube.com/watch?v=BYPA4ajTQgk

AIs might even have an advantage over humans because they can more easily use cameras on all sides of the truck to maintain awareness of everything going on around them. The AI should be able to predict with good accuracy where each part of the truck will be as the motion progresses because it can do the math, whereas a human has to rely on experience.
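
For a sense of the "do the math" part, here is a minimal kinematic tractor-trailer sketch (all dimensions, gains, and the example inputs are illustrative, not from any real system): integrate the cab's motion from speed and steering angle, then let the trailer yaw relax toward the cab's heading around the hitch.

    # Rough sketch of predicting where the cab and trailer will be a few seconds out.
    import math

    def predict_rig(x, y, yaw, trailer_yaw, v, steer,
                    wheelbase=5.0, trailer_len=12.0, dt=0.1, steps=30):
        """Integrate a simple kinematic model; returns (x, y, yaw, trailer_yaw)
        of the rig at each step."""
        path = []
        for _ in range(steps):
            x += v * math.cos(yaw) * dt
            y += v * math.sin(yaw) * dt
            yaw += (v / wheelbase) * math.tan(steer) * dt
            # trailer swings toward the tractor's heading around the hitch
            trailer_yaw += (v / trailer_len) * math.sin(yaw - trailer_yaw) * dt
            path.append((x, y, yaw, trailer_yaw))
        return path

    # e.g. a 3-second prediction of a right turn at 3 m/s with 20 degrees of steering
    print(predict_rig(0.0, 0.0, 0.0, 0.0, v=3.0, steer=math.radians(20))[-1])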

To me, the real difficulty for AIs is in the higher reasoning, like how you deal with situations (lane closed, vehicle traveling the wrong way, etc.).


Would you recommend starting AV development in Manhattan for safety, then?


Obviously not. It makes sense to start with the easiest problems first; I'm just advocating for keeping the perspective that Waymo's success in Phoenix is success on an easy problem compared to a lot of the driving environments that would be expected of a truly self-driving vehicle.


I imagine that these conditions are good for trucks in particular. I highly doubt that these trucks will ever be intended to be driven anywhere other than open highway routes.


Waymo also does substantial amounts of testing around Alphabet's HQ in Mountain View and the surrounding area.


According to the state of California, Waymo had less than 1.5 million miles driven in 2019 [1]. That is basically nothing when it comes to proving a track record of safety. The fatality rate in California is around 1 per 100 million miles. Waymo's cars could be twice as deadly as humans and there would still be only about a 3% chance of killing someone in 1.5 million miles.

[1] - https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...
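
For reference, the ~3% figure falls out of a simple Poisson back-of-envelope, assuming (as above, purely hypothetically) a fatality rate of twice the human one:

    # Back-of-envelope behind the "3%" figure: model fatalities as a Poisson
    # process at twice the human rate (2 per 100M miles) over 1.5M miles.
    import math

    rate_per_mile = 2 / 100_000_000      # hypothetical: twice the human fatality rate
    miles = 1_500_000
    expected = rate_per_mile * miles     # 0.03 expected fatalities
    p_at_least_one = 1 - math.exp(-expected)
    print(f"P(>=1 fatality) = {p_at_least_one:.1%}")   # ~3.0%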


That's only in California. They have over 20 million miles driven total, and it's increasing rapidly (10 million of those were just in the past year).

https://venturebeat.com/2020/01/06/waymos-autonomous-cars-ha...


There is no breakdown there of where these miles occurred. It appears that most of those miles came out of the Phoenix area, which leads back to my original point.

Either way, 20 million miles is once again a drop in the bucket compared to the number that would be needed to show these cars are as safe as or safer than human drivers. For comparison, Tesla is likely around 4 billion miles with Autopilot. I still wouldn't be comfortable saying definitively that Autopilot is as safe or safer than humans.


You gotta start somewhere.

If you have a very limited fleet of self-driving cars, then maybe it's actually OK if they are not as safe as human drivers, as long as the trajectory is towards better safety. I'm sure the very first test car driving anywhere was less safe than a human driver. Each step will lay the baseline for the next step. Tesla's 4 billion miles can't be compared with anything since those aren't really autonomous miles. It's like saying there's a trillion miles on cruise control.


I don't disagree with you. I'm not coming from a position that Waymo is unsafe. I am coming from the position that we don't have any substantial evidence on the safety of Waymo and we should be cautious drawing conclusions from the extremely limited track record.


The quality of the information is different.

Waymo has 20 million miles using multi-modal sensory data: cameras, LIDARs, etc. Because the sensory data is gathered in multiple modes, the modes can be cross-referenced and calibrated against each other even if the placement of individual components changes (so long as not all components change at the same time), meaning that the data can be used across multiple models.

Tesla has 4 billion miles using a single mode of sensory data, but because it uses a single mode of input (visual), old data becomes mostly useless when they make any changes to the system, such as to the visual resolution of the cameras, or their positioning on the vehicles. This is one of the big reasons that they get so many regressions with Autopilot.

Also, despite having 4 billion miles of sensory data, that hasn't stopped Tesla from being the industry leader in self-driving fatalities. The rest of the industry combined has only a single fatality.


>Also, despite having 4 billion miles of sensory data, that hasn't stopped Tesla from being the industry leader in self-driving fatalities. The rest of the industry combined has only a single fatality.

You are missing the point. Tesla is the industry leader in fatalities primarily because they have such a huge lead in the number of miles. Miles are not directly comparable to each other because each company has a different approach, but Tesla's fatality rate is 1 every 800 million miles. Waymo appears to be the leader among the rest of the industry with 20 million miles. How can you predict Waymo is safer than Tesla at this point? They may end up being safer, but it is way too early to say that right now.


No, the point is that Tesla isn't learning from the data it collects. It keeps having the same problems over and over again.

And that problem gets worse with increasing data, not better, because the useless data will drown out the important data.


I don't know how you are drawing that conclusion and it is irrelevant to the original discussion about having a track record of safety. Neither company has proven their system is as safe or safer than humans. Tesla at least has a much longer track record of their system not being substantially more dangerous than humans. Waymo doesn't have enough data to say even that.



No, you're both incorrect.

Tesla's own legal filings show that Autopilot is 2-3x more likely to get into accidents than Teslas without Autopilot engaged. (The 2x is relative to Autopilot disabled but other advanced driving features engaged; the 3x is relative to all advanced driving functions disabled.) https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...

Their marketing claims are just that: marketing.


> Tesla's own legal filings

Where is the source for this? It wasn't anywhere on the article that you linked.


What you need is a zillion representative miles. If Waymo mostly drives in easy conditions, or Autopilot is mostly only on in easy conditions, then even 100 billion miles won't let you extrapolate to how safe it is in a regular person's full end-to-end commute scenario.

It's clearer by hours instead of by miles. If a normal car gets into an accident every 1 zillion hours, and I've driven my research car for 20 minutes and then had it parked for 100 zillion hours and it only had one accident, I don't get to claim it's 100x safer. (100x fewer accidents per hour!) It has to be apples to apples.
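
A toy illustration of the apples-to-apples point, normalizing by hours the system is actually engaged rather than by elapsed time (all numbers below are made up):

    # Normalise by exposure (hours the system is actually driving), not calendar time.
    def accidents_per_hour(accidents, hours_engaged):
        return accidents / hours_engaged

    baseline = accidents_per_hour(accidents=1, hours_engaged=10_000)   # hypothetical human driver
    research = accidents_per_hour(accidents=1, hours_engaged=0.33)     # 20 minutes of actual driving
    print(research / baseline)   # ~30,000x worse once normalised by driving hours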


Phoenix is still a very populated place. One question is: if they replaced the entire rideshare market in Phoenix, would they make their money back in a reasonable time?

Second, you may be moving the goalposts. If the car can drive in Boston, but not in, say, French Guiana, would you still say it's not true self-driving?


> an area with lots of sun, little precipitation, lots of relatively new, big, and straight roads, no real wildlife that might jump out into the road, and that is notoriously hostile to pedestrians.

You just described a quarter of the US.


And probably only 10% of the population. But either way, I'm not sure of your point. It appears no one has anything close to true self-driving. Different companies are attacking the complexities of the problem in different ways. For example, Tesla is limiting its system by situation while Waymo is limiting it by environment. Neither has been proven safe for the full spectrum of driving, and we all know that the biggest problems for self-driving are going to be solving those outlier scenarios.


I agree with you mostly, except your definition of self-driving. Full self-driving limited to the Phoenix metro area counts to me as full self-driving. That doesn't mean squat towards full self-driving in different conditions, but if you're a Phoenix resident, who cares?

If it works for the full spectrum of usage for normal people, I’ll count it, even if it’s not all normal people everywhere. If we get something that works everywhere in the US, but not the crowded streets in India, you’d count it, right?


Sure, you are fine to take this conversation purely from the reference point of a resident of Phoenix. I can't argue against your definition with that very limited context. I simply think that is a narrow enough viewpoint to not be particularly useful in a larger conversation about self driving vehicles.


Can you point to examples or evidence for the third point, or is that just a prediction?


My own prediction, based on the fact that almost all other AV programs are using LIDAR, costs are decreasing with scale/R&D, and Tesla's localization looks shoddy at best. As costs continue to drop, it's going to be tough for the company to continue rationalizing sticking with cameras only just to keep Elon's ego intact.


I've always found his position here weird.

In a safety critical system, I don't like hearing "we don't need x, we can get by with just y". I like redundant systems, especially when one may do better in certain conditions than the other.


I was actually in the "pro lidar" camp, until I saw the user-uploaded YouTube videos of the Tesla FSD beta released last week in action.

Now I am not so sure anymore.

Looking at the beta UI in the car while it is driving, I am impressed with how well it detects everything. There are examples in the dark and even in areas with leaves covering parts of an unmarked road. It doesn't look at all like the detection system is struggling.

The only thing is that things seem to "float" a bit, or cars and pedestrians disappear when going behind other cars and reappear again on the other side. The sides of the road also seem to float/shift a little bit from time to time.

I wonder if they are not using a temporal filter of some sort (like assuming a car continues on the same trajectory when it goes behind another object), or if the UI only displays an "unfiltered" value.
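
Something like the temporal filter described above could be as simple as a constant-velocity track that coasts through short occlusions instead of dropping the object. This is a rough sketch with made-up gains, not anything Tesla has documented:

    # Keep a constant-velocity track alive through a brief occlusion instead of
    # letting the object "disappear". The 1 s coast limit and gain are illustrative.
    def update_track(track, detection, dt=0.1, max_coast=1.0):
        """track = dict(x, y, vx, vy, coasted); detection = (x, y) or None."""
        # always predict forward under constant velocity
        track["x"] += track["vx"] * dt
        track["y"] += track["vy"] * dt
        if detection is None:                 # occluded: coast on the prediction
            track["coasted"] += dt
            return track if track["coasted"] <= max_coast else None
        # visible again: blend prediction with the measurement and reset the clock
        alpha = 0.5                           # crude gain; a Kalman filter would tune this
        track["vx"] += alpha * (detection[0] - track["x"]) / dt
        track["vy"] += alpha * (detection[1] - track["y"]) / dt
        track["x"] = (1 - alpha) * track["x"] + alpha * detection[0]
        track["y"] = (1 - alpha) * track["y"] + alpha * detection[1]
        track["coasted"] = 0.0
        return track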

In any case, I still haven't seen examples where the cameras failed to pick up something relevant/critical.


Object detection is mostly solved though, right? I don't think anyone is advocating using LiDAR for detection. It is used to accurately model depth and then localize the vehicle and surrounding objects/vehicles. The "floating" phenomenon you're talking about is noise from the computation method(s) Tesla uses to build a composite 3D model. You can fix that with LiDAR. I'd wager this noise gets worse in some circumstances. You can try to fix it by piling on noise reduction methods, but at a certain point, one has to wonder when it becomes obvious that an extra $X/vehicle to make the problem go away forever makes sense, where X is a monotonically decreasing value that'll probably get down to ~$200 with scale/R&D.


From the earnings call last week:

> Hey, Elon, a question on LIDAR. If LIDAR were totally free, would you want to use it in your cars near term? Would that tech significantly help Tesla on the training of your neural network for FSD?

> I mean totally free, well, I think probably — I think even if it was free, we wouldn't put it on.

https://www.fool.com/earnings/call-transcripts/2020/10/22/te...


Do you believe him though? I absolutely do not. He just has to say that otherwise people will be talking about how their system is inferior.


My guess is that there is so much technical debt that adding LIDAR is a tough choice for Tesla. They would need to throw out all the visual-only data they have to date and start over.


They might have an imaging radar coming up. https://twitter.com/greentheonly/status/1319363515610812416


The new iPhone has LIDAR. Solid state LIDAR continues to improve and get cheaper.


That lidar probably doesn't have good FOV or resolution. Getting those working in a solid-state unit is a big challenge.


Multi-point flash is actually pretty good at that, and as the number of VCSELs increases, it will improve.


Any idea what Apple's LIDAR specs are? Just saying that if the sample rate is only supposed to be good enough for auto-focusing a camera every second within a 10-20 degree FOV on-center, then there's still a good deal of improvement needed before they can be used for self-driving cars. I think the tech will get there eventually, but this is an area where the sensor specs become really important.


LIDAR can't read traffic signs or stop lights. At best it's a very nice addition to depth sensing, but the bulk of the work will still have to be sensing in the visible spectrum, categorizing the sensor data, and making decisions.

If it turns out Tesla can drop in a cheap LIDAR system down the line, then aren't they the ones who made the right choice?


They'd certainly have a lot of very angry customers who were promised this was unnecessary and that other OEMs would drop LiDAR. If it turns out that more accurate depth perception is necessary for L3 AVs, every Tesla owner with "FSD" wasted $6-10k, unless Tesla voluntarily retrofits these vehicles, which would likely kill their profit for half a decade. Recalls are always much more expensive than getting it right the first time.


Why for half a decade? I mean they only need to offer it to those who already paid, which isn't too much. It seems like a pretty safe marketing gimmick for Tesla even if some retrofits need to be doled out.


I think Elon Musk makes some good points about why Lidar isn't the best choice.

https://youtu.be/HM23sjhtk4Q

The Waymo taxi looks incredible, but how can they scale it?


I don’t agree he makes good points at all. His arguments essentially come down to two things:

1. LiDAR is expensive - this makes sense from his perspective because he has already declared all Tesla cars on the road are capable of "full" self-driving with their current hardware. If he walks this back, he would have to retrofit a million cars.

2. Humans can drive with just vision, so why can't computers? This is a naive argument because human vision isn't just vision, it's linked to a ridiculous intelligence in the form of the human brain. And computers are nowhere close to replicating it.

If you want to read more on LiDAR’s importance, read some of Brad Templeton’s writings.

https://www.templetons.com/brad/robocars/cameras-lasers.html

https://www.forbes.com/sites/bradtempleton/2019/05/06/elon-m...


I doubt the cameras on the Tesla have anywhere near the performance of the human eye. I own a Tesla, and I often get a warning when the sun is at a low angle blinding the cameras. I have no problem seeing stuff in the same direction.

So both sides of that sentence seem wrong.

EDIT: it's also worth noting that Tesla is accounting for a whole bunch of future revenue from FSD. If Tesla admits at some point that they can't make it with the current sensors that would have huge financial implications.


I agree that the second point is overly simplistic, but I do think there is some sense in the argument, and it remains to be seen whether the state of AI will advance enough to be better than humans. Yes, humans do have intelligence and that does help quite a bit, but I would also guess that a system could potentially be better than most ordinary human drivers without LiDAR just because it has a much better view of all the cars around it (humans have a somewhat narrow field of view and can only look in one direction at a time; computers can have a full 360-degree view all of the time). It never suffers fatigue and never gets distracted. My guess is that, yeah, it is better to have LiDAR, but we can also probably make a really good system that is better than humans on average and saves many lives per year without it. It seems worthwhile to me to continue exploring and investing in it.


It is definitely worthwhile to continue exploring this. But as you say yourself, there need to be significant improvements in the state of AI to do this. So Tesla seems to be relying on achieving significant breakthroughs to make it happen. I wonder if it's too risky of a bet considering they are already taking money for FSD from customers and promising the feature.

Also, I don't think Tesla is known for their AI expertise like Google. So it remains to be seen if they can pull it off.


Until they lose peripheral vision later in their lives, people's field of view is about 180 degrees.


I'm human, and I use more than vision to drive, as in I actively think "what will the other drivers do?".


The more difficult form of the question is “wtf does that person think they’re doing!?” People are weird. It’s not enough to think at a human level in easy conditions.

You have to think at human level in corner cases, like seeing an object fly into view and judge whether someone is likely to be chasing it into the street, which depends on knowing what the object is and how it fits into people’s lives.


Also, blind humans (and bats) can walk (and fly) around the city pretty easily with just sound. Why would you tie your hands behind your back and try to make a robot that can navigate the city using only sound?

To cut corners and costs.

Point #2 is deeply flawed.


Not really a good example. Blind humans don't navigate cities easily, even when cities take them into account. As an example, crossing a street despite tactile surfaces on each side can be a hassle because it's difficult to tell if you're walking straight or diagonally toward the other side, potentially walking into a car. Sound only takes you so far.

Source: I have blind friends.


> 2. Humans can drive with just vision, so why can’t computers?

Agreed, this is very naive; it implies that humans are good drivers. Humans are awful drivers, and it sure doesn't help that [human] field of view and depth perception is very poor [compared to alternatives such as LIDAR].


Human drivers cause ~40,000 deaths a year in the US. The car would not exist as a consumer product if it were invented today. Automatic car systems may have to have close to zero fatality rates to be acceptable in the US. Possibly a national law of some sort could be passed to limit automatic car manufacturers' liability, which would allow them to be deployed at scale. Otherwise, I see little chance of automatic vehicles existing in the US. They will cause deaths, especially if human drivers are still allowed on the same roads. I sure hope they can be deployed, as the benefits of automatic vehicles for safety, convenience, the environment, and efficiency are large.


Step 1: Computer vision.

Step 2: Theory of mind.

Step 3: ???

Step 4: Profit.


Well, let's also take into account that in the US alone over three trillion miles are driven each year, and the fatality rate per hundred million miles traveled has been reduced over time.

So yes, the numbers are high, but the suggestion that a car using vision would have similar problems is a bit dishonest, and LIDAR only provides one other means of determining where something is.

However, self-driving cars have one major advantage that LIDAR and cameras don't provide: they are not easily distracted. They are also far easier to restrict to the capabilities of the vehicle they are managing, adjusting for weather and more. Oh, you can fool them for sure, and there are always edge cases.


Not my understanding of his premise - I don't feel like your statements have actually countered his point. I have actually been looking for a true counterargument to their approach, and so far I don't buy into anything I've come across. If anyone has anything countering/disputing Tesla's approach - please share! I can't help but feel like there is something many smart, respectable people see and understand that I just haven't seen.

Also thanks for the link shares, but I don't think either actually disproves Tesla's approach.

My understanding of the Tesla approach is: In order to truly 'solve' self-driving (situations on the road that have never been seen before to drive safely - think unannounced construction, collision or road closures due to protests), you MUST solve 'vision' with a very, very complicated and well trained neural net (re: ridiculous intelligence in the form of a human brain as you state). In addition, the existing road infrastructure (re: signs) is all built around human vision - and so being able to identify and interpret all of that is a requirement.

I find their approach compelling, as in this instance you have cameras with a particular neural net (which they are constantly refining the learning model on) that are trained across the millions of cars, across the billions of miles, across the thousands of various edge cases into a generalized solution. You also have a reinforcement loop via the human driver, who 'intervenes' during the drive, a necessary step in refining the model at scale. Note: I am not saying that Tesla's FSD will be coming to a street near you anytime soon. BUT, I haven't heard or understood a well-articulated argument that says 'lidar is really the only practical solution'.

This also doesn't factor in that I've heard Lidar doesn't work well in any kind of precipitation (light being refracted away from the sensor). Also, full self-driving doesn't mean it can drive in situations that humans WOULDN'T be comfortable with (i.e. a snowy blizzard or thick fog), so those conditions shouldn't be factored in as part of the required solution set for either approach.

And finally, the practical threshold for an FSD system to pass regulatory approval is evidence/data that it is materially safer than a 'typical' human driver. The 'throw millions of cars and billions/trillions of miles at it' approach, with a refined tagging system that converges on the narrow vision solution for driving forward in 'driveable space', seems most likely to reach a solution first.


> My understanding of the Tesla approach is: In order to truly 'solve' self-driving (situations on the road that have never been seen before to drive safely - think unannounced construction, collision or road closures due to protests), you MUST solve 'vision' with a very, very complicated and well trained neural net (re: ridiculous intelligence in the form of a human brain as you state). In addition, the existing road infrastructure (re: signs) is all built around human vision - and so being able to identify and interpret all of that is a requirement.

Yeah this is dubious. You need to solve situational awareness. Vision is one way of doing this. Lidar is another, and lidar avoids many of the drawbacks of vision (having to do accurate world modeling based on cameras).

Tesla doesn't (and would be stupid to) feed camera data directly into a neural network. They feed multiple cameras into a complex system that involves both classical object positioning and neural networks to build a model of their surroundings. Then a downstream system consumes that model and makes decisions.

It's not a single end-to-end black box. Such an approach would be computationally infeasible, not to mention over-parameterized to all hell. No one does this, not Waymo and not Tesla.
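
A hedged sketch of that modular structure (hypothetical class names, not anyone's real code): perception produces tracked objects, a world model fuses them, and a separate planner consumes the result.

    # Sketch of a modular stack: perception -> world model -> planner,
    # rather than one end-to-end network. All names and values are made up.
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        kind: str          # "car", "pedestrian", ...
        position: tuple    # (x, y) in vehicle frame, metres
        velocity: tuple    # (vx, vy) m/s

    class Perception:
        def detect(self, camera_frames):
            # neural nets plus classical geometry would run here
            return [TrackedObject("car", (30.0, 0.5), (-2.0, 0.0))]

    class WorldModel:
        def update(self, detections):
            # fuse detections over time into a consistent set of tracks
            self.tracks = detections
            return self.tracks

    class Planner:
        def decide(self, tracks):
            # downstream consumer: pick an action from the fused model
            ahead = [t for t in tracks if t.kind == "car" and t.position[0] < 40]
            return "slow_down" if ahead else "keep_speed"

    frames = ["front", "left", "right"]          # stand-ins for camera images
    tracks = WorldModel().update(Perception().detect(frames))
    print(Planner().decide(tracks))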

While cameras are good at certain tasks (like detecting traffic lights), they are not good at all tasks, and using more specialized hardware for object detection and world modeling means that lidar-based systems are strictly better. They have more information than camera-based ones.

Tesla is betting on, somehow, making some breakthrough in computer vision that no one else can replicate, and further that lidar can't do what cameras can.

Your argument appears to be that since Tesla has more data, they'll achieve some eventual success, but the point is that they'd achieve more success faster with lidar, and everyone in CV seems to agree that we'd need pretty fundamental improvements in CV (and perhaps in cameras) before you get the same performance out of CV that you get out of lidar. That means that Tesla's betting on a less accurate world model being good enough. Maybe they're right, but so far we have some evidence to suggest that cameras alone have some pretty fatal shortcomings, and no evidence that Tesla has solved them.


He has to make that argument because he has to justify not having LIDAR on Teslas, and he doesn't have LIDAR on Teslas because it's still too expensive to mass-market. But as with most things, the price will keep going down and it'll eventually become the go-to, and Tesla will be stuck with years of camera data and zero LIDAR training.


Your answer just shows a lack of understanding of what Tesla is doing. They have a layer that transforms camera data to depth data. So if they wanted, they could replace that part with lidar and just carry on from there.


This isn't some CRUD app where they can refactor to use another class with a matching interface.

The models they use are incredibly opaque, complex, and work based on statistical inferences that are built into the data. Making fundamental changes like assumptions about where that data came from is not trivial. Even more so given that we're talking about safety-critical applications.


Tesla's model has difficulty generating depth data.

This has been demonstrated many times by Teslas crashing into stationary or large obstacles in front of the car that would have been immediately apparent to a human driver or a car with LIDAR.


LIDAR alone isn't the best choice. Neither are cameras alone. They each excel at different things.

That's why everyone else in the industry uses both types of sensors.


I watched the whole video you shared. I think they made the mistake of assuming that people are good at driving with just vision.

People are actually terrible at driving, even when paying attention.

I really hope that the future of self driving reduces the road toll to a teeny fraction of its current level.


I have to disagree.

1. They are tens of billions deep in a hole and make no profit (or revenue) and will not for multiple years. To ever pay back that investment they basically need to become Uber. And to get there, they need many more tens of billions of capex. Uber is not even making a profit yet.

2. Waymo's approach is the classic 'solve the easiest problem first' mistake. They started, and have driven most of their miles, in literally the easiest place in the world to do AV. They have created incredibly complex high-resolution maps of one small and simple region in Arizona where the streets are wide, the weather is always nice, and the general upkeep of the infrastructure is done very well.

3. Reliance on HD mapping for localization is a fragile approach, and it essentially requires you to continuously map the whole global road system to have a real generalized driving solution. When will they have cars that can just drive on roads they have never seen before, something that human drivers do all the time? Seems to me this is the wrong approach to take.

4. Waymo still needs highly specialized vehicles; even with lidar becoming cheaper, their whole stack is still a very high-cost, low-volume stack. How they will compete against an Uber driver in a cheap mass-produced car anytime soon is totally dubious to me.

Tesla clearly does not want Lidar. Elon Musk even said he wouldn't use Lidar if it was free. What they are actually doing is using next generation radar and likely next generation sonar as well. See: https://electrek.co/2020/10/22/tesla-4d-radar-twice-range-se...


> Elon Musk even said he wouldn't use Lidar if it was free.

I don't believe that. He ultimately has to be anti-LiDAR because putting LiDAR on every Tesla is simply not feasible nor compatible with his plan to mass gather training data, so he has to justify it somehow.

What you're missing is that it's all moot if your self-driving car does not have a 0% failure rate, or close enough to 0% that it rounds down to 0 accidents. As soon as you have real accidents, you are done. Look at Uber. The one death completely ended their self driving unit.

So yes, it may seem cool to just go with the flow and rush things, but I get the feeling that this will be like the tortoise and the hare. Eventually LiDAR will be cheap, and Tesla will be stuck with years of useless training data.


Agreed, Tesla made these decisions in 2014(ish) which was way before it was clear that convolutional neural networks would not give satisfactory depth perception and therefore localization without some major breakthroughs that just haven't happened. I actually believe these choices by Tesla were the rational choices at the time. If you remember back to the beginning of this decade, the deep learning revolution seemed like it was going to take over every industry and turn everything upside down. That progress unfortunately stopped and Tesla seems to be holding the proverbial bag. These companies are all executing strategies in a game that will last for at least another decade if not two. Things will change, but at the moment, it seems to me that LiDAR was the correct choice (subject to change).

To me, the revolution that actually mattered the most was the cell phone supply chain revolution (if you can call it that). That is what allowed these companies to equip their vehicles with cheap cameras, radar, and eventually LiDAR.


While I'm sure Musk is stubborn and his ego is huge, he is also very technically astute and isn't afraid to change his mind. SpaceX had planned to use parachutes for first-stage recovery but switched to the current solution after it became clear parachutes weren't viable. Likewise, they switched to stainless steel for Starship once they figured out carbon composite wasn't suitable for what they wanted to do. If Tesla doesn't adopt Lidar, it won't be because Musk didn't want to admit he was wrong.

> I don't believe that. He ultimately has to be anti-LiDAR because putting LiDAR on every Tesla is simply not feasible nor compatible with his plan to mass gather training data, so he has to justify it somehow.

Why is using Lidar not compatible with mass gather training data?

Also he recently said they wouldn't use Lidar even if it's free. He didn't leave the door open for when the price goes down sufficiently. He could be wrong obviously, but I don't think he is in denial.


He's lying because he's already said that every Tesla has the hardware for self-driving. People would be pissed if he said Teslas might need to be retrofitted to self-drive.


Well, Tesla does claim that Tesla cars may need to be retrofitted for self driving:

https://www.tesla.com/support/full-self-driving-computer

I suppose they wouldn't have any problem asking for Lidar installation if that were to be required?


1. Yep, Waymo is a money pit. I don't know that Alphabet actually cares/expects it to make money. It feels like a halo PR project: attract and retain smart people so they don't work for others, and maybe some stuff will fall out of it. I'm sure the connections with automakers help with Android Auto, at least a bit, etc.

2. Solve the easiest problems first, as opposed to what? Even the easy problems are hard to solve; trying to solve everything at once is going to be floundering around with nothing to show. Is an automated vehicle that can only do the easy parts super useful? Not really, but it's somewhat useful. In a model with human controls, this is tractable, assuming there's a safe handover --- either directly with appropriate advance notification and acceptance, or by parking and switching; without a safe handover or human controls, it's pretty limiting, but some areas would have coverage for most of the year, and some areas have tourist populations that overlap with fair weather, so it's something.

3. I don't think this is a big problem; if the vehicles are driving the routes, they can update the mapping. The road map is finite, and most places don't change that often. Again, safe handoff is critical, though.

4. Yep, it's expensive. But mass produced, less expensive lidar is plausible. Given this thing is still several years from mass deployment, constraining to today's inexpensive tech is too limiting. Figure out if you can do it, and figure out how you can do it with unlimited budget, and then figure out how to make a working system with less.

P.S. I can't believe I'm defending Waymo here. Usually I'm staunchly negative about autonomous vehicles. On the other hand, Waymo's serious errors have been more amusing than dangerous (let's try to merge into a bus at low speed, cause it'll certainly move out of our way).


"Elon Musk even said he wouldn't use Lidar if it was free."

If your mental model of how the world works is "Elon Musk always tells the truth" then this is compelling evidence, but if you have a different mental model I don't know why you would take it seriously.


Nothing prevented him from telling the truth; it's a purely unrealistic hypothetical that he could have answered however he wanted.

The important point is that he is CEO and it's well within his power to develop his own lidar system, or sign a contract for lidar. Or to do research on the topic. Yet he has not done that or shown any interest in it.

At some point you have to ask yourself: is somebody engaging in 4D chess over 10 years, or is it simply his opinion?


1. They don't need to be Uber, and their path to getting cash flow is very different from Uber's. They seem to be taking more of a Rivian approach, where they get to kick costs to someone else while providing the tech to allow these companies to do something new. Unsure why they would need $10B+ capex to make their business work.

2. Arizona doesn't always have nice weather. Haboobs, fog during the winter mornings/nights, monsoon season with heavy rain and hail. There are plenty of weather conditions in Arizona. And if building the visual recognition is the easy part, what is the hard part?

3. Fair, but that is already something their parent company does as another business; Maps/Earth. There is nothing to suggest they CAN'T do this on roads they haven't seen before, but it's obviously far safer to do this on roads they know.

4. They partnered with Daimler to make self-driving trucks for customers who couldn't use Uber to move shipments. They aren't necessarily competing with Uber.

As for Tesla, there is nothing to suggest they are doing it the right way. They have had numerous self driving failures.


Do you have a source for tens of billions? Did you confuse market cap for spending? I can't find any good sources, but that seems like an order of magnitude too much.

They just raised $3bn (the vast majority of which is presumably not spent), and piecing together other sources indicates Google put in a couple billion of their own.

This article claims they've spent $3bn on R&D:

https://www.caranddriver.com/news/a30857661/autonomous-car-s...

Google's approach requires a lot of mapping, yes. But they mapped damn near every road in the important markets to pursue maps, an opportunity two orders of magnitude smaller than self driving.

If their cars and software are both ready, the mapping won't be more than a small roadbump.


2. There's no other approach to self-driving than "start with the easier problem first". Actually, they might have aimed at too difficult a problem - highway truck driving seems like a much easier place to start.

3. Sure, they use maps.

But do we know that there aren't map-free techniques in their algorithm?


I still can't quite figure out why exit-to-exit trucking wasn't the first problem to tackle. The economic case is clear, and it seems like highway driving should be a simpler problem than city driving. Even more so since you can run the trucks at night when there is much less traffic.


Most of the responses to your comment are a bit off here.

The challenge of autonomous trucks isn't that they're "more dangerous," it's a matter of physics and current LIDAR technology. The weight of the truck means there is a minimum safe stopping distance at a given speed. Frankly, the quality and range of current LIDAR tech falls short of the distance required for safe stopping at the average highway speed of a truck.

Put another way, the autonomous driving stack has difficulty seeing far enough ahead of a truck to successfully stop in time to avoid an accident in highway environments. You'll need better fusion of the perception stack (LIDAR + imaging + neural nets) or better LIDAR range to be able to deploy autonomous trucking sooner.
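
Rough numbers behind the range argument, using required sensing distance = reaction distance + braking distance (the deceleration values here are illustrative; a loaded truck brakes far more gently than a car):

    # Required sensing distance at highway speed, under assumed decelerations.
    def sensing_range_needed(speed_mph, reaction_s=1.0, decel_mps2=3.0):
        v = speed_mph * 0.44704                      # mph -> m/s
        return v * reaction_s + v * v / (2 * decel_mps2)

    print(sensing_range_needed(65))                  # truck-like braking: ~170 m
    print(sensing_range_needed(65, decel_mps2=7.0))  # passenger-car braking: ~90 m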


That doesn't sound plausible. Braking distance goes up with the square of speed, while it only increases linearly with weight. As an example, a truck going 55 can stop as fast as a car going 65. Ultimately it just means your self-driving vehicle has to drive a bit slower.


Agree with the lidar range issue, but in a perfect storm... there was also major hype to drive money into robotaxis, e.g. Uber flashing an order for $10 billion of Mercedes S-Class ( https://www.roadandtrack.com/car-culture/news/a28508/uber-or... ).

This year we've seen Luminar start to go public via SPAC, and their high-range Lidar has been mounted on most (all?) non-Waymo self-driving truck prototypes. There's even a public dataset with Luminar data now: https://github.com/TRI-ML/DDAD


Radars already go 100m+. To avoid a front collision, pretty much all of these systems use radar, not lidar. Next-generation radars are already doing 300m+.


But angular resolution is still too low to disambiguate targets in the far field, and neither the frequencies nor the maturity for imaging radar are here yet. Range alone does not solve the main radar problem.
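
A quick worked example of why angular resolution, not just range, is the limiter: the lateral size of a resolution cell grows with distance, so a beam a few degrees wide blurs neighboring vehicles together at highway sensing ranges (the beamwidth figures below are illustrative):

    # Lateral width of one resolution cell at a given range and beamwidth.
    import math

    def lateral_cell_m(range_m, beamwidth_deg):
        return 2 * range_m * math.tan(math.radians(beamwidth_deg) / 2)

    print(lateral_cell_m(200, 4))    # few-degree radar beam: ~14 m wide at 200 m
    print(lateral_cell_m(200, 0.1))  # lidar-class angular resolution: ~0.35 m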


Trucks already have slower speed limits. A truck going 50 can stop as fast as a car doing 75. Run them all night. No rest breaks.


Opportunity cost, perhaps? Trucks are an easy win if you've got launchable performance in cities, but still require a lot of human energy to get into production.

So early on, when the number of people who know the system is small, it seems better to work on the bigger, more general problem, and branch out to the sub-problem later on when there are more people to work on it.

From another perspective, if you stop work on cities to finish a truck launch, you're losing ground to competitors working on cities, which may be decisive in the research phase.


This is something I've wondered about too. If I were to guess, these might be potential roadblocks (no pun intended):

- Insurance, since you're transporting others' stuff at highway speeds

- You still need a CDL driver right now (not cheap)

- Economics - the failure of Starsky Robotics might offer some clues here

- Interstate Regulations on Driverless testing (?)


Presumably the economic incentives are much greater for trucking than for consumers (as is typically the case). If we can accept that, then investors shouldn't be swayed by regulatory questions (regulations change easily when wealthy investors and corporations want them to change) nor by some marginal upfront insurance cost (the hypothesis is that driverless trucks will be much less fallible and thus insurance costs will go down quickly as driverless tech proves itself).

The interesting questions are "is the 'greater economic incentive for trucking' assumption correct in the first place" and, if so, why are we still bothering with consumer automotive at all?


> (regulations change easily when wealthy investors and corporations want them to change)

Teamsters would say otherwise.


I don't think they would, and that there are other interest groups doesn't suggest that corporate interests aren't particularly powerful. IIRC, if the public unanimously supports a particular initiative, it has only a 30% chance of making it into law, and it's more than double that if corporations support it.


> IIRC, if the public unanimously supports a particular initiative, it has only a 30% chance of making it into law, and it's more than double that if corporations support it.

That's obviously not a set of statistics that is possible to directly observe; such a conclusion is only possible through a model dependent on extrapolation which assumes that a relationship holds outside the observable range. (And it's also somewhat incoherent, as both corporate decision-makers and legislators are subsets of the public.)


> That's obviously not a set of statistics that is possible to directly observe; such a conclusion is only possible through a model dependent on extrapolation which assumes that a relationship holds outside the observable range.

It sounds like you're airing a general grievance against sampling, but I'm sure that's not what you mean, so please elaborate?

> And it's also somewhat incoherent, as both corporate decision-makers and legislators are subsets of the public.

I don't see how that renders anything 'incoherent'. Are you interpreting "unanimous public support" to mean "literally every single citizen supports this policy, including corporate decision-makers and legislators"? Such pedantry aside, it's entirely possible for an overwhelming majority of the public to want a particular policy decision irrespective of whether "corporate decision-makers" or legislators find themselves in the majority or minority.

In whatever case, the statistic I'm thinking of is from former labor secretary Robert Reich's "Inequality for All" documentary. I don't care to skim it for exact phrasing or numbers. Similar information can be found here: https://represent.us/action/no-the-problem/.


> It sounds like you're airing a general grievance against sampling, but I'm sure that's not what you mean, so please elaborate?

You obviously aren't going to have any measure that actually has 100% popular support. So the only way you could get to “if it has unanimity among the public, it has only a 30% chance of passing” based on anything even approximating empirical science is to take some observations about success at various less than unanimous levels, and develop a model of the relationship between public support and chance of passage, and use that model to extrapolate beyond the high endpoint of support from the data. Without a strong theoretical rationale which you have good reason other than the observed relationship within the tested range to believe is the explanation for the tested phenomenon, it is always suspect to extrapolate an empirical relationship outside of the range in which it has been observed.

> Similar information can be found here: https://represent.us/action/no-the-problem/.

The study linked to support that is interesting, but since it models the effect of interest groups, including mass interest groups, separately from both public and elite individual preferences, and finds that both elite preferences and interest group alignment have significant independent impacts, but average citizen preferences don't have a strong separate effect, what it really indicates is that average citizens are effective in politics only to the extent that their preferences energize action through organized groups.


A different kind of regulation, but the FMCSA recently denied an autonomous trucking company's request to extend the amount of time their drivers could be on the road [1].

[1] https://www.truckinginfo.com/10126660/fmcsa-denies-request-f...


I think self-driving cars are a PR problem as much as they are a technology one, and that's definitely something Waymo is working on - remember their cute little round bubble car?

Trucks are big and heavy, and human-driven ones already scare enough people on the road. If you launch with trucks first, you risk a backlash from people who are afraid of them. If you start with an autonomous taxi service, people can sit in the cars themselves, feel safe, see that there's a "safety driver" present, and even if they aren't in the vehicle they still get to interact with the cars on the roads. You can make people comfortable with the idea before you launch the trucks. And everything that goes into the self-driving taxis is still valuable development work for the trucks. I wouldn't be surprised if self-driving trucks actually are the first real commercial launch target for Waymo, and the taxi thing is just PR.


Trucks are subject to the PR of the almighty dollar. They don't have to make people feel warm and fuzzy. If there is a use case where a driverless truck is slightly cheaper for equivalent results (or a lot cheaper for lesser results), they will get bought. The competition in OTR trucking is brutal like that. If the quarterly or annual operational cost comes out to be a tenth of a cent per mile lower than a dry van operated by a conventional steering-wheel holder, you can bet Swift, Schneider, Walmart, and anyone else with thousands of trucks will buy them for a portion of their fleets. They kind of have to, to hedge their bets at that point.


>Trucks are subject to the PR of the almighty dollar. They don't have to make people feel warm and fuzzy

As long as the regulatory situation allows it, yes. But while they're still seeking approval to have them actually allowed on the roads, making people feel warm and fuzzy is pretty important.


My thought too. Think runs between Amazon (or UPS/FedEx/Walmart) distribution centers/warehouses.

Still have a driver do the last mile, for deliveries to stores, homes, etc., but the long-haul routes between cities seem ripe for automation (especially with all the laws on mandatory driver rest periods).


With 5G, considering last miles are almost always populated, you could even have remote drivers take over.


Road work is typically scheduled during the night, at least in my region of the US (the Northeast). If you think normal roads are confusing for autopilot, wait until they try to navigate hastily set up detours for temporary construction.


Because trucks are more dangerous than cars.


That makes them a bigger win to automate.

You save a lot of money on accidents too by automating.


Might be something as simple as the average engineer/maker/hacker has access to a car to play with but not a truck.


That's basically what Tesla is doing; unlike Daimler, they just don't have the trucks yet. Once the truck is out, it will drive itself on the highway.

The problem is just that you still need a driver anyway, so it really doesn't help that much for now.

It's the same problem Waymo has in general.


Not sure why this opinion is so prevalent. Tesla is in no way close or comparable to an L4 system. It is not even a brief thought for Waymo.


Just hype I'd guess. Tesla fans love to equate its self-driving efforts with Waymo's, when in reality they are years behind even with the new FSD update.


Neither Waymo nor Daimler has ever demonstrated such a system with a truck on a highway. My point was about strategy.

And who will win at general driving is still an open question, and just assuming it will be Waymo is foolish. I think it's funny how on HN people dismiss Tesla when there are actually tons of people all over the world who already do their highway commute almost without intervention.

They have trucks that will have this same feature and they have already talked about the idea of platooning.

Just downvoting me because you don't think Tesla's approach is as good as the Waymo/Daimler approach doesn't make what I said wrong.


> I think it's funny how on HN people dismiss Tesla when there are actually tons of people all over the world who already do their highway commute almost without intervention.

That's just advanced driver assistance, nowhere near "full self driving". Granted, it's better than other cars, but it's still not self driving.

Also, "almost without intervention" has no meaning if you have to keep your hand at the wheel at all times ready to takeover.


Well, the driver would just drive the "first" and "last" mile right? So he could drive the truck to a lot next to the highway that has an on-ramp, let that truck go on its way, and he can jump into a truck that just drove itself off the highway onto that same lot...

Or I guess you mean they still need a driver because of compliance/legal issues?


Well, for now you need one for legal reasons, of course.

What you suggest is an interesting idea, a total change in how we think about truck logistics. It's certainly possible, but I don't know if we will ever get there. By the time the legal issues are sorted out, the trucks might just drive themselves all the way to the destination.


Oddly, I'd trust Google more with a self-driving car than, say, Uber or Tesla. Kind of glad they're making progress and expanding. The industry as a whole seems reckless to me, though.


I don't think this is odd at all.

If you look at the space, there's the Waymo approach - which is safety and comprehensibility first - and not very many companies taking it other than Cruise.

Instead, you have this pile of companies who are pursuing mostly-deep-learning solutions, including half a dozen startups of various levels of craziness and Tesla. That scheme is unlikely to actually result in a safe outcome.


Indeed, it's Uber, Tesla and comma.ai which make the whole industry seem reckless. And as shown by Uber, a single accident can take down your entire project (and honestly set back the industry as a whole), so it is reckless in more ways than one.

Waymo figured that out very early on, and it's a lesson others may find out the hard way.


Comma I could forgive - it's positioned more like a toy for developers than an actual full featured product. But yeah, agree on the other points.


I don't know, with Google I see two scenarios that are difficult: "this product has been discontinued" while you're relying on the car, and some random Youtube moderation algorithm getting triggered by a comment you wrote eight years ago and blocking your account, leaving you stranded outside your car.


The "Google deprecates stuff" thing is a tired meme that's obviously not applicable. Waymo cost a ton of money to create. The most significant thing Google has ever given up on was Google+, and that wasn't until the product had basically been dead for 7 years. Sure, Google likes to shut down dead products and unmonetized side projects when nobody wants to run them anymore, but there's no basis for the idea that Google shuts down products more significant than Google+ was in its dying year.


And frankly the technologies that were built to support Google+ have evolved into the underlying technologies for almost every new project at Google. Even with the failure of Google+ as a product the value of what was developed probably still makes Google+ a net positive on the balance sheets.

Source: am Google engineer


Realistically it won't be discontinued (if it works well); realistically you can expect to see unskippable ads in your car one day (unless you sign up for a youtube-google-music-premium-red subscription).


I'm surprised I hardly ever see this brought up. If Waymo ever does become a breakout success, the ride quality is almost sure to become a miserable, ad-ridden experience at some point; and of course, all passenger actions and intonations will be parsed and analyzed for tracking and ad targeting.


I guess the idealists would say "Waymo will be so wildly profitable that its holding company will opt not to pursue secondary revenue streams such as advertising and data resale".


It would be rolled into a new product line, and you'd need to start paying a higher monthly service fee and take it to a dealer to have it repainted with a new logo and have a new control panel with an all-new UX installed in the dash. The self-driving-cars product would be merged with the self-driving-motorcycles product, and when you got your car back from the dealer you'd find it now had 3 wheels instead of the 2 of a motorcycle or the 4 of a car, satisfying the needs of both previous products inadequately.

And then in another 2 years they'd deprecate your car and launch a new competing self driving bus initiative.


I used to work as an intern in this department at the autonomous driving company Daimler acquired. Had a lot of great experiences there, and watching the hardware and software for autonomous vehicles being developed (and contributing to it and driving it!) was eye-opening and made me fall in love with programming and ML applications. Waymo paired up with a great team here to help develop their autonomous trucking program.


Well, it looks to me more like the "great team" at Daimler has given up, no? Will there be any ML work at Daimler going forward?


George Hotz makes some good points about Waymo. At best it's a slower Uber (drives more timidly and slower) with the potential to be cheaper. So it lets customers trade time for money. But uber pool already does that and accounts for about 10% of rides - in other words it's not a trade-off the market demands.

Then add to that it's had like ten billion dollars of investment. So it needs to turn into an Uber-sized company or it will be a failure. Not exactly the standard Silicon Valley MVP style: prove the business model, acquire customers, and then get Series A investment.

Even if it can solve self driving, it could still fail as a company.


> Not exactly the standard Silicon Valley MVP style: prove the business model, acquire customers, and then get Series A investment.

It's solving a hard problem and you don't build a Series A around an unsolved hard problem even when you know that there's market demand. That's why nobody is running a startup selling electricity below cost as an MVP for their fusion reactor back-of-a-napkin sketch.

If it's just a matter of aggressive driving, I'm sure Waymo has a dial they can turn that would make them go faster, and then Uber has already proven the market for them. But they need to get the safety margins up before doing that, which is a hard problem.

Is it the Silicon Valley model? Well it's not the get rich quick VC model. It's much more like the Bell Labs model of growing a transformative but immature technology in house until it's ready to deploy.


I think the comma.ai approach of trying to solve the problem incrementally while producing a profitable company at each step of the way is much smarter.

Waymo is just throwing money at the problem without a clear path to return on investment. We all know how patient Google is with that kind of project. It's asking to get the whole thing cancelled. It runs very much counter to the typical Silicon Valley model.


Uber Pool is maybe 50-70% of the price for 2-3x the time.

Waymo would be, what -- 25% or 30% of the price for maybe 1.2x the time? And with total privacy, which Uber Pool doesn't provide?

And heck, a Waymo Pool might be just 10-15% of the price. That would be revolutionary.

I can't imagine the market not demanding that. The tradeoffs are totally different from Uber Pool.
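A crude way to put numbers on that trade-off (all figures below are made up for illustration, including the $25/hour value of time; none of this is published pricing):

    # Back-of-envelope: effective trip cost = fare + (value of your time) x trip duration.
    def effective_cost(fare_usd, duration_hr, value_of_time_usd_per_hr=25):
        return fare_usd + value_of_time_usd_per_hr * duration_hr

    base_fare, base_time = 20.0, 0.5                          # hypothetical $20, 30-minute UberX trip
    print(effective_cost(base_fare, base_time))               # UberX:     32.5
    print(effective_cost(0.6 * base_fare, 2.5 * base_time))   # Uber Pool: 43.25
    print(effective_cost(0.3 * base_fare, 1.2 * base_time))   # Waymo(?):  21.0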


> 2-3x the time

Not just this; time varies wildly. Maybe it's 1.5x, maybe it's 3.5x. You can't really plan around it.


Agreed this is very differentiated from Pool, and I would think (hope?) it will eventually be <15% the price of "normal" Ubers.


Yes, that's true. If it's cheap enough and more private/predictable than uber pool, there will be market demand.

Can they accomplish that before the project gets cancelled? Because at the rate they're burning money there is a finite time that Alphabet will continue to back it.


Wow, do I disagree with you.

I have repeatedly had Uber and Lyft pick a driver, make me wait tens of minutes, and then cancel the ride. If Waymo can avoid that, then I would gladly pay MORE for it. (I waited at a hospital for 2 hours, at 2 am, one time.)

I've also had an Uber driver who had no idea how to drive. My wife had an Uber driver who was convinced the destination was a hundred yards into one of the Great Lakes, and couldn't be convinced to change their mind.

I think you're also underestimating the value of long-haul rides. If I can get a "moving hotel" that drives me overnight to an awesome destination, I think that would be awesome. I mean, ideally it's a train. But that takes tons of infrastructure we don't have. If I could pay airline rates, and not have to go through airport security, not have delayed or cancelled flights, and could travel while sleeping? That would be phenomenal. Especially right now, avoiding airports and crowded planes? And post-covid, I think it could really help tourism.

I think you're also under-valuing the novelty aspect of having a self-driving car. Picture you're at a Disneyworld Resort, and you order Lightning McQueen to take you to the Magic Kingdom. He pulls up, it looks like him, sounds like him. You hop in, and you see his face on a display, continuing to talk to you.

Heck, picture the whole car is designed like a movie theater. Nice and dark, great sound.

Also, picture a self-driving car with a liquor license. That's like limo service, at Uber price, right? Hard to do that with an actual human driver, without a limousine license, right?

And we've all seen research that says that once you get enough self-driving cars, traffic jams go away. That alone is worth the cost, and then some.


> George Hotz makes some good points

And also higher up the thread

> I think Elon Musk makes some good points

I find it funny that all these "Good points" happen to come from the very people who have a vested interest in Waymo failing and their own alternative path being the better choice...

Only time will tell which path was better, and I'd honestly take the points from their competitors with a grain of salt.


I think Hotz's hypothesis re: uber pool may not be correct. I don't think it's the slowness alone that has people reject uber pool, it's the slowness + having to share with yet another stranger. Robotaxis promise cheaper rides AND no social awkwardness.

However, I think you're right about the capital intensiveness of Waymo's approach. Not only do they have to spring for the capex (vs Tesla where they have conned^H^H^Hvinced their customers to front the capital for the fleet) but they have to operate a continuous remapping fleet to keep their hi-def maps up to date. Possible they could incorporate that part into the taxis, but it's still something Tesla doesn't have to do. In my mind, there are two possibilities: Waymo works and Tesla just never quite gets there, and thus Waymo has the time to gradually map and roll out to cities all over the world, or Tesla works too, and has much more attractive economics, and thus can deploy faster and owns the market, even if it takes it longer to achieve "true" FSD. Only time will tell.


> George Hotz makes some good points about Waymo. At best it's a slower Uber (drives more timidly and slower) with the potential to be cheaper. So it lets customers trade time for money. But uber pool already does that and accounts for about 10% of rides - in other words it's not a trade-off the market demands.

I don't think that's the same thing. I don't like uber pool because it's very unpredictable, not just that it's slow. If it were predictably slow, I might use it more.


> At best it's a slower Uber (drives more timidly and slower)

I'd pay a premium for this. Some Uber drivers are frightening, and to me the clenching isn't worth arriving 20 seconds faster. For the driver, though, it may mean more rides in a day.


Unfortunately the economics don't work super favorably for Pool/Shared rides. The labor cost is still too high, and the "match efficiency" (a measure of how overlapping the routes are and how expensive the deviations are) has to be VERY high to get real profitable returns (see: airport rides departing from the same or nearby doors/terminals).

I get that with Waymo you are trading labor for recouping R&D, maintenance etc. which might still be expensive but I don't think the market has had the opportunity to decide on this type of trade-off yet.
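One way to make the "match efficiency" point concrete (a toy model; every number here is an assumption):

    # Two riders whose solo trips would be 10 mi each; the pooled route is 14 mi total.
    solo_miles, pooled_miles = 10 + 10, 14
    match_efficiency = 1 - pooled_miles / solo_miles   # 0.3 -> 30% of miles saved by pooling

    cost_per_mile = 1.50        # assumed driver + vehicle cost
    solo_fare = 18.0            # assumed solo fare per rider
    pool_discount = 0.4         # riders expect ~40% off for sharing

    pooled_margin = 2 * solo_fare * (1 - pool_discount) - pooled_miles * cost_per_mile
    solo_margin = 2 * (solo_fare - 10 * cost_per_mile)
    print(pooled_margin, solo_margin)   # ~0.6 vs 6.0: pooling barely breaks even here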


You may be correct, but I would just note that Uber and Uberpool are not perfect by any means. I've had a good number of ubers that are dirty or smelly, that have poor drivers, that have inadequate trunk space, or that have little to no leg room. Sometimes it is great with ubers, but it can be very hit or miss.

Customers may be willing to trade off on ride speed if Waymo gives them a better user experience in terms of ride quality and vehicle standardization (i.e. the roomy Waymo minivan).


The market for cheaper rides that you don't have to share with n random strangers is non-zero. Also how much cheaper will Waymo be?


If they can make it a slower Uber but somehow cut costs 50%, that may be a better product-market fit. Aren't drivers the most expensive part of Uber? However, with those low margins, I'm not sure how long it will take to recoup their massive engineering investment.


Unlike Uber, Waymo won't be forced to reclassify their drivers as employees with benefits. The more governments force the increase of employee benefits and pay, the more corporations are attracted by the allure of mechanization. It has a future.


> But uber pool already does that and accounts for about 10% of rides - in other words it's not a trade-off the market demands.

How is that the same thing? Sharing a car with someone is definitely not the same as just being slower.


I'll believe self-driving trucks are around the corner when I see all the "manned" yard dogs show up for cheap on Craigslist.

We can't even solve for "slowly put that trailer over there" with highly constrained environmental variables. There's no way we get commercially useful highway driving until after that.


Aren’t many ports completely autonomous?

https://youtu.be/zm_rlLyelQo


I was thinking distribution facilities.


This is a sad day for the European car industry. It seems everyone except (maybe?) VW has given up on competing with Tesla and Google on self-driving.


Car companies aren't tech companies, and Google isn't a car company. I don't think it would've ever worked for a car company to develop this tech, and I don't think Waymo ever plans to make its own cars. Realistically, Waymo will end up licensing its technology to car manufacturers.


I think it's interesting they're only installing sensing tech in front of the truck, not the back.


Well the "back" is the cargo, which comes and goes. Unless they want to somehow attach and detach sensors to the cargo every time (which would be a logistic hell and also very unreliable), I'm not sure what other solutions there would be.


Is a driver still more expensive than all that equipment plus the risk of "truck pirates" who might steal the whole thing on the way?

Even in the U.S. truck drivers are not that expensive.


Trucks stop frequently at gas stations, rest areas, etc. without "truck pirates" making much of a dent. I'm not sure why it'd be any different in transit.


According to the following article/infographic, driver salary is the second largest cost in commercial trucking: https://www.thetruckersreport.com/infographics/cost-of-truck...


Drivers can't drive 24/7. This cuts down on time to destination significantly if the truck can keep going. Perhaps this can augment the driver, where the driver provides handling, fueling, etc. while the truck self-drives.
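Rough numbers on how much that matters (assuming the current US 11-hour daily driving limit and a steady 60 mph; both are simplifications):

    # Back-of-envelope: a 1,000-mile haul.
    distance_mi, speed_mph = 1000, 60
    wheel_hours = distance_mi / speed_mph        # ~16.7 hours of actual driving

    human_daily_cap = 11                         # hours-of-service driving limit per day
    human_days = wheel_hours / human_daily_cap   # ~1.5 days, i.e. delivery on day two
    autonomous_hours = wheel_hours               # a truck that never rests finishes the same day

    print(human_days, autonomous_hours)          # ~1.52 vs ~16.7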


I'm sure a driver will still be needed for a while yet; they'll just be required to do less – much like a pilot on an airplane. So the whole system becomes safer.

Around 5000 people die per year in crashes involving trucks in the US alone.


That's not a crazy high number. But I would assume there is an order of magnitude more of that in injuries. And then probably much more in property damage. So significantly reducing truck accidents could be very lucrative.


Drivers are in short supply depending on your location, and they are quite expensive. The pay along with the short supply has made keeping drivers a bit difficult in some places. Not to mention, a computer isn't going to be limited by OTR regulations concerning On-Duty and Driving hours, and it doesn't have to reset.


Approximately what percent of the cost of shipping is the driver's salary? i.e. if you removed drivers entirely, how much can we expect prices to drop?


That'll depend on the cargo. Corn costs like $150/ton. iPhones... do not.

https://www.trucks.com/2018/05/22/shippers-truckers-soaring-...

> Driver compensation accounted for 33 percent of motor carrier expenses, according to 2016 data collected by the American Transportation Research Institute, or ATRI.
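Taking that 33% at face value, a crude upper bound on the price drop (the 10% autonomy cost below is a placeholder assumption, not a real figure):

    driver_share = 0.33      # ATRI figure cited above: driver pay as a share of carrier cost
    autonomy_share = 0.10    # assumed: sensors, compute, and remote ops amortized per mile
    max_price_drop = driver_share - autonomy_share
    print(max_price_drop)    # ~0.23 -> at most ~23% cheaper, and only if savings are passed on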


Maybe a driver would still be in the truck for non-highway situations, but could be sleeping in the cab so the vehicle utilization time is higher and goods delivered quicker on long hauls.


Weird. I think this money would be better invested in better train infrastructure. But then again, maybe this money is peanuts compared to the money needed to get better train infrastructure.


> The World’s Most Experienced Driver ™

Marketing departments just have to ruin everything.


My crazy theory: Tesla vehicles are already fully autonomous. In simulation.

My wild-ass guess is that the extra compute power is running the robot driver code. Then the human and robot actions are compared. Do some AI magic to close the gap. Any significant unexplained deltas are forwarded to the mothership for further analysis.

This notion is why I've gone from bearish to bullish on Tesla's autonomous driving. Tesla has more data than anyone else. And per Norvig, IIRC, more data is better than more compute.

What little I know about Tesla comes from the HN peanut gallery, Munro Live, and others on YouTube. This parallel simulation idea is just what I'd do, and it may explain what all that spare compute power in each car is being used for. I don't know if my wild-ass guess is original or not.
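If that guess is right, the comparison loop might look something like the sketch below. This is purely hypothetical (the threshold, field names, and upload queue are all invented); nothing here is a confirmed Tesla mechanism.

    from dataclasses import dataclass

    @dataclass
    class Controls:
        steer: float   # steering angle, radians
        accel: float   # m/s^2, negative = braking

    upload_queue = []              # stand-in for the real telemetry uplink
    DISAGREEMENT_THRESHOLD = 0.5   # invented number

    def delta(human: Controls, robot: Controls) -> float:
        # How far apart the human's and the shadow planner's actions are.
        return abs(human.steer - robot.steer) + abs(human.accel - robot.accel)

    def maybe_upload(human: Controls, robot: Controls, snapshot) -> bool:
        # Flag frames where the shadow planner diverges sharply from the human driver.
        if delta(human, robot) > DISAGREEMENT_THRESHOLD:
            upload_queue.append(snapshot)
            return True
        return False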



