Betting solely on camera vision for self-driving is not a good bet, objectively or otherwise. There are still zero self-driving Teslas on the road. Zero.
It's even likely there will be legislation mandating LIDAR for self-driving cars. Elon cannot flip the switch on self-driving until Tesla has millions of intervention-free miles under its belt. Right now Tesla gets ~500 miles without an intervention. One kid gets decked and the law will shutter his plan.
There is another layer too: because of his polarizing personality, a lot of critical talent is inaccessible to him. Keep in mind Elon is doing -none- of the engineering besides the social engineering. He relies on talented people who want to work for him. I know many talented people who could make a difference for Tesla, but none who would ever give Elon a single minute of their time.
Over the last 6 months, since their latest upgrade, I have been driving my Tesla mostly on FSD. And it is nothing short of amazing, out of this world. The car drives like a pro and exceptions have become very rare. You have to drive this to believe it. Unlike a Waymo, which is full of gizmos, this is just a regular car that drives itself. The bet on cameras and a simplified architecture shows here.
If you just extrapolate from what has already been achieved and bet on the pace of progress in AI algorithms and techniques, robotaxis are almost a certainty and humanoid robots a possibility.
We've all been reading this exact comment for years now. Maybe it's actually different this time, I don't know.
What I do know is that I took multiple Waymo rides last week where those "gizmos" delivered a safe drive with no one in the driver's seat and zero unsafe exceptions. "Very rare exceptions" isn't even close to good enough for me to put my kid's life at stake.
Why would I care whether Tesla has maybe gotten closer to what they've been promising for a decade, but still having "very rare" extremely unsafe exceptions, when Waymo is objectively delivering full level 4 self driving to hundreds of thousands of people per week with an essentially flawless safety record?
It has felt much, much better in recent months. Did a drive from SF to Yosemite that was basically fully autonomous - can’t do that with Waymo. That being said, I still had two issues in the city where it completely screwed up by being in the wrong lane. Humans make the same error, but my one concern was that it doesn’t realize it’s in the wrong lane and safely go the "wrong" direction and recover; instead it tries to take the “correct” route at all costs, which can be a safety issue.
I agree that Waymo is generally safer for in-city driving. It’s still not technically fully autonomous even though it appears that way; it has a lot of support people on the backend to resolve things when the cars get stuck and whatnot. Waymo still can’t go on the highway or leave well-defined city limits, whereas I can use my Tesla on every trip I’ve taken. I think comma.ai is a closer comparison point at this time, as I can’t have a Waymo for my own personal use that I can take anywhere whenever.
"Humans make the same error but my one concern was that it doesn’t realize it’s in the wrong lane"
This is actually very deadly. At least humans will signal / try to do something in a safe manner to keep going. An autonomous vehicle may behave in unpredictable ways and cause carnage. It only takes one incident to completely shutter it forever.
> at least humans will signal / try do something in a safe manner to continue going on
Your experience must be very different. I've been on the road long enough to know that humans will try all sorts of things to avoid missing a turn, and Tesla behaved very similarly.
FWIW, it was signaling in all the right ways, no collision seemed imminent, and I doubt it would have gotten into an accident. I just didn't want it acting like an asshole on the road and didn't trust it enough to let the situation play out by itself.
As someone who has driven thousands of miles and encountered some interesting roundabouts and junctions, I cannot relate to your experience whatsoever.
"I just didn't want it acting like an asshole on the road and didn't trust it enough to let the situation play out by itself."
So basically you had to intervene, and it doesn't meet the standard of a fully autonomous vehicle. Do all the mental gymnastics you want, mate lmao.
I don’t understand the vision bet. As a consumer, I want self-driving with the expectation that it is reliable and affordable. Again, as a consumer I couldn’t care less how it “works”. If someone produces a working car using more than just vision first, they will be the winner.
"if the human brain can do it purely on optics, then so should we. if we need extra sensors that the human doesn't have, that means the model is worse than the human brain."
i can totally see that going through elon's 105-iq head. and there is something to be said there. kind of a "no cheating allowed!" rule, where "cheating" is defined as "a sensor the human wouldn't have had".
whether it has value... well... that's hard to say, but you have to make a bet, i suppose. the waymos are very expensive with their sensor arrangements. maybe that can be made cheap?
We still have tens of thousands of car-related deaths in the US.
Also, if doing it a different way than a human is cheating... are they sending data via organically grown neurons in the car? Didn't know that. Looks like "cheating" to me.
Maybe it makes sense in Musk's head.
Me? I want the car to be better than the human brain.
> Betting on solely camera vision for self-driving is not objectively a very good bet.
Maybe, maybe not. It's quite remarkable how far he's been able to go with camera only. And he's not the only one who went the camera route--GM Super Cruise, Subaru Eyesight, etc., work that way as well. There is still no mass-produced car with LIDAR. At worst, the camera tech has kept Tesla at the front of where the industry has actually been going the last 5 years.
Looking at the bigger picture, I'm not sure you can point to anyone who has made as many good bets across different engineering fields as Elon. Elon is the opposite of the Gell-Mann amnesia effect: you think he's wrong about the areas you're educated in, and he proves you wrong. You can find my posts on HN mocking SpaceX for going with an oxygen/RP-1 rocket as if we were back at the dawn of space travel. Boy was that wrong!
Elon of 2012 was a dramatically different person than Elon of 2025. And like I pointed out, he is talent-constrained. Elon is not developing camera vision himself. He is seeking and hiring talented software people to do it. In 2012 Elon was celebrated, and going to work for Tesla was a prestigious career move. Something you could flex.
Today you would be embarrassed to mention you took a job at Tesla, especially when so many other big-name prestigious companies will pay the same or more for the same skills Elon desperately needs.
I doubt that factors into most people’s decisions. If anything, aerospace and automotive engineers tend to lean centrist to conservative: https://par.nsf.gov/servlets/purl/10529569 (Table 5). And aerospace is disproportionately white men compared to other engineering fields.
Additionally, my observation is that reactions to Musk are heavily influenced by individual levels of empathy, and “people focused versus systems focused” thinking. In my experience aerospace engineers are pretty low empathy and low people focus. When I got my degree in aerospace engineering in the early aughts—before SpaceX and the reboot of commercial aerospace—the most exciting job to look forward to was designing missiles for Raytheon and stuff like that. When I worked for a military contractor, the hypothetical scenarios always involved stuff like “so we just overthrew Iraq’s government, and now we need radio uplinks to our UAVs so we can blow up terrorists.” Nobody caught feelings over that stuff. I doubt many of these folks care about Elon’s tweets.
It's neither aerospace nor automotive engineers that are solving camera-only self driving. There is extreme demand for AI engineers right now, and the people capable of solving a problem like that have pick-of-the-litter access to any AI lab they want.
I would imagine math-focused people are even lower in empathy/people focus. Heck, I can imagine it being a recruiting advantage to tout that, at xAI, engineers won't be hamstrung by liberal arts majors telling them to make the results politically correct. After all, xAI seems to have brought Grok to a competitive state quite quickly.
OTOH, we've seen open-source projects like Nix and arguably Rust, and arguably workplaces like Google and Twitter, taken over by Leftist ideologues (although I would guess that the Leftists are no longer in control of the middle two, and of course definitely not of the last one).
If you look on twitter, it's full of cringey "xXWealthMastersXx" accounts falling over themselves to hype Tesla. It's a full-on, steel-domed, welded-shut echo chamber.
It's becoming more obvious that X was acquired so that Musk would have control of a form of mass communication to benefit himself.
Very ugly world we live in: the joker masquerades as someone doing things for the benefit of humanity, and crossdresses as whatever will appeal to whomever, to benefit himself.