Get a Tesla if you want to learn about AI trying to kill you, says Steve Wozniak (electrek.co)
131 points by clouddrover on May 3, 2023 | 157 comments



"I think his point might be valid for gpt3.5 but has he tried 4.0? its on a completely different level, a game changer!."

This is pretty much what I have come to expect from any post that has the words "AI" attached to it.

As far as the Tesla is concerned, it's fun watching the screen to see what it identifies, and its cruise control works pretty well, but FSD should NEVER EVER be used. I test it every now and then against some freeway separators, and five years later it's still trying to crash into them, because somehow the road markers trump the side collision detectors that are telling it there is a wall on that side.


There were a couple of releases that were bad when I first got access, but the last couple have been good for me.

It is excessively cautious, but that's better than the opposite, and I take over whenever it could annoy other drivers.


I know someone who was rear-ended on the freeway because he had to hit the brakes hard and the driver behind didn’t react in time. Thankfully nobody was hurt. Just pointing out that “excessively cautious” can carry its own set of dangers, even if it’s ultimately safer.


Some of the scariest rides as a passenger were with drivers who were excessively cautious.


Because it comes across as a lack of confidence, and suddenly every minute is filled with second-guessing every aspect of their driving, with the added fear of having no control over perceived mistakes in their driving.


It didn't "come across" like a lack of confidence, it was a lack of confidence. And the mistakes were not "perceived".


That's what I've tried to tell some human drivers as well: driving excessively cautiously (below the common cruising speed) endangers all parties, even when the common cruising speed is slightly above the speed limit. Forcing other drivers to overtake or brake is not a good idea.


It's excessively cautious until it drives you under a semi trailer and decapitates you.


Yeah but once you’re decapitated driving is no longer a problem for you.


Maybe don’t watch Harry Potter when you should be driving if you don’t want to cast Expecto Cerebellum.


That was autopilot, not FSD


It's funny how all the grifters have moved on. Okay, the cars still can't drive themselves, but now surely AI is a year away from being conscious!


There's known value in autonomous driving.

Susan Blackmore observed that you can implement all the known functionality of human consciousness by answering the question 'Are you conscious?' with 'yes'.

We don't have any broad agreement that it does anything else, and we know that we do lots of things unconsciously despite our commonplace beliefs to the contrary.


aww, so three lines of BASIC is conscious.


Blackmore's argument is that people say they are conscious when they are awake and you ask them, but that's its only known function. Your reply appears to assume this is wrong. Do you have an argument to support it?


Interesting, so someone who has never said those words has therefore never been conscious, and we can treat them as we treat anything lacking consciousness. Hooray, we just solved all human problems: if someone has a problem, just turn them into soylent green! But only if they don't say those words!


> so someone who has never said those words has therefore never been conscious

That is not the argument.

People may well be conscious (no matter what they've said). What is the function of consciousness? Intriguingly, upon some reflection, we know of nothing that could not be done without consciousness [1]. One can seek food, mate, avoid danger, and so on without consciousness. One can speak and be intelligent without consciousness. So, why did evolution endow us with consciousness? Big puzzle.

[1] And now for the footnote/joke (which illustrates the profound point made in the paragraph above): The only thing we know of that one cannot do without consciousness is this: to truthfully answer "yes" to the question "are you conscious?".
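
(To make that concrete, here's a minimal illustrative sketch, in Python standing in for the thread's "three lines of BASIC"; it exhibits the one known outward function of consciousness without, presumably, having any:)

    # Implements Blackmore's sole known "function" of consciousness:
    while True:
        if "are you conscious" in input().lower():
            print("yes")  # the one thing it arguably cannot do truthfully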


Upon almost no reflection, you could make a long list of things that cannot be done without consciousness, or at least cannot be done yet and we have no good reason to believe could ever be done without consciousness. Can you understand anything without consciousness? Can you have an identity without consciousness? The argument you're presenting is that if physical mimicry is close enough to the original then it is the same thing, if we just assume that consciousness is nothing. It's saying that Frankenstein's experiment was largely a success.


Beep boop I think therefore I am beep boop.


Or, in a slightly less jaded take, the dreamers. Maybe not everyone excited by technology is trying to cheat you?


Maybe just the majority. But I'd flip it as well and say a lot of people seeking investment are also pushing AI onto the dumbest things. I've just stepped back from a company that was doing this: claiming they use AI, but only for rewriting marketing blurbs or something along the lines of analyzing log files, with zero usage in the company's actual workflows/products. Reeks of desperation, or of a scam, to be less generous.


Sure. But you can’t let the assholes distract from the miracles. That goes for everything in life. If you look at what AI does today versus what it did 10 years ago, we are clearly living in the future, dude. It’s better than I dreamed it could be, and I’m super excited to see what the non-assholes do next.


We've been living in the future since the 1950s. We put men on the moon. We have the microwave oven. We have supercomputers small enough to fit in your pocket. We've mapped the genome. We've almost had the year of the Linux desktop; we've almost had fusion in 10 years. Now we're really close to almost having AI for 10 years?

Living in the future is just such an odd comment to me. I get that you're enthusiastic about it, but what you consider futuristic is different from what I would consider futuristic.


Very hard to find the authentic dreamers behind all the hubris these days. And by these days I mean the past 20 years...


Both exist. I’ve seen a good number of folks who’ve gone from cryptocurrency to NFTs to AI in fairly quick succession, and I’m inclined to see that as grift.


The moment a dreamer starts charging you for promises that never materialize they become a grifter. (See: Tesla and FSD)


Frankly, it is 1 script away. GPT-4 + a self-feeding script is conscious (obviously). Just not as conscious as a human being (but much more conscious than a sleeping human being).


Man, all these companies and engineers that wasted billions over a decade trying to crack self-driving cars are gonna have egg on their face after reading this


What do self-driving cars have to do with the question of consciousness?


Forget self-driving cars, let's talk about self-assembling automatons. I'm talking about ... TRANSFORMERS, robots in disguise!


I’ve driven a few Teslas lately and have been extremely impressed by the performance and comfort. But Autosteer and the FSD preview are definitely underwhelming.

It can’t see a red light that is 120’ away in time to begin slowing down, nor does it consider that the navigation calls for a right turn at the light!

My theory is that the camera array needs to be larger and potentially much more data needs to be integrated into the decision making.

Even the FSD preview is scary when semi trucks seem to bounce left and right by about five feet per second while I pass them on the highway.

This is not a criticism of Tesla - the cars are a superb experience overall — just a note that the idea of self driving working on normal city streets or even suburban roads is pretty far fetched and will require significant enhancement to the tech that is in the 2023 vehicles.


> My theory is that the camera array needs to be larger and potentially much more data needs to be integrated into the decision making.

Or they could just use radar/lidar like everyone else.


I can’t believe that less data could be better than more data when it comes to self-driving car sensors. Why not also include audio data and IR/UV data too?

I realized today that the Tesla system is noticeably slow to react to stimuli. It reminded me of how some gas station pumps emit a beep a second or so after you press a button when entering the zip code. Laggy, underpowered embedded systems.



Human drivers don't need lidar and radar.


Human drivers also don't need wheels. We can't pretend humans and computers are the same today, so the wishful thinking that "we do it, so cars should be fine doing it the same way" is a recipe for killing people every time the human fails to intervene and prevent it.


I haven't seen clear proof yet that lidar/radar-equipped vehicles are insanely safer than current Teslas.


Before long, OpenAI will be able to implement FSD by trivially handing the video feed and API calls to GPTX and asking it to drive.

I think the Tesla AI team is just unserious about the problem, probably largely because of Musk being a poor leader for the project.


I disagree on the cruise control. Especially in Europe, where the streets are narrower, it constantly thinks it's about to crash and applies the brakes. I sold my Tesla because of this.


I try to avoid driving near a Tesla these days as I've seen so many do incredibly stupid things in front of me.


Disagree about FSD not being used. I've used FSD for thousands and thousands of miles without incident.


Have you tried FSD beta 11.3.x?


Get a Volvo if you want to learn about cars trying to not kill you, says Steve Notwosniak.

The Volvo Cars Safety Centre crash lab opened in 2000 and contains two test tracks, 108 and 154 metres long respectively. The shorter of the two is moveable and can be positioned at an angle of between 0 and 90 degrees, allowing for crash tests at different angles and speeds, or to simulate a crash between two moving cars. Cars are crashed at speeds of up to 120 km/h.

Outside, there is room for performing tests like rollover crashes and run-off-road scenarios, whereby cars are launched into a ditch at high speed. Here, Volvo Cars also offers rescue services the opportunity to hone their life-saving skills, as it did earlier this year when it dropped new Volvos from a height of 30 metres to simulate the heavy damage found in extreme crash scenarios.

https://www.media.volvocars.com/global/en-gb/media/pressrele...


If Volvos are supposed to be the safest cars, why are they often not given top performance marks by the IIHS?

I get that Volvo’s whole image is safety, but real-world data does not entirely back that up.


They do generally receive favorable marks from the IIHS: https://jalopnik.com/volvo-is-the-first-carmaker-to-receive-....

Even updated tests (released post-launch of the car) tend to treat them pretty favorably: https://www.cbsnews.com/news/most-small-suvs-fail-insurance-...

I think of the IIHS as a sort of benchmarking organization - not representative of the real world like any microbenchmark, but also correlated with production performance on some level. They seem to do quite well in their microbenchmarks.


I would be interested to see actual "real world" data vs "standard crash test" data. It's much easier to design something for excellent performance on a small subset of rigorously defined tests, vs designing for any/every possible outcome.


Volvo really is impressive with how much they put into safety and testing.

As for self-driving, I've had great driving experiences with their Pilot Assist.


My Tesla erratically turns the bright headlights on at seemingly random times, like when I’m pulling into my covered driveway when it’s cloudy, or at night when I’m driving right behind someone. I figure this is actually a good thing: it reminds me not to turn on any other, more potentially destructive AI-powered features.


My Tesla consistently believes there's a big rig parked next to it in my garage. Not kidding.


Is there a way to calm it down by turning off some features? I ask because I'm in the "hey, I love the range, and simplicity.. would buy one as soon as I can afford it.. just don't care about anything beyond cruise control" camp.


I have turned on the 'visualize everything that is perceived' feature, which is disabled by default. I don’t have FSD. I am not encouraged to buy it, based on what the car perceives.


You have to turn them off on each drive.

You can use just cruise control on its own; it works really well.


Mine too. It also thinks my partially raised garage door is a truck, when the door is about 3 ft in the air, where a truck's tailgate would be. When actually on the road it's decent at identifying things without false positives.


FYI, there's actually a big rig parked next to my garage. Not kidding.


Lots of “Tesla FSD Beta almost killed me” in the comments here but then I see hundreds of videos like this: https://youtu.be/Mj6vXnEYAUg

I choose to believe my eyes: it seems like FSD Beta can see almost everything a human sees. In fact, in the latest version I don’t think I’ve found a single example of it missing something clearly visible to a human.

Its planning needs some work; it seems to lack longer-term planning, frequently putting itself in positions where it needs to do 3-4 quick consecutive lane changes. You may hate Elon’s politics, but the FSD Beta has been released to 100k+ Tesla owners, and it’s not inconceivable that no one will be in the driver’s seat within the next 5 years.

Edit: as mentioned in the comments below, FSD can’t handle parking lots, which is annoying because that was the one place I hoped it would work the most.


I recently did a 70 mile drive on FSD with no interventions from local suburb roads, to highway, to urban roads, to some odd roads near a zoo.

That being said, oftentimes I've had to intervene going <10 miles on local roads because it's so cautious that I'm afraid other drivers may react poorly and cause an accident. Rarely, it has done some dangerous things that I'd characterize as "bad driving." So no, it's not there yet, and it is dangerous because of those rare situations, but it's also occasionally perfect and usually "okay."


I agree, many of my interventions are because:

1. It hangs at an intersection so long that the person behind honks at me

2. It refuses to change lanes out of fear, leading me to miss highway exits or turns (sometimes, though more rarely, it tries a turn from the wrong lane)

3. Some rides just get bumpy, with a lot of phantom braking that can be dangerous at times (I’ve seen the phantom braking lessen in V11, though)

I don’t think the standard fear of "if I didn’t intervene, I would hit such-and-such obstacle" is something I ever experienced with my FSD, though I did start using it in the much later versions (FSD 10+), where a lot of the kinks had been ironed out.


> In fact in the latest version I don’t think I’ve found a single example of it missing something clearly visible to a human.

Is this from personal experience of using FSD Beta or are you relying purely on YouTube videos to assess its performance? Because not everyone drives around recording videos for YouTube.


I use it every time I take my Tesla out unless I feel like driving. I rarely intervene on my drive to work. But honestly I’m not impressed by this, because it was true a year ago: FSD has no problem with the light-traffic roads of the Bay Area. I’ve also found FSD rarely has issues in SF; it can infamously do Lombard Street with no problems, but I’m guessing it’s overfit to SF at this point since the FSD team is in the Bay Area. I’m more intrigued by the YouTube videos of people trying it out in Florida, Texas, etc.


I’ve been using FSD in my 3 for months now, and I’d miss it in a car without it. It’s still definitely beta quality, but it’s a clear step up from Enhanced Autopilot and can take me from the on-ramp to my office without intervention (about thirty miles on California’s central coast). It would be door-to-door, but it doesn’t work well in parking lots, and there are a couple of complicated intersections in my small town that it can’t handle yet.


Lots of anecdotes, not a lot of bad accidents. But lots of miles driven. And lots of incidents involving people getting killed by other cars. Many of those incidents involve non-artificially-intelligent drivers being impaired by stupidity, alcohol, distractions from their phone or onboard infotainment system, or the fact that they just aren't very competent drivers. Or all of those combined.

I get it, it's easy to make fun of Tesla and its plentiful mess-ups and incidents. But the argument that those incidents add up to super dangerous situations consistently fails to translate into a whole lot of real-world injuries and deaths. You'd struggle to make that point. Teslas continue to have excellent safety metrics. And the more people they roll this out to, and the more these metrics fail to degrade, the weaker that argument gets.

Nice little thought experiment: ask yourself, would you rather have an AI drive your car than any random holder of a US driver's license (you don't get to choose)? IMHO the barrier for AI is pretty low if you think about it like that.


Yes, my Model 3 once tried to run me into a semi; on the other hand, I watch human drivers nearly kill pedestrians (myself included) on a daily basis.


I tried the self driving mode and it couldn’t handle a very popular and common road in Pennsylvania.

I have a friend who had his Tesla in the shop for 2 months and literally had no alternate car for this period.

Autonomous driving is not real. Do not believe Elon-hype.

The cars have a surprisingly high number of defects and bizarre issues.


Will autonomous driving help minimize auto-accidents? Yes.

Is the technology there yet? No.


My litmus test for this is simple: has the steering wheel been removed, and has the manufacturer taken on all liability for at-fault claims? Until then, I don't want a self-driving car. If the manufacturer doesn't believe in the tech enough to own liability, how can I?


I think assuming liability is sufficient; calling for the removal of the steering wheel doesn't seem relevant.


By the time the calculus includes the savings of the steering wheel, column and other components, and the gear selector, mirror adjustment control, brake and accelerator pedals, and all the other systems that used to be needed by a human driver, autonomous driving (hopefully) would be really really good.


The manufacturer doesn't assume any liability right now, so even if their self-driving technology dramatically lowers your risk, why would they assume any of that risk? I see the point you're making: sure, if they assumed all liability it would make sense for me to accept that deal, but they're not going to, and it could still be a very, very good idea for me to adopt the technology.


> if they assumed all liability it would make sense for me to accept that deal, but they're not going to

Why not? Insurance companies will accept liability for my driving fuck-ups in exchange for a monthly payment. And doing so doesn't instantly bankrupt them or anything. Why shouldn't a self-driving car company do the same thing?


The insurance company makes money because you are willing to pay more to reduce your personal risk than it costs them to accept it (a dollar you lose is worth more to you than a dollar you gain: the declining marginal utility of wealth, an asymmetry they make money on, especially considering that a dollar lost is much less significant to them).

G*P didn't mention payments for accepting risk; he just said if they accepted liability.
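
(A minimal numeric sketch of that asymmetry, with entirely made-up figures and a standard concave utility function, not anyone's actual premiums:)

    import math

    wealth = 50_000.0
    premium, loss, p = 1_200.0, 30_000.0, 0.03  # hypothetical yearly figures
    u = math.log  # concave utility: a dollar lost hurts more than one gained helps

    eu_insured = u(wealth - premium)
    eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

    # The premium (1200) exceeds the expected loss (0.03 * 30000 = 900), so the
    # insurer profits in expectation, yet the risk-averse buyer is still better off:
    print(eu_insured > eu_uninsured)  # True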


That's an interesting perspective. Perhaps in the future, we'll have FSD car subscriptions that also include insurance, so if you're in an accident the car manufacturer will cover everything.


I believe Mercedes has a car where they do assume some liability.


Your steed awaits… /s

Mercedes Drive Pilot was to accept liability for accidents, under a list of conditions, by the end of 2022.

https://www.roadandtrack.com/news/a39481699/what-happens-if-...


I also don't trust self-driving cars right now, but FWIW removing the steering wheel would not be close to legal, so I'm not sure it makes the most sense as the litmus test.


Assuming liability should be legal. That's literally what insurance does.


Yeah but the legal environment isn’t the reason they still have steering wheels.


I use Full Self-Driving for long highway trips. I do, however, take the advice of the manufacturer and stay fully alert. It has caught situations where some bonehead was drifting into my lane or changing lanes without noticing me, and I didn’t notice them fast enough, and it braked, avoiding an accident. Likewise, I’ve noticed it make mistakes and asserted control. My estimation is that the joint probability of the model catching a dangerous situation and me catching it is significantly better than me alone. It is also able to navigate through confusing interchanges that I normally screw up by taking the second exit of four instead of the third, or whatever.

I think a litmus test of “perfect” is a fair test, but it leaves a lot of value uncounted in the meantime.

I think perhaps ignoring essentially everything Elon Musk says and does and judging based on what’s in front of you is what works for me in not getting baited into the “is the Tesla FSD sentient and can it do my taxes” hype cycle, with its requisite jadedness.


Check out the Cruise Origin


The technology is there. Waymo and Cruise have been testing self-driving cars in various US cities for years, putting thousands of engineering hours into safety. You can take a driverless ride in a Waymo in Phoenix right now.

The problem with Tesla is that it was a rushed system built on the whim of a 50-year-old nepo baby, and real safety requires holistic sensor suites that include LIDAR and RADAR, as well as a fuckload of redundancy, in addition to cameras, which are the only thing Teslas have.


There were a bunch of articles a couple of days ago how emergency responders are having an increasing amount of trouble with Waymo and Cruise. The root issue seems to be that the cars can't handle exceptional situations on their own, and there are not enough human operators to deal with them quickly enough.


Cruise and Waymo have slightly inconvenienced emergency responders. Tesla crashed into a firetruck and killed someone.


How? FSD means being constantly alert and ready to take over.

If the driver didn't do this, he killed someone, not the car. It's abundantly clear to anyone opting into the beta.


How then is that FULL self-driving? Cruise and Waymo cars have driven millions of miles with nobody in the driver's seat, and they have avoided killing anyone. They don't require a human to be constantly alert and ready to take over. That is full self-driving.


Legal liability is not the same as causation. If a Tesla using FSD crashes into something, then the Tesla software crashed into something.


I wouldn't call it rushed; it works far better than most assisted-driving systems. It will probably soon be rolled out as that, and I would not be surprised if, eventually, it becomes good enough to be fully self-driving.


What's your evidence that Elon Musk is a "nepo baby"?


Personally, I don't think it ever will, as long as we're trying to mix human drivers with automated drivers on existing infrastructure. It seems like if it were 100% automated, with a system for communication between vehicles so a car could "ask" the cars around it before changing lanes, we'd be a lot better off than having the car "guess" what a human is attempting to do.


How do you reconcile your view with Waymo having had genuinely driverless cars on roads shared with human drivers for a while now?


Waymo is not available in my area, so it doesn't exist to me.


That does not make sense for a robotaxi.


Will banning all human agency help minimize accidents of all kinds in life? Yes.

Do we want to? No.


Human agency to impart 782,000 newtons at will, regardless of intoxication, skill, alertness, etc., causes accidents that kill and maim an awful lot of people. I for one look forward to the day that "idiot weaving through traffic at 80 mph in an enormous SUV" agency is banned, for my sake, my family's, and everyone else's.


Do you also look forward to not being able to do anything else either? Because that's where things are heading if you continue to choose safety over freedom.


The bulk of our society seems to be losing the ability to be self-responsible and behaves erratically and dangerously and then blames everyone around them when it goes wrong. If that continues then the logical result is going to be the restriction of freedoms.

If you want to fix that, you need to do something about the rising narcissism and irresponsibility in society. If you just try to lecture people about safety-vs-freedom, then you will be losing your freedom.


You could make that argument about anything.

You could make that argument about needing a license, or how you're not allowed to take a turn driving a plane.


The very first time I tried FSD beta, the car immediately tried to crash me into a light pole. It was an unsettling experience. Still the best car I've ever owned, by far.


I love these Tesla reviews, which sound like a battered wife: "Sure, he slaps me around sometimes, but he's still the love of my life."


Unlike an abusive spouse, Tesla's dangerous features are opt in.

I love my Tesla. I think FSD is a horseshit scam with some neat tech that's nowhere near ready for the public. Fortunately, I don't pay for it, so it doesn't affect what I do with my car.

I do find it incredibly frustrating that the vast majority of Tesla conversations devolve into screaming matches between people who are rabidly pro or rabidly con, with both sides doing a pretty poor job of representing the truth. And those in the middle generally wander off to do something far more productive literally anywhere else on the internet.


> Tesla's dangerous features are opt in.

Not for the other drivers and pedestrians.


This is a key thing that people often forget. Very similar to the shenanigans social platforms pull by tracking people who do not use their platforms.

The collateral damage from these types of decisions is invisible to the person actually using the thing causing the damage, and it is very hard to convince them that it is real.


"Some of you may die but thats a sacrifice I am willing to make."


Off-topic, but I'm always struck by how great a quote that is. I hope whoever came up with it at DreamWorks was given a raise and a promotion after Shrek.


> I do find it incredibly frustrating that the vast majority of Tesla conversations devolve into screaming matches between people who are rabidly pro or rabidly con, with both sides doing a pretty poor job of representing the truth

They sold it as autopilot. If you market something to do X and it does Y, you're bound to get people upset. Especially when it costs a chunk of the moon.


I don't own a Tesla, but if I refused to buy any product with misleading marketing, I'm not sure I'd ever buy anything more sophisticated than a loaf of bread. Then again, I also don't use any driver assist features beyond parking sensors and don't expect to see any actual autopilot within my lifetime, so maybe I just haven't been as annoyed by Tesla's nonsense. Regardless, for a purchase most people regard as rather significant, it doesn't seem unreasonable to expect them to look at least marginally beyond a marketing misnomer.


The people being upset don't seem to be the same as the people owning the Teslas, though? Just gauging from HN discussions, the comments with the most moral outrage seem to be coming from a place of pure principle.

Edit: I scroll down and the first thing I see is a first-person account of someone who sold theirs after disliking the overall experience, and I turn white. Welp, guess this is an opinion I should reflect on.


"There are several paragraphs of warnings and disclaimers both on the sales page and when you use the product that exactly explain the limitations, but I absolutely insist on fixating on this one particular word. Nothing else matters."

Planes have autopilot too. It doesn't mean the pilot turns it on then goes and takes a nap. They still have to be alert.


They weren’t opt in for the children in the crosswalk it would mow down. See, e.g. https://youtu.be/3mnG_Gbxf_w


Did they have their foot on the accelerator?

Teslas always allow human input to override and they don't auto-steer through cones. There were several attack ads that exploited these two very reasonable behaviors to force the car to impact a dummy before blaming the impact on FSD. Is this one of them? This particular video doesn't look familiar from the last kerfuffle, but I see cones and I see a conflict of interest. Is there an interior shot showing the console?


Not FSD. This just tests AEB, Autonomous Emergency Braking. The human is completely in control.


Got it. So Tesla can’t even get braking correct. They should be able to do AEB before they are allowed to do FSD in my opinion.


AEB is damage reduction, not prevention. It trusts the user.

FSD is great around VRUs.


Did someone have their foot on the accelerator?


I’m simply con because, for the price, the build quality is terrible, and I’m tired of the hype FSD has been given. It’s all garbage, and Elon and Tesla deserve a backhand for the marketing scam.


I'm sorry, but this is pretty much what I'm referring to regarding lack of nuance.

It's not all garbage, and the build quality these days is actually as good as any other car I've had. But FSD is vaporware, and Elon is an actual troll manifest.

Like most things in life, the cars are not 100% gold or 100% shit.


Nuance in an internet argument? Well I never!


Surely nuance doesn't always make for a better argument!


[flagged]


It's not an irrational hatred at all. Tesla released a dangerous feature onto public roads. That put the lives of Tesla drivers, and people around these cars generally, in jeopardy. They were forced by the government to recall every vehicle with self-driving because it was so dangerous. It also damaged trust in automation generally, which hurts more good-faith actors taking more rigorous and comprehensive views on safety. The hatred is completely justified, especially when you factor in the facts that quality control on Teslas is known to be garbage, and that your reputation is damaged by the antics of their CEO.


Most car companies have recalled cars at various times, sometimes for issues that have caused accidents.

Yet, somehow there is not a national news story every time a Toyota gets into an accident, with the reporter speculating that the accident was caused by a fault in the car, before any facts about the accident have been released.


There are many anarchists among hackers. They like the technology but hate the businesses.


Right? I sold mine, getting rid of that car was a huge sigh of relief. From parts shortages to labor shortages to weird manufacturing defects that no one could correct to the "overpromise and underdeliver" attitude of the company...

The car was cool for a few years, and then it became deeply irritating. We got a Volvo, and holy shit the Volvo had better tech.


I remember years ago seeing videos/posts about how much safer the Tesla was as far as build quality. I don't know if those are still around, but surely they have been watered down by all of the software issues, so the overall rating of the car is much lower.


In fairness, if the problem is only with the self-driving, you can circumvent that by not using it. "He's a good dog, but he's completely uncontrollable if you take him to the beach because he tries to swim straight into the sea": well, if you don't go to the beach there's no problem.


> In fairness, if the problem is only with the self-driving, you can circumvent that by not using it

I wasn't aware that I could disable FSD for Tesla owners when they're using the roads in my city. Could you point to that feature, please?


The question there is whether Tesla owners are more likely to kill you than other drivers. I have no data on it.


Maybe that's one of the things you can get in the Nokia chassis hacking device floating around?


It's a $15,000 product. There's nothing fair about saying "just don't use it." That's not even a valid excuse when your $2 flashlight doesn't make light.


That's the nature of fanboyism.

I met an Apple fanboy recently. His thing was that Apple is amazing because they care the most about user experience, the design is so amazing, and it's so much better than anything else! After he talked at length about how everything about Apple is great, I said that I'd learned Apple made iPhones slow on purpose three years in, and when caught, said it was about saving the battery. I said that's not a good user experience to me if the phone becomes slow after three years. He told me that _his own iPhone_ is going through that, and he still hadn't had time to buy a new one but he was going to, but it was great because it was a conscious design choice by Apple.

These people are just brainwashed by a brand.


Your misrepresentation of the issue reeks of ideological fanboyism itself.

As the battery ages, running at peak power can lead to random shutdowns/restarts. Apple chose to throttle the devices (which in most use cases is basically imperceptible) to avoid random shutdowns of the device, which in my view is indeed a better user experience.

I have seen no indication that competing phones do not suffer the same issue.

The notion that Apple deliberately hobbled their devices to reduce their lifetime in order to encourage sales is ludicrous; iPhones hold their resale value far better than other phones.

https://support.apple.com/en-gb/HT208387

https://9to5mac.com/2021/01/21/iphone-trade-in-value/


> Your misrepresentation of the issue reeks of ideological fanboyism itself.

What am I a fanboy of?


I have a 2023 RAV4 with lane assist and adaptive cruise control. These "AI" or simple self-driving features are really showing me the fallibility of self-driving.

The car panics at silly things and is entirely too aggressive with braking.

It's also quite daft at the times when it just won't use braking, like going downhill in adaptive cruise control: the car will not use the brakes to stay within the set speed when going downhill, it will just beep really annoyingly (no, it can't be turned off). But if a car in front of it starts slowing down, it is quite happy to slam the brakes on heavily (nothing dangerous, just that it's either on or off; it won't graduate the application).

I know Tesla's version is an order of magnitude better, but I can easily see how it can be fooled or do the wrong thing. I wouldn't trust it unattended.


For people that care about statistics rather than anecdotes: "In the last 12 months, a Tesla with FSD Beta engaged experienced an airbag-deployed crash about every 3.2 M miles, which is ~5x safer than the most recently available US average of 0.6M miles/police-reported crash" - https://twitter.com/Tesla/status/1631063252875505669?lang=en


If you really care about statistics, you will look into how biased their data is.

It's intentional and very misleading. As per your own link, it's comparing apples to oranges (airbag-deployed crashes vs. police-reported crashes). It's not accounting for different drivers or road profiles.

Check some independent reviews of the data - it paints a very different picture from what Tesla would like us to believe.
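
(A minimal sketch of that bias, with entirely made-up rates, showing how restricting a system to easy miles can manufacture a big "x-times safer" headline with zero real safety gain:)

    # Hypothetical: highway miles are safer than city miles for ALL drivers,
    # and driver-assist miles skew heavily toward highways.
    crash_rate_city = 1 / 0.4e6     # crashes per mile (made up)
    crash_rate_highway = 1 / 3.0e6  # crashes per mile (made up)

    # Fleet-wide average over a 50/50 mix of miles...
    fleet_rate = 0.5 * crash_rate_city + 0.5 * crash_rate_highway
    # ...versus a system engaged only on the highway share.
    assisted_rate = crash_rate_highway

    print(fleet_rate / assisted_rate)  # ~4.25x "safer", same underlying safety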


For people that care about statistics rather than advertising:

https://www.forbes.com/sites/bradtempleton/2023/04/26/tesla-...


The relevant section of the interview: https://youtu.be/OU9cKjWsvH0?t=499


> It looks like Woz is not scared of Tesla’s “hardcore litigation” team, which has been a bit trigger-happy when it comes to suing people for defamation lately.

Can anyone expand?


That quote has links in the article; following them leads to a lot more detailed information.


My Tesla has been great. It saw a pedestrian in all black at night in a crosswalk that I didn’t see. The car stopped, and I think it saved a life. Autopilot is super nice on the highway and in stop-and-go traffic. FSD drives like an 8-year-old stole the car and is driving for the first time :)

TL;DR: we’re going on a road trip next month and taking the Tesla. When taking the Jeep was mentioned, no one wanted to drive, lol!


I saw Steve Wozniak speak at a business event less than two months ago. I was falling asleep after hours of business boredom. When asked how, in regard to AI, these technology leaders should be held accountable, he quoted Dr. Ted Kaczynski and mumbled something about execution. After a brief pause he walked it back, but it sure woke me up!


He's right, you know. Installing Tesla FSD puts your car on LSD: it hallucinates the moon as a yellow traffic light, slowing down in the middle of the highway, and runs into bright red emergency vehicles on the road.

Nothing short of AI snake oil that puts the lives of drivers on the road at risk.


Rented a Tesla for a day back in 2021 and tried to let it drive itself, can confirm it spent the day trying to murder me.


Autopilot 1 is actually not that bad, if you understand the limitations.

Just don't use it in town, and do your own steering on extremely curvy roads.

I don't think it's any different from understanding the limitations of cruise control. If you stay aware, it can do a lot of work for you.

(Autopilot 1 is the first Mobileye version.)


If you can’t use it in town or on curvy roads, how is it better than my base model Forester’s lane/distance keeping?

Every time someone explains the features to me, they all sound cool and my interest is piqued, but I never quite grok what it practically does that would change daily driving for me.

I think lane changing might be neat but I just don’t do that much on the highway?

For me the massive advancement was getting a car with EyeSight and now it keeps me in the lane and evenly spaced and makes stop and go traffic painless. Oh and all the safety stopping systems. Every car should have those!

Beyond that feels like diminishing returns until we get to some new level of autonomy which no car currently has.


I should qualify my statement about tesla autopilot 1.

Extremely curvy roads, I mean. Curvy roads are fine, but on really tight turns it hunts a little and it's just easier to steer yourself.

And I should be clear this is a corner case at the edge. On the freeway, even curvy mountain freeways, it does very well.

And in town I haven't had any issues, but it doesn't stop at stop signs, so I like to drive myself.

This is just common sense to me, sort of how you don't use (old, traditional set-one-speed) cruise control in heavy traffic.


Does EyeSight do everything except change lanes/make 90 degree turns? Like, to the extent that it requires human oversight for safety reasons but is practically sufficient on its own? Because what you are describing is 100% of what I liked about the Tesla's capabilities.


As long as the highway has half-decently painted lines, I can drive from London to Toronto without having to “drive.” It will get angry if my hands are off the wheel, though, so I just keep them there and feel the car making every curve and such.

It will slow if traffic slows then speed back up. It’ll do stop and go traffic without me touching a pedal. It does curves better than I ever expected it to. Any curve you find on a highway it’ll handle. It can do some off-ramps if they’re not super tight. But I don’t generally trust it to do those.

I don’t trust it on non-divided country roads so I don’t really try. And it turns itself off fully if there’s too much rain or snow.

I test drove a Mazda… 5? Whatever the comparable is, and a Honda CRV and their systems didn’t come close. Couldn’t even tell if the Mazda’s was enabled. The CRV bounced from line to line in the lane. They use radar and a single tiny camera. EyeSight uses two big stereo cameras which I guess is the difference.


Incredible. Thank you, thats good to know.


Musk was definitely overly optimistic on the FSD timeline but the hate is pretty wild.

Please correct me if I’m wrong, but isn’t Tesla’s Autopilot something like ~10x as safe as the average driver?


Tesla driving stats are cherry-picked, biased by the population of people with enough money to own a Tesla, biased by the locations where Teslas are driven, and don’t consider the failure modes that Autopilot faces but human drivers generally avoid (e.g. shearing off the side of a parked ambulance).


Most importantly: biased by the locations where drivers feel comfortable using autopilot.

These numbers should only be compared to miles on cruise control in other cars.


You haven't made a convincing refutation of what he asked. For example, Tesla Autopilot might have failure modes that human drivers avoid, but while that could be a PR nightmare for the technology, it wouldn't necessarily outweigh Teslas doing better at other things that humans are poor at. And if it's true(?) that people who can afford better cars drive better, that's a fact you'd get downvoted to hell for (unfairly) around here if you stated it in a freestanding way.


The closer you look at those statistics the more questionable they are. You can’t directly compare ‘overall crashes per kilometre across all human drivers and conditions’ with ‘crashes per kilometre while autopilot is active’ when autopilot being active implies:

- driving on a divided highway (what’s the actual prerequisite?)

- good enough weather for autopilot to engage

- in a relatively new, high end luxury vehicle which is likely to be well maintained and have better handling and braking than average vehicles

- likely to be driven by older middle-class drivers less prone to risk-taking

- unclear what the cutoff is between ‘autopilot driving’ and ‘human driving’ during a crash, if autopilot disengaged before impact does that count as human or autopilot driving?


I sold my Tesla last summer but as of then, my anecdotal experience is that it drove like a drunk teenager. Very uncomfortable to sit there and pray, I only used it in traffic jams (which it handled like a champ) and when the road was mostly empty (which made it a lot less stressful wondering what it does next).


Among what others have stated, I guess it depends upon what you mean by "safe". It's possible that Tesla autopilot avoids fender benders but increases chances of fatal crashes.

If you're looking at deaths: in the USA, there are about 1.5 per 100 million miles driven (this includes pedestrians, etc.). That's a pretty high bar. Way higher than most autopilot apologists imply.

I'm not saying autopilot won't ever be safer than humans. Just that it isn't as easy as it appears.
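
(A quick back-of-the-envelope on why that bar is so high, using the 1.5-per-100M figure above plus an assumed ~13,500 miles per driver per year, roughly the US average:)

    deaths_per_100m_miles = 1.5
    miles_per_death = 100e6 / deaths_per_100m_miles    # ~67 million miles
    driver_years_per_death = miles_per_death / 13_500  # ~4,900 driver-years
    # A fleet needs billions of logged miles before its fatality rate can be
    # compared to the human baseline with any statistical confidence.
    print(miles_per_death, driver_years_per_death)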


"isn’t Teslas autopilot something like ~10x as safe as the average driver? "

Bruh. 10x as dangerous would still be a low number.


Only according to Tesla's marketing. In reality it can barely drive from point A to point B in a dense city without killing someone.


Oh no! I have a lot of respect for Wozniak. Now I have to watch Musk call him all kinds of names. I shudder to imagine how nasty and petty Musk will get.


Where's the lie?



