The Battle for Best Semi-Autonomous System: Tesla Autopilot vs. GM SuperCruise (thedrive.com)
193 points by kartD on Feb 22, 2018 | 199 comments



GM did a nice job there. They have a good solution to the "expecting the driver to watch and take over" problem. Their system watches the driver constantly to make sure they're in position to take over, but doesn't require a hand on the wheel. GM is also more careful about when to allow the control system to engage.

Tesla's crashes on "autopilot" have mostly been in situations where the system should have detected an obstacle. Four times, a Tesla on autopilot has slammed into an obstacle partly blocking the left edge of a lane. All those crashes were in freeway situations, where Autopilot is supposed to be reliable. That's inexcusable. Tesla's radar has inadequate resolution for the job, the vision system doesn't really recognize fixed obstacles, and they don't have LIDAR. So they don't detect big things like fire trucks and street cleaning trucks that are mostly on the shoulder and partly into the lane.


I’ve got a Tesla Model X and I’ve been in situations where the autopilot screwed up and required me to take over to prevent an accident. Most recently the car was autonomously changing lanes on a highway and didn’t see a guy on a motorcycle who was following too closely to the car in front of him. It’s probably an edge case, but still makes me wary of using it more.


As someone who rides, this is scary. I'm going to begin giving semi-autonomous vehicles a wide berth.

Whatever car company exec said that humans are going to "bully self driving cars" was very wrong. I'm very scared of them and I'm not going to mess around.


This is a very rehashed point, but just because these cars crash into things in stupid situations doesn't mean that they are less safe than what we already have. Shouldn't you already be giving semi-autonomous, fully-autonomous and non-autonomous vehicles a wide berth?

I assume you mean rides [a motorcycle], which is ~35x more dangerous than driving a car according to Wikipedia, and I'm guessing driving a car is already the most dangerous thing people do in a day. They would be completely illegal if we worried more about safety than about cost-benefit ratios. Even if AutoPilot/SuperCruise vehicles are 10 times safer than regular drivers they are still going to take out the occasional motorcyclist.
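
For a rough sense of scale, here's a back-of-the-envelope sketch in Python; all the numbers are assumed round figures (the US fatality rate and annual mileage are approximate, and the 10x factor is the hypothetical from above), not measured data:

    us_vehicle_miles_per_year = 3.2e12   # rough US annual total, vehicle miles
    deaths_per_mile           = 1.1e-8   # rough US baseline fatality rate
    av_safety_factor          = 10       # the hypothetical "10 times safer" system

    deaths_today   = us_vehicle_miles_per_year * deaths_per_mile
    deaths_with_av = deaths_today / av_safety_factor
    print(f"human fleet:   ~{deaths_today:,.0f} deaths/year")    # ~35,200
    print(f"10x-safer AVs: ~{deaths_with_av:,.0f} deaths/year")  # ~3,520

Even a tenfold improvement leaves thousands of deaths a year, some of them motorcyclists.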


With human-driven vehicles there is often a chance to make eye contact to confirm that you see each other. Perhaps the robot cars need some equivalent. Interesting interface design problem.


> Interesting interface design problem.

And so Don Norman (of The Design of Everyday Things fame) is working on exactly that at UCSD: http://www.cogsci.ucsd.edu/~norman/


Eye contact doesn't ensure the safety of either party.


That's not true. The human brain has evolved to pick up subtle cues that even our conscious minds don't register. Eye contact is a reaffirmation that the driver has seen me (and vice versa) and notices my intent. Of course, people can misread it. But in my experience riding a bicycle or crossing a street, simple eye contact and a head nod go a long way in resolving a conflict over who goes first or whether it's OK to cross.


The Illinois Motorcycle Handbook definitely states that just because you and another driver make eye contact, that doesn't mean they actually see you. Either way, as a motorcyclist, I'm not going to count on eye contact to ensure my safety. I ride and drive defensively at all times. I assume the worst in most situations.


> I assume the worst in most situations.

If that was really true you would never ride a motorcycle.

Eyeballing the other guy is no guarantee of anything, but it provides quite a lot of information about the risk you face in the interaction. I think we would miss it when dealing with fully autonomous vehicles, and that the vehicles might need to add something to tell you they see you. I've done some work on this idea in the lab with mobile robots.


> If that was really true you would never ride a motorcycle.

This is such a garbage statement. We check our parachutes twice before jumping, and while not jumping is inherently safer, having decided to jump you can still protect yourself from the risks inherent in the activity.


If you assume the worst, there'd be no point checking the parachute. You assume it will fail.

Since this is not how people really behave, we are very rarely actually assuming the worst.


Of course it doesn't. But it's better than nothing. We do it all the time.


Motorcycles are dangerous because of other motorists. Most are actually intending to do you harm; see, e.g., the California study on lane splitting. The only advantages you have as a rider are speed and nimbleness. My rule: when I'm riding, I automatically assume no one sees me, and I act accordingly. I try to make sure I'm not sandwiched by cars.


Motorcycles are still dangerous without other motorists. My car will protect me from a lot of mistakes that would be fatal on my bike.


I've been passed by bikes doing slalom at ~300 kph. It's not common, for sure, but it's memorable. So my rule about bikes is that they're unpredictable.


25% of motorcycle accidents don't involve cars, just riders screwing up.


> They would be completely illegal if we worried more about safety than about cost-benefit ratios.

If we worried about safety, there would be far, far fewer drivers on the road.

See, society has decided that practically every adult is perfectly capable of controlling a several-thousand-pound piece of metal, while practical evidence points elsewhere: some or even most people drive so badly they shouldn't be doing so.

We built a world where the motorist rules, and we are paying for it with a huge number of dead and disabled people. Is it truly worth it? Could we live another way (sure, once we did, but we want to do better now)? Transit and the like is mostly sold on the merits of climate change these days, and no one dares to say this: you should be on transit because you can't drive.


Not here to defend Tesla, but I drove a 2014 Audi A6 and its blind spot assist would sometimes miss motorcycles and small cars (e.g. Mini Cooper, Fiat).


As a motorcyclist who hoped this would be a huge improvement, that's very sad. Most of the near-accidents I've had have been inattentive drivers not checking their blind spots.


I've noticed blind spot detection on other cars being unable to pick up small (empty) trailers when I pass them. They probably assume anything with less cross-sectional area than X is a false positive, in a config file somewhere.
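
Purely as an illustration of that speculation - every name and number below is invented, not taken from any real system:

    # Hypothetical sketch of a cross-section threshold filter; all names/values invented.
    MIN_CROSS_SECTION_M2 = 0.5  # returns smaller than this get discarded as noise

    def filter_returns(returns):
        """Keep only detections above the assumed size threshold."""
        return [r for r in returns if r["cross_section_m2"] >= MIN_CROSS_SECTION_M2]

    detections = [
        {"id": "car",           "cross_section_m2": 2.5},
        {"id": "empty_trailer", "cross_section_m2": 0.3},  # silently dropped
    ]
    print(filter_returns(detections))  # the trailer never raises an alert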


Or they were right, since we don't even have fully self driving cars yet...


To be fair here, and for people who might not know, Teslas do not "autonomously change lanes". When autopilot is active, the driver must initiate the lane change themselves, and it's the driver's responsibility to do so when it's safe. Tesla does not claim otherwise.


Most would describe that as autonomously changing lanes (after user initiation).

On my Model S test drive the salesperson-not-called-a-salesperson used the language “It changes lanes for you”


The article actually discusses the lane change feature specifically. He argues that, if the car requires the driver to look and confirm that the lane change is safe, you're probably better off having the driver be in control of the whole process.

As a driver, this makes sense to me. I've definitely had cases where someone driving quickly "appeared out of nowhere" while I was changing lanes. I'd much rather be fully engaged in the lane change process and prepared to immediately back off than give an autopilot a thumbs-up and let it handle the details.


I for one would read autonomously changing lanes as the system choosing the optimal lanes - and swapping - itself.

What you're describing is more like assisted, user initiated lane change.


Yeah, sure they would...


Seems reckless to do autonomous lane changes with only blind spot detection. I can't see how ultrasonic sensors could detect lane-splitting motorcycles.


I believe Tesla's "Enhanced Autopilot" has additional cameras pointing backwards for this reason. They claim they were added to address the issue of cars approaching more quickly than the ultrasonic sensors are useful for, but I wonder if they would have found this biker too.
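
Rough numbers on why ultrasonics alone can't cover this, assuming a typical automotive ultrasonic range of about 5 m (actual sensor specs vary):

    # Warning time an ultrasonic sensor gives vs. closing speed; 5 m range assumed.
    sensor_range_m = 5.0

    for closing_kmh in (10, 30, 60, 100):
        closing_ms = closing_kmh / 3.6            # km/h -> m/s
        warning_s  = sensor_range_m / closing_ms  # time from first detection to contact
        print(f"{closing_kmh:3d} km/h closing: {warning_s:.2f} s of warning")
    # At 100 km/h closing speed that's ~0.18 s - nowhere near enough to plan
    # a lane change around, hence the rearward cameras.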


Lane splitting should be illegal anyway.


People will do it even if it's illegal, and it would be better for everyone involved if autonomous cars can deal with it safely.


This is why self driving cars are so much harder than one might think. Just getting the basics of self driving down is challenging. Now throw in all the possible edge cases, some of which shouldn't ever happen, and you've got a wildly interesting and challenging problem. It would be a blast to work on that team.


In addition to your points I think GM also acted with more integrity in naming the feature. Tesla calling it autopilot is irresponsible and dangerous.


Agreed. I'd go further, though, and say the semi-autonomous feature itself is fundamentally flawed. It's a mishmash of what the car will do and what the human is expected to do, all the while making the human responsible even if the car fails in doing its part.

So, I don't buy into their disclaiming of responsibility if the car fails in what it is doing autonomously.

That is, it's one thing if a safety measure fails to engage (for instance, the car fails to brake to avoid an accident while still under the human's control). However, it's a different matter when the car offers to actively perform a function in lieu of the human, then fails in properly performing it.

By the mere existence and offer of the function, the company is essentially making a claim as to the vehicle's fitness for that function and asking the human to trust that claim by assigning control to the vehicle. No amount of verbiage that tries to assign responsibility back to the human can offset the fact that the vehicle was under its own control when it failed.


Tesla's autopilot is functionally identical to a boat or plane autopilot. If a pilot or skipper relied upon autopilot as a full self driving system they would most likely be dead before the end of their trip.


Something like 92% of Americans don't own a boat, let alone one fancy enough to have an autopilot. 99.9% of Americans don't own a plane. As such, most Americans have no exposure to an autopilot and its functionality beyond perhaps having heard the name.

Tesla themselves muddy the waters. On https://www.tesla.com/autopilot, it states "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

Further down, it talks about "Full Self-Driving Capability", "All you will need to do is get in and tell your car where to go.", "The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.", etc.

It's not shocking folks have gotten the impression it's a self-driving feature instead of a fancy cruise control.


I was going to write something nastier. But really. Those would seem to be, at best, wildly optimistic claims. And at worst, simply counterfactual.


The second point is salient - theoretical hardware capability is still a long shot from actual Level 5 autonomous driving.


Yes, exactly. They don't clarify at all that the full driving capability only comes after they finish the hardware, and they have taken forever. I'm glad I have a HW1 Tesla. It works: not full automation, but a lot of good features like TACC.


Full self-driving is a vaporware feature separate from the current Autopilot. It's confusing that they would advertise a feature that doesn't exist, but it's not like they're claiming Autopilot works that way. More importantly, the page where you configure your car explains the distinction very clearly (and charges extra for full self-driving, even though it doesn't exist!), so I doubt anyone has actually bought a Tesla thinking that Autopilot was full self-driving.


From that same page:

"Tesla’s Enhanced Autopilot software has begun rolling out and features will continue to be introduced as validation is completed, subject to regulatory approval. "

"Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction. It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval."

The latter paragraph is very vaguely worded and certainly not very clear that the functionality is nonexistent as yet.


It is complete vaporware. Their radar sensors depend on a hokey whitelist database to avoid triggering false positives on "problematic" static structures like billboards and overpasses. Tesla deserves to be sued into the ground for daring to make such audacious claims. It is complete lunacy to believe that this is acceptable for autonomous driving in a dynamic real world environment.


Actually, Tesla's ability to deal with dynamism (even without lidar) is something that puts them ahead of most cars. The famous Waymo system required pre-mapped roads; it didn't work on changes. They even had lidar and they had that restriction. I don't know if they've relaxed it since then.


Yes, I said in the comment you're replying to that Tesla's advertising for that nonexistent feature was confusing. I'm not sure why you're repeating it back to me.

But at any rate, this thread isn't about whether it's clear that the Full Self-Driving software is vaporware. The question is whether people are buying Teslas under the misunderstanding that regular Autopilot is the same thing as Full Self-Driving.


Appeals to real-life autopilot are not really relevant, because that isn't what regular people know. What "autopilot" means is what you see in the movies.

When a computerised voice in a sci fi movie says "Autopilot engaged" what does that usually mean?


That you're spending so much time in fiction that you're losing your grip on reality. There's also teleportation, flying broomsticks and miles-long star dreadnoughts in those stories; surely you don't expect those as well?


It’s not about him, but about what your average person believes autopilot means


Does it even have to be about the “average” person? What percentage of the population has ever been exposed to real autopilot systems? For all the rest, sci-fi autopilot is the only reference point. A person doesn’t have to be particularly ignorant not to be exposed to these things; I’d tentatively posit most people growing up in an urban center have never been behind the console of a plane or a boat.


I agree. I think by average I was referring to what you said - most people have never controlled planes or boats. Perhaps “ordinary” would have been clearer.


Point taken.


Nobody is purporting to sell me those things.


I do agree that Tesla seems to be doing their utmost to say something akin to "it could be autonomous, if if if" - and that an average consumer could read that as "it is autonomous, full stop."


> I do agree that Tesla seems to be doing their utmost to say something akin to "it could be autonomous, if if if"

They are not doing their utmost. They're doing the minimum necessary to avoid getting sued. Their salespeople and marketing material are pushing autonomy hard, and being very quiet about the limitations. If you're not following the tech news on these issues and are only going by what the sales guy told you, then the confusion is very reasonable.


Tesla's autopilot function is horribly named and not at all identical to plane autopilot.

Modern autopilots are actually fully capable of handling takeoff, flight, and landing without interaction from the pilot. However, pilots continue to handle takeoff and landing procedures to maintain skills in those areas as generally an autopilot failure during those times would not be recoverable.


> Modern autopilots are actually fully capable of handling takeoff, flight, and landing without interaction from the pilot.

No they're not. The pilot needs to initiate the sequences first, and good luck when something unexpected happens - a Tesla (or any truck/car with auto-distance-control) will brake when a car changes lanes right in front of it, a plane will blare sirens (okay, to be fully correct: it will tell the pilot a preferred course of action to avoid a collision) if another plane comes too close for comfort but not do any evasive action on its own.

Even in a perfectly ordinary flight there are always the ATCs to serve as master controllers for all airplanes... something you also have in railways, but not at all on the road. On the road everyone is on his own.


Modern airplanes are pretty good at avoiding head-on collisions, though - so good that pilots are trained not to override the plane's judgement, and just let it communicate with the other plane and make the best call as to whether to climb or dive...


Capable of handling _each_ of the functions (NOT switching between the modes - "sit on tarmac, punch in destination, go take a nap"; plus I'm not entirely sure if the feature you describe as autotakeoff even exists outside Hollywood), exclusively on the happy path with strict minima, with pilots required to take over at once in case of any irregularity. Highly convenient, but not a self-flying miracle.


The features I have described are available in the Dreamliner and the Airbus A380, and have been more than a Hollywood fantasy for at least a decade...

Self-flying isn't as difficult as people think it is. There are fewer obstacles and risks to deal with.


Nope. There are features such as thrust asymmetry compensation, and you could technically engage the autopilot as low as 200 ft AGL, but I maintain that there is no automatic takeoff available on commercial aircraft (there's something related on catapult-launched fighter jets, but that's quite another world).

If you're claiming a feature exists "in the Dreamliner and the Airbus 380...over a decade" - well, [citation-needed], then: all the sources that I've managed to find are unanimous that it doesn't, e.g. http://www.askthepilot.com/questionanswers/automation-myths/

To quote the pilot above: "That fantasy insists on outpacing reality is perhaps symptomatic of our infatuation with technology, and the belief that we can compute our way out of every dilemma or complication."


Some modern airports REQUIRE pilots to use auto landing if their plane supports it. I know because I have a friend who is a commercial pilot and they don't like flying those routes because it is boring. Most airports let the pilots fly manually if they want.


Possible. What I'm saying is "the plane can do many things, but 1. won't switch modes e.g. from cruise to autoland, and 2. even so, it's just a different way of controlling the plane, it does not fly itself."


True. Although that ignores the fact that even hobbyist pilots and skippers tend to have far more training and discipline than the average motorist. The name might be accurate, but public awareness of what it actually means is terrible.


... so you can pretty much go to sleep, assuming you aren't taking off or landing. Got it.


Before they were in cars, autopilots in planes were there to keep the plane on the approximate course, in good conditions, with the pilot expected to supervise the operation and take over at immediate notice. Difference?


The difference is comparable to the difference in licensing requirements for driving cars and piloting airplanes.


Yes, almost no Americans are pilots, and few own boats. The name seems deliberately misleading to most people, and in a safety-critical application no less.


That much is possible. (OTOH, the phrase "(don't expect any thinking now,) I'm on autopilot" seems to be in common use as well.)


Why aren't we using sonar for this sort of thing? Bats have nailed down the echolocation thing in extremely heavy bat traffic, but I don't see that being something that has been vigorously pursued by tech companies.


Lidar kind of works the same way, but is a lot faster (speed of light vs. speed of sound).

I could also see sonar taking a lot more energy to use all the time as compared to Lidar.


LIDAR has severe range limitations, such that it's basically unusable for highway-speed driving (or at least it was, <mumble> years ago when I worked on autonomous driving), which is why Tesla doesn't use LIDAR.

Looking at Thrun et al.'s paper on Stanley (https://www-cs.stanford.edu/people/dstavens/jfr06/thrun_etal...):

"The effective maximum range at which obstacles can be detected with the laser mapper is approximately 22 m. This range is sufficient for Stanley to reliably avoid obstacles at speeds up to 25 mph. Based on the 2004 Race Course, the development team estimated that Stanley would need to reach speeds of 35 mph in order to successfully complete the challenge. To extend the sensor range enough to allow safe driving at 35 mph, Stanley uses a color camera to find drivable surfaces at ranges exceeding that of the laser analysis."

They built a neural network to analyze terrain and find safe paths - trained it by driving at 25 mph across the environment with the LIDAR providing the control - and then let it work at higher speeds, where the LIDAR would be able to panic stop but not much more.
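
The kinematics behind that tradeoff are easy to sketch. Assuming a panic-stop deceleration of ~0.8 g (a rough dry-asphalt figure, not from the paper) and ignoring reaction/processing latency:

    # Stopping distance grows with v^2, so 22 m of sensor range caps the safe speed.
    G     = 9.81
    DECEL = 0.8 * G  # m/s^2, assumed dry-asphalt panic stop

    for mph in (25, 35, 70):
        v      = mph * 0.44704         # mph -> m/s
        stop_m = v**2 / (2 * DECEL)    # d = v^2 / (2a)
        print(f"{mph} mph: ~{stop_m:.0f} m to stop vs. 22 m of lidar range")
    # 25 mph -> ~8 m (comfortable), 35 mph -> ~16 m (tight), 70 mph -> ~62 m (hopeless)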


I may be way off base here, but hasn't the LIDAR field evolved enormously since 2006 (the date of that paper)? Last I read, prototype solid-state LIDAR systems have ranges of ~150m, although I don't follow this field all that closely.

Given that major players like Waymo/Uber/Apple etc. seem to be betting very heavily on LIDAR, one would assume these challenges are being resolved. I suspect the traditionally very high cost of LIDAR was likely a much bigger factor in Tesla's decision not to use it than any technical reason. Cameras plus radar is vastly cheaper to ship for now.


No, it probably has more to do with your first point: speed.

During the time it could take for a sonar (ultrasonic) sensor to send out a pulse and for it to return, the car may have travelled many feet depending on speed and road conditions. This would impact how long it would take to perform an evasive action and return to a safe state.

That is just a guess, though - I haven't run the numbers to know if it is factual. Lidar, though, is much quicker, as you noted.
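
Running the numbers on that guess (speed of sound ~343 m/s in air; 70 mph is ~31.3 m/s, both standard figures):

    # Ultrasonic round-trip time vs. how far a highway-speed car moves meanwhile.
    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C
    CAR_SPEED      = 31.3   # m/s, roughly 70 mph

    for target_m in (5, 20, 50):
        round_trip_s = 2 * target_m / SPEED_OF_SOUND
        car_moves_m  = CAR_SPEED * round_trip_s
        print(f"target at {target_m:2d} m: echo takes {round_trip_s*1000:3.0f} ms, "
              f"car moves {car_moves_m:.1f} m meanwhile")
    # At 50 m the echo alone takes ~290 ms and the car covers ~9 m; light makes
    # the same round trip in about a third of a microsecond.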


A lot of companies (such as Google/Alphabet's Waymo) are actually looking into lidar applications. Sonar is good, but lidar is vastly faster: light travels roughly a million times faster than sound in air.


Have they overcome the hacking/tricking issues they were having a few years ago yet? https://www.theguardian.com/technology/2015/sep/07/hackers-t...

Side note: I can't help but be reminded of Napoleon Dynamite's Liger whenever I see Lidar. Makes me smile.


How timely. That's the very topic of today's xkcd.

https://www.xkcd.com/1958/

Short version: Regular cars are vulnerable to malicious actors, too, but luckily most people aren't murderers.


The article mentions Tesla's use of ultrasonic sensors.


LIDAR is still expensive, so it's used in the lab (or in test vehicles), but I don't think it has been used in a consumer vehicle yet; consumer vehicles mostly use cheaper eye-like cameras.


High-resolution, 3D lidar is expensive, not to mention mechanically complicated and large, so it doesn't fit well into car designs...no consumer is going to want a spinning top on their car (Waymo's Velodyne lidar).

2D lidar is less expensive, but still isn't cheap enough for the number of sensors that would be needed (multiple sensors would be required for proper coverage if the sensors were incorporated into the car body); they are also still mechanically complex (most rely on spinning mirrors or sensors).

That leaves something like flash lidar, which isn't quite cheap enough (though it's dropped in price significantly in recent years), and I think its resolution is still fairly low - but it does allow for 3D information about the scene, within its field of view.

I have always understood why Tesla took the route of using cameras only. In theory, it should work fairly well, since humans have been using a similar system for 100+ years for driving automobiles. It really comes down to a software problem (vision and deep learning, etc), more than anything.

Camera sensors are inexpensive, and easy to incorporate into a vehicle. I tend to wonder if maybe augmentation with FLIR and other wavelength sensors might not improve things, but even standalone image sensors - in theory - should be completely workable.


Tesla doesn't use cameras only. They use cameras, radar, and ultrasound.


Also, IIRC current LIDAR hardware is rather large and needs to be placed in a fairly open and visible position. The HN demographic may not appreciate this as much as they should, but looks are really important when it comes to selling a car, and a LIDAR rig is unsightly. LIDAR-less may be the only market-viable option.


I'm willing to mount a fake LIDAR on top of my car just for the status symbolism, but I guess we are a niche demographic.


I don't think so. Being able to go home drunk with friends at night without a taxi sounds like a feature everybody would love, even while there's no better solution than having an ugly LIDAR.

Elon Musk is probably right that in the long term it won't be needed, but those few years count a lot.


They kind of do... Tesla uses 12 ultrasonic (sonar) sensors (and cameras). GM uses radar (and cameras).


Tesla also has a radar.


Bats aren't traveling 70 mph, are they?


No, but they're also flying in much tighter quarters.


I think that's kinda the parent comment's point. Just because sonar is good for indoor localization doesn't mean it's got the range for highway speeds. In fact, ultrasound on cars is for exactly this: low speed, close quarters detection. This is why backup sensors or self parking systems rely on ultrasound.


Is there an equivalent of the Reynolds number for this? I'm guessing something based on a characteristic length and velocity? It looks like I want the walking Froude number [0], which is Fr = u^2 / (g·l), it seems...

0. https://en.wikipedia.org/wiki/Froude_number#Walking_Froude_n...
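
Plugging in some assumed ballpark figures (a ~0.1 m bat at ~5 m/s, a ~4.5 m car at highway speed; both guesses, not measurements):

    # Walking Froude number Fr = u^2 / (g * L), with assumed ballpark inputs.
    G = 9.81

    def froude(u_ms, length_m):
        return u_ms**2 / (G * length_m)

    print(f"bat (~5 m/s, ~0.1 m body):  Fr ~ {froude(5.0, 0.1):.0f}")   # ~25
    print(f"car (~31 m/s, ~4.5 m long): Fr ~ {froude(31.3, 4.5):.0f}")  # ~22

Amusingly, under these guesses the two come out in the same ballpark.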


Closer relative to their size? Cars often travel in packs spaced closer together than their own length.


Rain and wind would screw sonar up, no?


> They have a good solution to the "expecting the driver to watch and take over" problem

I've always thought the model itself represents a fundamental design flaw. These systems should either be fully autonomous and responsible, or engage only as active safety measures in accident avoidance.

Tesla has been particularly ambiguous and irresponsible in its messaging/positioning. Naming the product "Autopilot" and really playing up its capabilities, then disclaiming responsibility in the event that the system fails.


According to the article GM has decided to simply restrict the domain of their semi-autonomous offering to only work on divided highways. The author says that the result is innovative and brilliant. But the problems they're trying to solve are not nearly as ambitious or challenging as those Tesla has decided to tackle.

While it can be argued that GM should receive kudos for releasing a feature that works reliably, I don't see it as innovative.

It kind of reinforces the idea that traditional car makers are never going to offer anything revolutionary to the car market. They're going to use the work of others to incrementally add features to cars that they think will increase sales. They won't take risks with technology.


Their decision to limit the initial problem domain is genius. When it comes to autonomous driving, safety is paramount. Limiting it to highways is a good way to ensure that you can maximize the system's safety and performance, while still getting a product to market that is ahead of the competition and which allows you to gather data and iterate to expand in the future.

SpaceX wants to go to Mars. Why haven't they yet? Do you consider them uninnovative? Or maybe they're just taking their time getting the technology right, because they know (1) they're leading the industry already, and (2) if they fuck it up, it would be a highly expensive mistake they probably wouldn't recover from.

If your definition of "innovation" is "irreparably risking your business and customers' lives in the pursuit of profit", I sincerely hope I never have to interact with you in any way in the future.


>SpaceX wants to go to Mars. Why haven't they yet? Do you consider them uninnovative?

The proper analogy is "SpaceX wants to go to Mars, and is pushing aggressively to do so" while a "competitor sees SpaceX's cost cutting for LEO, and is aggressively copying that feature while completely ignoring Mars".

In this case, Tesla is shooting for Mars (Complete autonomy in all situations) and is rolling out each phase.

GM, on the other hand, noticed that Tesla's phase 1 autonomy was marketable, so they copy-pasted a profitable feature without copying the overall goal of getting to Mars (of making an autonomous car).

Every single GM car with this feature will have THIS feature and nothing else.

Every single new Tesla will have today's features AND tomorrow's features, through software updates, because Tesla is attempting to achieve FAR FAR MORE than GM, and is willing to backport those enhancements.

Today GM looks better. In five years, it might look archaic.


Nope, GM already has self-driving cars on the streets here in San Francisco. Remember, they own Cruise.

So, GM has decided to go to Mars, as Tesla did. It has acquired the tech to go to Mars and started testing it. In the meantime, it realized it could copy a page from Tesla's book and put some semi-autonomous vehicles on the road. And, according to this article, did an excellent job.

I'm in no way a GM fanboy (do they even exist?), but could we be fair?


Can confirm: I see Cruise vehicles virtually every day in SF.


"I'm no way a GM fanboy (do they even exists), but could we be fair?"

You didn't even read my post, and you want to talk about fair?

I very clearly state that Tesla is updating new features into cars, while GM is not.

That Tesla gets phase 1, phase 2, and beyond, while GM merely labels phase 1 by a marketing name and will never update it ever.

There is a massive chasm here between buying a car today that will "may take you to mars", and buying a car today that "will absolutely, never, ever, ever be capable of going to mars".

So let's be fair.


> So let's be fair.

GM has a Level 2, maybe Level 3, self-driving platform in its current generation of self-driving cars. It's not claiming that current cars will be able to achieve Level 4 or 5 self-driving functionality. But it is expecting that future generations of its cars will, especially once low-cost LIDARs are incorporated into their platform.

Tesla has, being charitable, a Level 1 self-driving platform. Current Tesla cars don't have the sensors or processing power to handle the demands of Level 3 self-driving, let alone Level 5 (i.e., truly autonomous functionality). Hell, Tesla's current hardware platform doesn't even have the same level of functionality as their original self-driving hardware (using Mobileye), which was just glorified lane-keeping and self-parking, i.e., Level 1 self-driving.

Tesla is playing a dangerous marketing game and deserves to be called on their bullshit.


I read your comment. And I stand by my point.

Even being very, very friendly and interpreting your comment in the way you clarified it, there is an assumption in your comment: that Tesla will be able to bring their cars to Level 5, full autonomy, while GM will not be able to do that. Let me ask why you say so. Just because Tesla, in a marketing move, tells you they will be able to do it, and GM does not? Maybe GM has a legal department that warned them not to do that?

To paraphrase your point: no, there's no difference between a car that, according to the marketing, "will maybe be able to go to Mars" and a car that "has the hardware that could maybe go to Mars, but we prefer not to say so, because we could be liable if we fail". I understand the value of aspirational statements, but still...


Can you point to 1 time a GM vehicle was substantially enhanced through a software update?

If so, your point has merit.

If not, you are hiding behind hypotheticals in a very dishonest way.

I have NEVER heard of GM upgrading existing vehicles with significant new functionality through a FREE software update.

If that is the truth, that they have never done it, never announced it, and yet you come here and suggest that they are in fact doing something that they are not, that is supremely dishonest.

I researched GM for half an hour to find evidence that you are not being dishonest. That you are not hiding behind some really bad hypothetical.

But I found nothing to exonerate your position from the realm of "fantastical hypothetical rendered dishonestly", so I hope you can respond and help me.


Software updates won't add lidar to the cars. Sure, the ability to update software easily is an advantage for Tesla, but they're ultimately just as limited by the hardware on the car as GM is. If it turns out the current hardware isn't good enough to do what Tesla wants, today's cars are going to look just as dated in 5 years as GM's.


Curious though, would you buy this car over a Tesla (basing your decision on the self-driving features only)? The reason I ask is because if I buy a Tesla, I have the idea that it will be completely self-driving one day. With this Cadillac, I would expect it to only do this specific subset of self-driving for the life of the car.


> I have the idea that it will be completely self-driving one day

Why do you think that? Can you explain how this would happen, when the car appears to lack the physical hardware (e.g. LIDARs, computing power) to do so in anything but clear conditions on highways?

Moreover, why would Tesla be interested in doing so? They gain nothing, reduce a customer's desire to upgrade to the latest model, and put themselves into major extra liability if the system ever fucks up.


They would gain a massive fleet of self-driving cars overnight.

Remember, Elon doesn’t think like other industrialists. I fully believe that he views profit as little more than a tool to reach his other goals.

Also, having worked on autonomous vehicles, my opinion is that LIDAR is really overrated, and if compute power is an issue, that can be a straightforward modular upgrade.


Here's the thing: if upgrading the hardware is as straightforward as you say it is, then GM is in a far better position to provide true self-driving in its cars.

After all, it's already got the hardware and hardpoints for installing hardware. Swapping out existing LIDAR arrays for newer arrays, or current processors for optimized self-driving processors, would be a piece of cake. Especially with GM's extremely large supply chain, capable of producing more cars in an hour than Tesla can produce in a year, and the corresponding parts for all of those cars.


> LIDAR is really overrated

Fair enough, although it does have better range than ultrasonic and it can see through rain/fog/snow better than cameras.


Tesla claims every new car already has the necessary hardware, and they're already pre-selling the upgrade. They could very well be lying, of course.


Which really means that a major motivator for Tesla at this point will be the threat of a class action lawsuit for selling a feature that doesn't exist and whose timeline keeps getting pushed back.



Tesla is interested because cars are expensive enough that people don't completely replace them regularly - they might trade them in every 3 years, but they are able to trade them in because somebody else is willing to buy a 3-year-old car. Thus for all car makers an important consideration is resale value - they don't directly gain anything from it, but they know that if their cars have a poor resale value, that translates into a lot of potential customers unable to afford to come back and buy a new car. Even if they do buy a new one, they will look to a different brand with better resale, because that resale value translates into cash in their pocket (there are a few rich exceptions, but not enough to sustain a large company on).

All car makers would love it if they could reduce the lifespan of their cars. I'm just old enough to remember my parents getting calls because a car was about to turn 100,000 miles - they would fill the car with people to see that event - they had to put oil in before and after that 10-mile drive, and you could see the road through the floorboards. Today even the worst cars are expected to make it to 200,000 miles with just minimal maintenance, and still meet emissions - this is to the automakers' loss. However, if anyone tries to go down the road of planned obsolescence, they rightly believe that will catch up to them. GM still hasn't completely shaken that reputation - you can see it when they put a GM name on a Japanese car and it is rated lower in reliability than the maker's version.


> Moreover, why would Tesla be interested in doing so? They gain nothing, reduce a customer's desire to upgrade to the latest model, and put themselves into major extra liability if the system ever fucks up.

This keeps the vehicle resale value high, which constrains demand, which keeps their margins higher than all other auto manufacturers.


> Why do you think that? Can you explain how this would happen, when the car appears to lack the physical hardware

You're thinking like the Javascript Generation. People do impossible things all the time on so-called "outdated" gear. And a Tesla is far from outdated.

It could very well be a hardware problem that can be solved in software.


As a Tesla model 3 reservation holder, I wish they would decide they're an electric car company, not an electric autonomous car company. I badly want the former. I'm not interested in the self-driving part until it's really good. This half-hearted level 2 stuff is uninteresting to me - I see it as dangerous and marginally useful. The fact that it comes with heavy handed control and surveillance by the company makes it worse.

I'd rather wait 5-10 years for someone like Waymo to figure it out and then roll it out than see a car I really want today get crapped up with features that are incidental to its raison d'etre. Honestly, this and the delays may be enough to make me cancel my reservation.


Probably the Cadillac. I have no expectation of a Tesla or anyone else having a 100% autonomous vehicle during the lifetime of current cars, much less one retrofitted to current hardware.


Personally, neither. I strongly believe that the only autonomous tech a general-purpose, manned car should ever have is assistive and emergency systems, not replicative systems: collision braking, lane-keep assist, road-departure warning, adaptive cruise, blind spot monitoring, etc. You can get these on an Accord nowadays.

General purpose cars are used in too many situations for general purpose autonomous systems to operate in. Moreover, the addition of autonomous systems means that the human behind the wheel will be less vigilant about keeping themselves and other drivers safe. I very strongly believe that autonomous systems beyond L2 actually do not make the car safer, except in situations where the driver is impaired; they just make the car more convenient.

L4 has definite possibility for things like Uber, where pickup and dropoff are within the same city. But it seems increasingly unlikely that we'll see L5 within our lifetimes, at least from any ethical company that doesn't oversell what the system is capable of.


Nobody outside of Tesla appears to buy the idea that a car without LIDAR can be fully autonomous within a relevant timeline. I mean, sure, maybe we'll solve full autonomy with only the limited sensors on a Tesla in 20 years, but it's questionable whether you'll still have a car you buy tomorrow 20 years from now.

Tesla claims that they can do it with the sensors they have, but please do note that none of the other autonomous vehicle players -- many of whom appear to have much more sophisticated offerings than Tesla -- think it can be done.


> but it's questionable whether you'll still have a car you buy tomorrow 20 years from now

I know I'm in the minority, but both of my cars that I drive are almost 20 years old (one's a 1999, the other a 2004). Prior to the 2004, I owned a truck from 1996 (ran great until I spun a journal bearing for no good reason).

I also once owned a 1979 Ford Bronco...


> Their decision to limit the initial problem domain is genius

Is it? It sounds like "If you can't think of an answer, change the question."

That's like Microsoft saying "We can't keep viruses out of our computers, so we're not going to try. Instead, we just won't let them access the internet anymore."*

(*Yes, I know that pre-internet C-64's and Apple ]['s had viruses, too. Work with me here, people!)


There is no evidence in this article that GM's system is technically inferior - if anything, the opposite. Whatever the question is, your strained analogies are beside the point.


> Their decision to limit the initial problem domain is genius

I believe it's called "Level 4" autonomous driving. It's meant to drive only on safe routes for which the carmakers have already tested their technology.

In theory, all self-driving cars should be classified as Level 4 right now, because I don't think any of them is good enough for being classified as Level 5.


Levels quantify the degree of autonomy; Level 4 is hands-off and eyes-off.

They do not quantify the domain: a car can be Level 4 in one domain and Level 3, 2, or 0 in any other.

The domain-specific qualification is what all car manufacturers are essentially going for, with the exception of Tesla, which, with all due respect, is using a system that is borderline Level 3 according to the manufacturer, with a single sensor type that was designed primarily for ADAS, and calling it an autopilot.

The Mobileye-powered Teslas will never be certified Level 4/5, and neither will the NVIDIA ones. In fact, with how current regulation is brewing, the current Tesla Autopilot might actually regress considerably more than some of its competitors like Audi and GM, because Tesla wants to provide full autonomy but doesn't have anywhere near the hardware to do so, and as regulation eventually catches up to them they are quite likely to lose it.


As the article notes, current systems are maybe Level 2ish but the guidelines are vague enough that it's hard to say for sure.


Level 4? What are you on about? Level 4 is "no driver attention is ever required for safety, i.e. the driver may safely go to sleep or leave the driver's seat". That's very far from anything on the market today.


> What are you on about?

Please don't be rude here. Your comment would be fine with just the two last sentences.


Sorry @mtgx, that was rude of me.


Or it reinforces the idea (rightly or wrongly) that traditional car manufacturers won't play unnecessary games with people's lives.

Regardless of what you, I or others in 'our' sphere might think, there's a lot of traction in playing the conservative card when it comes to cars hurtling down roads.

As someone in a big city who drives a bit, cycles a bit and walks a lot, I can't help but feel we're still a long way off high autonomy in cars outwith constrained environments.


Unnecessary games with shareholders' money. FTFY.

What history has told us is that people's lives really aren't the concern.

https://en.wikipedia.org/wiki/General_Motors_ignition_switch...

https://en.m.wikipedia.org/wiki/Grimshaw_v._Ford_Motor_Co.


Part of the reason more mature car manufacturers are conservative is because they've gone through lawsuits and recalls.


This statement: "traditional car manufacturers won't play unnecessary games with people's lives" should be amended with "unless the games will save some money, and won't generate bad PR"


I already trust self driving cars more than random motorists.

Slowing down the deployment of self driving cars is likely to cost far more lives than it saves. The difference is we don't create national news for the several people that will be killed today on America's highways.


You should probably reconsider your trust in the face of data:

https://blog.piekniewski.info/2018/02/09/a-v-safety-2018-upd...

But who cares about data, if it is all about AI singularity religion?


Disengagements vs. crashes is a meaningless comparison; those are systems in the middle of testing, not deployed systems. We have real-world data on self-driving cars, and per mile they have statistically saved lives already, and are only improving with time.

Still, if you want a real comparison: my parked cars have been hit 5 times, most of which do not show up in accident data, because I only reported one of them; the rest caused minor damage. Similarly, I know of multiple cases of people driving off the road and not reporting anything because they did not cause significant damage. I even know of several cases of road accidents not being reported due to minor damage.

PS: Tesla's self-driving cars get a lot of flak for not being ready, but they have dramatically lower accident rates than non-self-driving Teslas.


What self driving data do you mean? There are no self driving cars in deployment right now. There are various semi-autonomous safety features that _REQUIRE_ full attention from the driver.


You don't actually need to pay attention in a Tesla when it's on the road. It's closer to a dead man's switch on a train than a real test of attention. Except you don't even need to keep your hands on the wheel, as it's really easy to fool.

So, your argument falls flat.


LOL, all I can say to this, is that perhaps you should read your Tesla manual. I sincerely wish you luck with your driving, just watch out for fire trucks parked in the emergency lane...


The same people will also turn around and say "but it's not really autopilot!" when faced with challenges from the other direction.


Now there's a poor attempt at a tu quoque fallacy if I ever saw one. It's both incorrect and meaningless.

Some people may take that view, but I don't. It's not great, but good enough for deployment and easily qualifies as autopilot.

Really, aircraft autopilot is not designed to be used without pilots. Yet it's still good enough to be used significantly more than 50% of the time on commercial flights and to increase overall safety. So even a highway-only car autopilot is plenty useful. Now, if you mean it's not designed to be Level 4 automation, then sure; it's got a steering wheel, unlike several buses in production.


What an ungracious way to admit you're wrong. I am simply pointing out that 'paying attention' is like keeping tires properly inflated: it's something many people are not going to do, and thus part of the real-world data.


> Tesla's self-driving cars get a lot of flak for not being ready, but they have dramatically lower accident rates than non-self-driving Teslas

I'm interested in seeing the stats for this if you have them, because I've previously looked and been unable to find anything to back this up.

To be properly comparable, we'd need to compare human drivers in the same situations in which people use Autopilot - i.e. less challenging situations like highway driving.

This article covers some of the problems with Musk's pronouncements on this matter: https://www.csmonitor.com/Business/In-Gear/2016/1014/How-saf...


Measuring disengagements for pre-release autonomous vehicles seems like a terrible way to measure safety of what's actually available to the public now.

Here's a better study from the NHTSA that measures the safety impact of Tesla's autopilot features. They found that Tesla vehicle crash rates dropped by almost 40 percent for cars equipped with autopilot: https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF


You are mixing things up. Comparing the safety of a human-driven car against a human-driven car with an additional safety feature is not the same as measuring the safety of an autonomous vehicle. Contrary to its name, Autopilot is not an autonomous driving feature. It is a "smarter" cruise control. And if the drivers remain attentive, it can improve safety, as seen in the statistics you cite. If the drivers are not attentive (and treat it like an "autonomous" feature), they end up like Joshua Brown.

And BTW, I'm not saying I'm measuring what is available on the market. I simply point out that there is no evidence yet that _fully_ autonomous vehicles have exceeded human safety levels by any stretch of the imagination.


"Fully" autonomous cars (by your definition) have zero fatalities to date. Considering miles driven that is significant better than human drivers even paid human drivers if you look at trucking death statistics.

Now, driving a semi may be more dangerous than a car, but it's hard to argue it's vastly more dangerous. Further, people are not uniformly good drivers, a self driving car is vastly better than someone that so drunk they have trouble standing.


I'm sorry to disappoint you, but again the data does not support your claim. The cumulative number of miles driven by autonomous test fleets to date is nowhere near 100 million miles, which is where one should expect the first fatality from an average human. So no, your claim is empty.


First, US numbers are greater than 1.25 deaths per 100 million miles. Self-driving cars are well over 10 million miles, but not 80 million. Even then, 0 < 0.1; even if it's not great evidence, it is still evidence.

However US is not the only country, they have easily passed the accident rate in Brazil.


It is 0.99 deaths/100 mil in California, but fair enough. I'm not sure where you got your "well over 10mil miles" data. Last time I checked, Waymo clocked 3mil, Uber 2mil, Cruise <0.5mil, and the rest are small potatoes, so it looks more like maybe 6-7 million miles to date at best (though if you can provide a reference that would be great). But that aside, you are missing one more important point:

Current autonomous cars still have a backup driver. So what we are measuring here is the compound safety level of autonomous tech plus an attentive, professional, sober human. The real data of interest is the safety level of the autonomous vehicle alone. We don't have that data. We can proxy it by looking at the number of disengagements and their severity, and that data currently does not look particularly good. But nonetheless it is just proxy data. Once larger-scale tests without backup drivers are concluded, we will get a better picture. Until then, I advise withholding statements such as "autonomous vehicles are much safer than humans", because they are simply not supported by any data.

As for your final statement, I bet there are many undeveloped countries or particular cities with huge numbers of deaths per mile. But merely beating their death rates on US roads is not anything to be celebrated.


https://medium.com/waymo/waymos-fleet-reaches-4-million-self... 4 million in early November 2017 with 1 million miles taking 6 months. https://www.engadget.com/2017/11/27/waymo-autonomous-cars-dr...

While it took the company 18 months to reach one million, then 14 to reach two, then 8 months to reach three and finally six months to reach the four million mile marker.

Call it 4.7 to 4.9 million miles today.

Uber went from 1 to 2 million miles in 100 days and were over 2 million in December. So they are likely around 2.5 million today, unless they slowed down significantly.

However, there are actually a surprisingly large number of self-driving car initiatives worldwide. Though apparently EasyMile, which sells slow Level 4 automation buses in 20 countries, is only at ~100,000 miles, which surprised me. Still, there are several competitors in that space.


> Regardless of what you, I or others in 'our' sphere might think, there's a lot of traction in playing the conservative card when it comes to cars hurtling down roads.

Every day that goes by without a shift to autonomous driving, we lose 70 or so people in the US alone. Why this isn't being treated as a Manhattan- or Apollo-scale project, I don't understand.

A conservative approach is not what's called for here. We can afford to break a few eggs on the way. Sounds heartless... but sorry, that's what the math says.


This is the same problem current safety tech already has. We have cars that can stop themselves in most situations already - things like that + lane detection + distracted/sleepy driver detection would do wonders if everyone had them even without full autonomy.

But they also all add cost to new vehicles, and the average age of vehicles on the road reflects that. It's been climbing, looks like 11.5 years now.[0]

Looks like there are about 260M cars in the US, and on average only ~18M new ones are sold each year [1]. Let's go conservative and say we just need to replace 200M cars with current-or-near-future-tech ones to save 30K lives a year, and let's say a $30K car has the level of tech we want. 200M * $30K = $6 trillion.

But car crashes are like 2% or fewer[2] of all deaths in the US. So that's a massive cost that might be much better spent elsewhere. It's not that we can't make cars safer - we can already do that! - it's that there are some much more appealing ways to spend the money if approaching it from a non-commercial perspective.
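
Spelling out that arithmetic, amortized over an assumed ~12-year span (roughly matching the 11.5-year average fleet age above):

    # Cost per life saved under the comment's round numbers; 12-year span assumed.
    fleet_size     = 200e6  # cars to replace
    cost_per_car   = 30e3   # dollars
    lives_per_year = 30e3   # optimistic annual deaths avoided

    total_cost    = fleet_size * cost_per_car           # $6 trillion
    cost_per_life = total_cost / (lives_per_year * 12)  # amortized over 12 years
    print(f"total ~${total_cost/1e12:.0f}T, ~${cost_per_life/1e6:.0f}M per life saved")
    # ~$17M per life, well above the roughly $10M "value of a statistical life"
    # US agencies tend to use - which is the point about better uses of the money.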

[0] http://www.latimes.com/business/autos/la-fi-hy-ihs-average-c...

[1] https://www.statista.com/statistics/183505/number-of-vehicle... and https://fred.stlouisfed.org/series/TOTALSA

[2] https://www.cdc.gov/nchs/fastats/deaths.htm vs https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


Furthermore, I assume that many cars on the roads today don't have the most modern airbag systems and passive features that increase survivability in a crash. To say nothing of the fact that they're simply old, which certainly doesn't help their likelihood of avoiding a crash or surviving one.

I expect that if every car in the US were replaced with a new one with most of the available modern safety features, you'd reduce the current fatality rate by a very significant percentage.

One of my vehicles is quite old. But I'm not going to replace it just to get new safety features.


I read the article, and my takeaways were very different from yours. While each of us homes in on different aspects of the same article, I am wondering how you arrived at some of your conclusions. In particular:

1. Where does it say anything about the challenges that GM is trying to solve long term?

2. What about this release indicates that GM haven't set their sights higher?

If you have evidence to back your statements, please share. I don't have any insider GM knowledge, but from the outside looking in, this is a genius move by GM and the very essence of an MVP release. Rolling out with a limited set of capabilities that work as advertised is better to me (as a driver) than launching with a larger set of half-working features.

If GM can figure out a way to weave learning capabilities into their platform with their production cars providing training data for this feature, that will put them on the fast path to competing strongly in this market.

Edit: Minor word-smithing


> It kind of reinforces that traditional car makers are never going to offer anything revolutionary to the car market. They're going to use the work of others to incrementally add features to cars that they think will increase sales. They won't take risks with technology.

I don't really want them to. They probably wouldn't be very good at the software.

What I want to see is software companies developing the software, which is their specialty. The large automakers can license that software and churn out millions of cars based on it, which is their specialty -- knowing how to consistently build millions of cars and avoid production snafus is the big advantage the big automakers have over the smaller manufacturers like Tesla.

I think that turn of events would get us the best possible software and the best possible cars running that software, and it's probably the only way that enough autonomous vehicles would be built for every driver to be able to own one.


> I don't really want them to. They probably wouldn't be very good at the software.

RE: Mercedes cars have 60M lines of code. The Daimler CEO said it himself: "We are a software company."


I'm sure they write lots of code. But I've heard, from people who would know, that the code quality in cars is pretty bad. That's what I meant by not being good at it.


Yes but as far as I can tell GM SuperCruise hasn't killed anybody either. 'Move fast and break things' doesn't seem like such a great philosophy when those things are people's lives.


It's only available in a small handful of cars and only works on certain roads. I have no doubt that one of these super-cruisers will be surprised by something and crash into it once they start racking up miles. Driving is dangerous.


You think Tesla will ever get sued over Autopilot?


Given some of Tesla's past behavior, I suspect any such attempt would involve any of the following:

- pulling of full vehicle logs, prior to any subpoena/ discovery, and publishing of press releases with Tesla's "findings" on the driver's record

- disabling of Autopilot for that vehicle, if not the vehicle entirely, for failure to adhere to the EULA, which probably has "binding arbitration" clauses


Not going to be effective if the plaintiff is a third party injured by a Tesla-induced crash.


Driver-induced crash. Tesla ships a level 2 system.


There's GM's Supercruise, talked about in the article, and then there's Cruise Automation, a startup acquired by GM which is focusing on robotaxis, and is at once more capable and more revolutionary than anything Tesla is doing.


> and is at once more capable and more revolutionary than anything Tesla is doing.

Yes and no.

Part of the reason Tesla is behind is that they're gimping their engineers by focusing on optical-only solutions.

IIRC, Cruise Automation uses LIDAR, which gives better sensory data. IMO, this is where AI really has an edge: when it starts to use sensors that humans don't have. LIDAR accurately maps the distance from the sensor to every physical thing within a certain radius.
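
As a rough illustration (a minimal sketch, not any vendor's actual API), each LIDAR return is essentially a range plus two beam angles, which converts directly into a 3D point the planner can reason about:

    import math

    def lidar_return_to_xyz(r, azimuth, elevation):
        """Range in meters, angles in radians -> (x, y, z) in the sensor frame."""
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        return (x, y, z)

    # e.g. something 20 m out, 10 degrees to the left, level with the sensor
    print(lidar_return_to_xyz(20.0, math.radians(10), 0.0))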

Solving the problem with LIDAR isn't "revolutionary" per se. It's kind of the obvious way to do autonomous cars. It is, however, a more expensive approach.

I guess the end results are all that matter, however. At that point, I am more bullish on any LIDAR approach. If LIDAR makes it easier to code and more reliable for AIs to detect where they are in 3D space, it should be used (and then you try to solve the mass-production problem to bring down the cost of LIDAR units).

I think we've seen the limits of the optical-only approach from Tesla: https://www.wired.com/story/tesla-autopilot-why-crash-radar/


>solving the problem with Lidar isn't revolutionary per se

Solving the problem in any capacity would be revolutionary. Autonomous vehicles operating in a commercial capacity on public roads technically don't yet exist.

There's the old saying: "it's easier to optimize a working system than it is to get an optimized system working."

Outfits using Lidar (and using about an order of magnitude more compute than what's going into Tesla's HW2) are much closer to solving autonomy than Tesla, and in time these systems will get cheaper.


However, you can see how an optical-based system should in theory work, because all our roads and signs are designed for optical perception and have never assumed LIDAR. If a human can do it optically, one can reason, so should a machine. That said, I think Tesla is making a big assumption about human optical systems and object recognition and our current ability to replicate them. One of the things my undergraduate classes in AI taught me is that people tend to underestimate the sophistication and complexity of human neuro/cognitive systems. We are only consciously made aware of the simplified end results and are often unaware of the complexity of the underlying sub-systems.


Yeah, I would never argue that an optical system won't ever work; I believe it will someday. But while the best optical/radar systems have shown decent capabilities in demonstrations of straightforward driving situations, they're nowhere near having the kind of redundancy needed to safely remove a human from the driving task altogether.

With Waymo's robotaxis, which are the closest to a pilot commercial deployment, no expense is spared: they're riddled with sensors, they have giant computers in the trunk, and there's an air-gapped backup computer that can take the vehicle to a minimal risk condition should the primary fail. They are maintenance-intensive and require careful oversight; Waymo (and GM) are building call centres filled with remote human monitors who can intervene when the vehicles get hung up, and the vehicles are geo-fenced. They are far from being cheap enough or idiot-proof enough to put in the hands of private owners.
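
A toy sketch of that primary/backup idea (the names and the threshold are my assumptions; the real systems are vastly more involved):

    import time

    HEARTBEAT_TIMEOUT_S = 0.2  # hypothetical threshold

    def backup_watchdog(last_heartbeat, execute_minimal_risk_maneuver):
        """Runs on the backup computer: if the primary stops heartbeating,
        take the vehicle to a minimal risk condition (e.g. pull over, stop)."""
        while True:
            if time.monotonic() - last_heartbeat() > HEARTBEAT_TIMEOUT_S:
                execute_minimal_risk_maneuver()
                return
            time.sleep(0.05)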

Assuming Tesla sticks to its guns, I'm thinking maybe in 2025 we can revisit that and see how far Tesla has come towards full autonomy.

During the Waymo vs. Uber trial Judge Alsup asked Waymo CEO John Krafcik if Tesla uses Lidar.

"Not yet" he replied.


>They won't take risks with technology.

Better to say taking risks with human lives. This is not a framework or an app, where a crash just means upsetting customers because they lost some work; this is about human lives and expensive damage.


I think my point was that GM wouldn't have this product to sell in the first place unless someone else already took those risks. They want to sell the product that someone else made possible.

Now, as others have pointed out there is a market for that, and a great many people prefer the feeling of reliability they get from GM's offering. I just wouldn't call that innovation.


Google had worked on self-driving cars for a while; the problem was probably that the hardware was not ready. Tesla's AI takes too many risks, in my opinion, because it is missing important sensors (IMHO); it crashes often into big trucks or buses, proving the optimizations they make are not safe enough. I prefer a company that is not risking my life by selling me a driver assistant that is not that good and is badly named "Autopilot".

I have a feeling, though I could be wrong, that you think Tesla invented the AI for self-driving cars and GM and others are profiting from it. Self-driving AI was studied and worked on before Tesla; Tesla is in the group of companies that risk the lives of others, where others waited for the tech to get better. Also, other companies like GM already had driving-assist functions; they are now upgrading them with new features.


The level of innovation is not determined by choosing an ambitious goal, but by actual progress towards it. AFAIK, Tesla and GM are working towards the same goal.

The author, who has quite a bit of experience with both systems, and apparently likes both of them, writes "Tesla is vague about [deciding whether it can handle the situation], and often errs on the side of What the hell, let's give this a shot." (author's emphasis.)

I would characterize GM's domain restriction as prudent rather than innovative, but that choice is not necessarily indicative of what has been achieved in R&D.


>It kind of reinforces that traditional car makers are never going to offer anything revolutionary to the car market

Oh come on. Not having to keep your hands on the wheel while driving at 65mph for three hours down a four-lane divided highway is absolutely "revolutionary". Is it as big as full autonomy? No. But it's no slouch either, and it's kind of silly to claim that GM (which in this case is really the acquired Cruise team anyway) "is never going to offer anything revolutionary to the car market".


GM bought Cruise. How is that not taking a risk with technology?!


(Slightly off topic) Has anyone else been around the Cruise Automation self-driving cars in SF? I have, and I can't believe they are allowed on the road given how poorly they drive.

While biking to work today, I had a Cruise car behind me, driving at about 7 mph and randomly stopping in the middle of the road for no reason at all. Lots of cars kept honking at it the whole time.

Last week I saw a Cruise car start to make a left-hand turn at an intersection, but its sensors must have thought someone was in the crosswalk, so it stopped in the middle of the lane of oncoming traffic, causing other cars to stop abruptly and start honking.

If this was a normal car, I'd call 911 to report a drunk driver. I wonder if I should just call 911 for a dangerous driver in this case. Is there any way I can report this behavior and get their cars off the road?

In case anyone wants to do some investigative reporting, all their cars come out of a garage labeled Borden Decal Co. (https://goo.gl/maps/F85WPTwkmbQ2). If you follow them around you can probably observe similar behavior.


Yes I always get a good laugh watching the Cruise cars trying to drive. They are not smooth and do weird things that a human driver would only do when drunk.


> I wonder if I should just call 911 for a dangerous driver in this case.

That might actually be a good idea: call 911, or whatever the local number is for drunk drivers. See if the robo-taxi is able to pull off the road for the police.


It's kind of a weird review. For example it rates Cruise's lane changing superior -- "as perfect as it gets" -- because it simply requires you to do it manually. (The "perfection" is in how the UI clearly communicates auto/manual transitions.)

I mean say what you will but that's just a weird way to rate autonomous driving systems.


If you had used this feature on a Tesla you would understand.

Cruise ranks 0 (it gets out of the way) but the Tesla ranks -5 (actually dangerous to use).

The Tesla dealership literally recommended leaving the feature turned off.


Fine, then give them both Fs, don't call 100% manual lane changing "perfect".

The reviewer also experienced frequent highway disengagements with the SuperCruise system, which you can observe in the video, yet he still rated it higher than Tesla while citing only one Tesla accident. That's also weird!


I have a 2015 Tesla S 70D with Autopilot and I use the lane changing feature. It works rather better than I expected when the lane markings are clear.

Perhaps the fact that I did not expect it to be good, and therefore went into using it rather carefully and sceptically, helped me use it safely. The feature I was specifically interested in when I specified the car was normal traffic-aware cruise control.

I always look in my mirrors and so on before triggering the lane change and when the markings are not clear I pay even more attention. I use it more on British motorways than Norwegian ones because the British ones have much clearer lane markings and both I and the car can see them better.

The display of lane markings in the dash makes it very clear when the system is confident about the information it has and I bear that in mind when I trigger the lane change. I treat this feature as just another feature of the car, like anti-lock brakes, etc., good when used properly, dangerous when pushed beyond its limits.

I remember when anti-lock brakes were introduced that some motoring commentators worried that people might allow themselves to get into more challenging situations than they would do without ABS and thus be at risk of going beyond what the automation could handle and lose control. It seems they worried unduly.

Lane keeping, lane changing, etc., are features that any sensible driver can use to advantage when it is safe to do so. If the driver is not sensible then I'm not sure that they will be any better or worse when using or not using the feature.


I'm really confused. When describing the "Operational Domain", the author explains that a Tesla he drove 18 months ago handled a sample drive on the FDR with two disengagements, while the Cadillac wouldn't stay engaged when he tried the same drive. Then he says the Cadillac wins by a hair. How does a brand new system that doesn't work reliably win over a system that's been working better for over 18 months?


I think the point of the whole article is that GM made sure people can only use the system when it is capable of working, whereas Tesla's system can be used when it shouldn't be. He gets very distracted throughout and incidentally mentions how good Tesla's self-driving is.

As an aside, I prefer Tesla's strategy to GM's. In 10 years, when the tech is there, will this Cadillac simply handle freeways better? It seems like the car is locked into a specific subset of self-driving for the life of the car.


The whole field of modern "AI" is about 6 years old (dating from when AlexNet won ImageNet), so 18 months is quite a long time.

More importantly, a review of performance at a single point in time is a reasonable thing to do, just as a more long-term, reliability-oriented one would be.


This is a huge misconception. Modern AI is at least 30 years old: the first convnets were conceived in the 1980s, LSTMs in the early 90s. A lot of work went into deep learning before AlexNet eventually won ImageNet. To some degree, the last few years were not the "emergence" of deep learning but rather the culmination of many years of research and development.


I agree with you about the theoretical origins, but you'll agree that we didn't have anything remotely as deep or as effective before AlexNet, and that the topic didn't get remotely as much attention.


I'm not sure most readers know this, but SuperCruise has nothing to do with Cruise Automation. SuperCruise was a GM project long before Cruise was a company or was acquired by GM. Different technology and teams altogether.


I'll be driving manually for a long time. I have no interest in beta testing anybody's autonomous systems at 70mph.


You're part of the beta test if you're on the road with any car using autonomy.


Yeah, well, you're also beta testing some 16-year-old's brain on 6 pints of beer, or some 90-year-old's brain that missed its afternoon nap.

You're beta testing a lot of shit on the road. I pray for the day when the only thing we're really testing is the software, and not some idiot's brain.


My driver has had to catch the wheel on more than one occasion. When we're in the carpool lane and the lane opens up for other cars to merge in, the Autopilot doesn't see the edge of the lane and starts swerving into the other lane. Also, we did a trip from San Diego to LAX, and the dividers come pretty close to the edge of the lane; we were always a little nervous the car was going to hit them. Overall it rides so smoothly, but those few glitches make me quite nervous sometimes. It is kind of surreal to be riding in an almost self-driving car. Tesla is taking steps to add safety features: lately, when my driver sped up to overtake a car, the Autopilot disengaged and wouldn't re-engage until we put the car in park.


I see many people using this reasoning: because there are people who drive drunk, text while driving, or get distracted, we need the autopilot.

I think this is wrong. We just need AI or other tech that detects these bad or distracted drivers and doesn't let them drive or start the car, or, for distractions like texting, safely stops the car and maybe reports the driver. There must be solutions other than letting bad drivers or bad AI drive around.


Perfect, let's report all the drunk people! I'd love to live in a world where my car reports me to the authorities whenever I don't obey its definition of good behaviour! \s


Why is it not fair to have more ways of keeping drunk people from driving, instead of depending on policemen to stop you? Is your need to drive drunk more important than the lives of others?

Let's assume your car won't start if you're drunk; there would be a fail-safe mode you could activate in an emergency to start it and drive to a hospital. In case the car is broken and reports a false positive, we could have rules so you get compensated by the people at fault for the false positive.

Your argument is like saying that forcing people to learn to drive is too much.

My point is that the bad-drivers problem has more solutions than just a less-bad AI autopilot.


My argument is that people should stop and think before asking for something like this. If the car is capable of detecting drunk people and shutting itself down, it could do so for any reason the car manufacturer likes, or, even worse, the manufacturer could sell access to that system to the NSA, your bank, your insurance, whatever.

Did you really assume I'd like more drunk people driving around?


I accept your concerns, but the problems you raise could be solved with laws, right? You have laws for medical data; you can make laws for your car's data. Teslas and other cars can already track you and send your data to the NSA; is there less potential for bad things with these automated cars?

Again, my point is that you can reduce car accidents with solutions other than the sexy self-driving-car solution, which is better than a drunk or texting driver but worse than a beginner.

Let's assume we are not in a bad country with bad laws and evil companies, and we want to reduce accident fatalities starting next month. Could we create some hardware and software that:

- checks if the driver is drunk

- checks if the driver is paying attention: not texting, not having a medical problem, not sleeping at the wheel

My opinion is that we could implement that (something like the sketch below), and it would be better than having an AI that hits stopped trucks, or that gets updated monthly with who knows what new features that could make it worse in some cases.
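
Purely as a hypothetical sketch of the gating logic I mean (the breathalyzer interlock and gaze camera here are my assumptions, not any real product's API):

    US_LEGAL_BAC = 0.08  # typical US blood-alcohol limit

    def may_start_engine(bac, emergency_override=False):
        # breathalyzer interlock: refuse to start for a drunk driver,
        # with the fail-safe override proposed above
        return emergency_override or bac < US_LEGAL_BAC

    def check_attention(eyes_off_road_s, speed_mph):
        # gaze camera: warn, then stop safely, if the driver looks away
        # (texting, medical problem, asleep at the wheel) for too long
        if speed_mph > 0 and eyes_off_road_s > 2.0:
            return "warn_then_safe_stop"
        return "ok"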

What I would suggest is to apply the scientific method: create these devices, find some people to use them (give them tax exemptions or something), then look at the numbers. If the NSA is a danger in your country, then I don't think Tesla has any less potential for harm; I would fix the NSA problem, like finding a way to make them follow the law and not spy on everyone. If the companies are bad, then make laws and don't let them track you or sell your data. Many lives are being lost, so we should try more solutions and not give up because of the NSA or bad companies.

Until self-driving cars are as good as a regular driver, we should not allow them to drive just because they are better than a bad driver.

Edit: I am sorry I accused you of wanting to drive drunk; that was not OK. I read your comment as "I don't want some law to add a device to my car to prevent me from doing illegal things on public roads." I am also against companies that collect your data; I am just frustrated by people who want the suboptimal solution of bad self-driving cars to the problem of drunk or tired drivers.


Sometimes I wonder how long it will take for these technologies to become available to poor people in Africa.


So I guess now we are settling for semi-autonomous vehicles. This would have been impressive 10 years ago (maybe more). They are just playing catch-up when they could have been innovating all along.


I think it's more that there is a big dangerous gap between semi-autonomous vehicles and fully self-driving cars, and car companies are trying to figure out where to draw the line. Tesla, I think, takes too many chances, and GM probably is erring so far on the side of caution that most people won't use it.

Personally, I don't really have any use for a self-driving car that makes me pay attention to the road. The primary "cost" of driving isn't that I have to move the steering wheel a bit; it's that I can't do something else while I'm driving.


Kind of a pointless comparison when one of the vehicles compared isn't and never will be commercially available...


It's already commercially available on the Cadillac CT6.

http://www.cadillac.com/sedans/ct6-sedan


Not the Cadillac, the Tesla. The article compares the no-longer-available first version of the Autopilot, powered by MobilEye.


"Head to head" is an unfortunate way to compare automobiles, like playing chicken.

https://www.youtube.com/watch?v=sxooLC9EwII



