Imagine being on the team involved in the development and reading through this write-up - I'm sure it would bring a smile to their faces. "Got this right, got that wrong."
I was once on a team that had a product with some crypto features that was under attack.
There was a hackers forum where daily discoveries were discussed.
It was indeed exhilarating, and exactly the way you describe it: on one hand, we didn't really want the thing to be cracked, but on the other it was impossible not to be rooting for those scrappy hackers going down the wrong path at first before figuring out the right one, one step at a time. Every morning, we'd log in to that forum to check their overnight progress.
Still, we were confident that our bank-strength crypto algorithm would prevail.
It did not. :-)
While we had done our due diligence, an external implementation partner had decided to change the audited code later in the process, which broke things completely, in the most embarrassing way.
In the end, it didn't matter, and we probably sold a few more units than we would have sold without the broken crypto.
It was summer 2001. The 3Com Audrey internet appliance wasn't quite canceled yet, but it wasn't thriving in the marketplace, either. I picked one up for $50 from uBid and started poking at it, blogging about my discoveries along the way. I figured out that the OTA image checksum was relatively simple to generate (something like a running two's-complement sum of 32-bit words that needed to equal a specific constant). This enabled me to alter the next OTA image that came off the wire, which allowed me to begin replacing resources and binary components in the system.
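From that description, forging the checksum amounts to re-balancing a running 32-bit sum. A minimal sketch of the idea in Python, assuming little-endian words, a known target constant, and a spare padding word you are free to overwrite (all assumptions for illustration - the real Audrey format surely differed in the details):

    import struct

    TARGET = 0x00000000  # hypothetical target constant; the real value was something else

    def checksum32(data):
        # Sum the image as little-endian 32-bit words, modulo 2**32.
        total = 0
        for (word,) in struct.iter_unpack("<I", data):  # data length must be a multiple of 4
            total = (total + word) & 0xFFFFFFFF
        return total

    def fix_checksum(image, pad_offset, target=TARGET):
        # Rewrite one spare 32-bit word so the modified image still sums to `target`.
        patched = bytearray(image)
        struct.pack_into("<I", patched, pad_offset, 0)            # zero the pad word first
        correction = (target - checksum32(bytes(patched))) & 0xFFFFFFFF
        struct.pack_into("<I", patched, pad_offset, correction)   # absorb the difference
        return patched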
By this point people were following the blog, occasionally linking to it on Slashdot. But then something strange happened: I started receiving anonymous emails from individuals who knew a lot about the Audrey OS's internals. They gave me hints about where the more vulnerably coded parts of the system were; they suggested hard-to-stumble-upon but valuable URLs in the Marimba OTA system; and one day when I foolishly overwrote the bootloader with a malformed image, just a couple days later a brand-new Audrey motherboard mysteriously appeared on my doorstep (at the time I didn't live very far from 3Com, and this was an age of the internet when nobody had much reason to hide their identities).
Thus, with the help of numerous knowledgeable individuals, I was able to bootstrap the Audrey modding community. The focus of the world changed soon thereafter, one Tuesday in September, but it was otherwise a similarly exhilarating experience to the one TomVDB describes. The difference was that I was the hacker, and the opposing team -- the one that had built this amazing machine (which I still have two of, by the way, both unopened) -- actually wanted me to succeed, hopefully giving the Audrey a better chance than it got during its initial, ill-fated, mid-dot-com-bust launch.
Sounds very interesting. This being Hacker News, are you able to elaborate a bit on the specifics - the type of crypto and how it was broken?
I wonder what the "most embarrassing way" would mean in this context -- I'm thinking timing attack or padding oracles, but it sounds like it might have been even more trivial.
Very close to a real world scenario. I usually bring it up to compare big teams to small. (Our small team was being replaced by a big, expensive team, and I found an issue in 15 minutes that the new team had created - despite sharing my findings immediately, it still took their team weeks to find it "on their own", admit to it, and finally fix it.)
There was a CAPTCHA used to prevent bot spam on a contest entry portal. The code that generated the randomized image had been modified so that the result was stored in an application cache that persisted across sessions (meaning every "user" saw the same image and could reuse the same answer). Guess how useful that was in preventing bot spam?
(The fix was to delete one or two lines of code that were not only not helpful, but obviously harmful!)
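Reconstructed as a toy Python sketch (the names are hypothetical and the real system cached a rendered image rather than a string, but the effect is the same):

    import functools, random, string

    def generate_captcha():
        # Produce a fresh random challenge for one session.
        return "".join(random.choices(string.ascii_uppercase, k=6))

    # The harmful "optimization": memoizing the generator at application scope,
    # so the challenge is computed once and then served to every session.
    cached_generate_captcha = functools.lru_cache(maxsize=1)(generate_captcha)

    # Every request now sees the same challenge, so one solved answer works for
    # every bot. The fix is simply to drop the caching layer and call
    # generate_captcha() per request.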
I work in aerospace and it is always interesting to read people speculate on how flight-critical software/hardware is developed. Usually the least upvoted comment is the one that gets it the most correct.
I used to work on spacecraft flight software and I agree. Some people are so off base it's not even funny. I used to correct them but that nearly always leads to a pointless argument.
One of my favorites was when a reaction wheel failed on the Kepler spacecraft in 2013. Someone on Reddit declared that the mission was over with no chance of recovery. I kept my mouth shut. But I knew people down the hall from me were working on a solution. Kepler ended up observing for 6 more years.
An interesting observation, and one worth knowing. I ought to have guessed, given how off-base public discussion of other things often is, yet I forgot to apply Gell-Mann Amnesia.
Which forums would you say are worth lurking in? This one? The Stack Overflow family? Are any specific subreddits not-terrible?
This forum isn't really that good for space discussion. There are definitely people in the aerospace industry, but not very many, and mostly in software. You really need some people who are aerospace engineers to have the best discussion. Lots of flight software algorithms begin as MATLAB/Simulink or Fortran code from an aerospace engineer that is then rewritten by a software engineer.
I've seen some good discussion on Stack Overflow, but I don't regularly lurk. I do lurk on satobs.org - occasionally there is very interesting discussion there.
One thing I've learned from reading Medium blog posts recently on programming is that there are some truly misinformed people out there that write extremely well-worded posts describing why their language or framework of choice is better than anything else. If you're just entering the field, you might take their advice as gospel, and think they're really knowledgeable, but they're not. It's just cringeworthy to see the vast amount of bad advice out there.
The most click-baity "ten reasons why you should use my favorite JS framework" posts seem to win, and it's troubling. We will soon be a world full of dumbed-down workers who can't experiment and think for ourselves, but instead get advice from whatever click-bait article included enough buzzwords to get recommended by an algorithm. Whatever happened to tinkering and figuring it out on your own?
I work in games and it's fascinating reading people tear down our design, network protocols, or protection mechanisms. It's really interesting how people reason about those things in the absence of source information :-)
I had a friend who was deep into that world, and it was really exciting to discuss with him how things were progressing at any given point. Every once in a while I'd get either a new card, or chip, or some contraption to stick into the slot where the card went (I believe this was a way to prevent DTV from sending a signal that would fry counterfeit cards) – anyhow, I haven't kept up with things for a long time, but it's kind of my assumption that now it's impossible to hack, or not worth the effort. I should go find out :)
I was heavily involved in that world in the late 90s to early 2000s, before DirecTV made it almost impossible.
I remember showing a couple friends of mine how to flash cards with a DB9 serial port card programmer. You could get every channel, including PPV and porn. I never wanted to do anything illegal except watch TV and movies, but a couple friends of mine had highly successful "satellite TV installation businesses" where they would install for legitimate DirecTV customers, then say "you know, for a couple hundred bucks I can unlock all the channels..."
It was a fun time, and what made it interesting was the cat and mouse game that DirecTV played with the hackers. We had cards working for months, but at some points in time you had to get new updates every few days to keep it working. Eventually it became impossible, but it's been over a decade so I can't remember what the final event was.
I recall talking to a school friend who happened to land a job at Microsoft working on DRM. That was their daily routine. They knew that no matter how good they were, any new measure would be countermeasured within months. The target was to stay ahead of the people cracking that protection in the (very) long run, not to win short battles.
> As a point of reference - Google’s TPU V1, which is the one that Google uses to actually run neural networks (the other versions are optimized for training) is very similar to the specs I’ve outlined above.
The TPU V1 is a 2015 chip (publicly announced in 2016, and that Google had been using internally for over a year). 4 years is a big lag in terms of technology.
Additionally, the comment about "actually run neural networks " seems to be, AFAIK, plainly wrong. The first version was limited to inference because it could only perform integer operations, while V2 and onward support both training and inference, as they can perform float operations.
TPU V1 was never available to the public, but both V2 and V3 are available on Google Cloud. I don't have any info on it, but at this point, I'd expect all V1 chips to have been deprecated and likely removed from production, given the savings in space and power from V2 and V3.
4 years is a big lag if you're not doing safety critical work. If you are doing safety critical, 4 years isn't particularly fast, but it's not unreasonable. If you've never worked in aerospace or automotive, you would be shocked at the requirements for software and hardware that can kill people if it breaks (assuming that Tesla is following the automotive equivalent of DO-178/DO-254, ISO26262)... I guess the automotive guys do have an advantage because they're not required to have independence for the various stages of verification, but it's still a big chunk of work (more than the actual development).
Everything we know about how Tesla operates as a company indicates that they do not freeze development of a technology or component and then spend four years testing and verifying it before deployment.
That's not exactly how it works. Development doesn't freeze - change requests, PRs, etc. can all be rolled in right up to the end and even after release if the appropriate process is followed, and in software, even in a safety critical world, you can pipeline updates out in a reasonably fast manner (we can get it down to about two weeks for minor updates). Hardware is an entirely different beast. You have to do FPGA development and prove-out, gate-level sims, timing accurate sims, thousands and thousands and thousands of simulated testbenches (hardware sim is orders of magnitude slower than software sim - hours per second, even on the most advanced equipment). Therefore, each testbench must be developed to test a very specific thing - in general, you can't just run your high-level software unit tests. Anyway, even if Tesla is the kind of company to roll software (doable) or boards (doable, but harder) right up to the last minute ... they do not (and neither does anybody else) spin a chip over and over again during the course of the development cycle. Speaking as the lead of a SW team that proved out a safety-critical custom SOC, it is colossally expensive. Every single iteration takes many months and millions of dollars.
You're agreeing with me. Tesla in all likelihood does not freeze development of a chip model for 4 years before releasing it. So you cannot compare Tesla's current chip with a Google chip that is 4 years old.
Well, Tesla clearly isn't doing safety critical work on autonomous systems (it wouldn't have even launched "autopilot" if it was), so 4 years is indeed a big lag.
Even if it's not now, their intention is to provide higher levels of SAE automation on the same software as time progresses, which means that the current hardware probably has to hit a higher design assurance level than the current software.
Tesla claims [1] that autopilot driving is safer than regular driving in terms of risk of accident per mile. Not sure if I agree with the way they've constructed their statistics, but for a moment take them at face value.
Isn't the safer thing, then, to release and encourage Autopilot to the largest extent possible?
That statistic is wildly and purposefully misleading. It compares the safety of Autopilot (which only drives on the freeway) with human drivers on any road. Accidents-per-mile are already much lower on the freeway.
Plus, the statistic ignores the fact that responsibility is actually shared between humans and the car. If autopilot gets into an unsafe situation and gives control to the human at the last moment, the car company might claim that the accident happened under human control. We have a long way to go before we can truly understand the safety of semi-autonomous vehicles.
I'd even expect their human-driven statistics not to be representative or useful for comparisons. Their cars are expensive, mostly excluding young and thus often inexperienced drivers. They also don't particularly target older people, the other group with high accident rates.
Controlling for all those variables is hard and not to their benefit, thus I'd never expect their PR team to do so. But I wonder if that might even creep into their automated driving stats... e.g. by drivers taking control in dangerous situations.
>> If autopilot gets into an unsafe situation and gives control to the human at the last moment
We have lots of experience with the reverse, where the human drives but the autopilot rips control away when things go wrong. Traction control, radar-triggered braking, even basic ABS is a machine taking over from the human once things get hairy. Several Air Force fighters have systems that will sometimes ignore the pilot's inputs and save the aircraft from a crash. That's the real path for semi-autonomous cars imho: you do all the driving until the robot steps in to save your ass.
(I cannot wait for the day someone tries to sell a car that will automatically obey speed limits regardless of driver input. That's a stock to short.)
Well, it’s not autopilot, it’s guided steering and gas, which on a freeway is the “easy” autopilot path: trivially (for a human) identifiable routes, signage, infrequent low visibility occluded by tight turns, etc. I’d guess a) if you removed the driver from behind the wheel the accidents per mile would skyrocket and b) humans already have a much improved accident rate on the freeway compared to general-purpose driving. Is it better than a driver driving alone? Maybe—time will tell as more doze off behind the wheel, although I am hopeful—but it’s hardly designed to operate without failure and should not be referred to as autopilot at all. It would help if Tesla provided a meaningful comparison instead of the disingenuous one they trot around.
Safety improvements on divided highways, I think, are to the point that driving is so boring for humans that any increase in safety ends up offset by human behaviors like eating, texting, phone calls, looking at the scenery, falling asleep. If auto-cars can do on-ramp to off-ramp driving 3 times (?) safer than humans (and many say that is already here), this would be a huge improvement for lots of people. Long commuters and overnight trips instead of flying, especially.
Can you elaborate a bit on how such testing is done, or share a good article on the topic? It sounds like a hard problem, needing to get things right to this degree, or else.
Not that long ago there were a few HN posts about running on the metal that started off with this [0] (also called "Space Shuttle Style"). Also speaking of NASA, they - and many other government departments - use a system called Technology Readiness Level (TRL) [1 - image] [2 - 1pg pdf]. This is used enough that you'll see it in HN comments. With humans on board, you are basically aiming for TRL 8. Look at the steps there and you'll quickly see that 4 years is pretty freaking fast. This not only includes code, but hardware. Everything has to be thoroughly vetted. In a typical contract you can go from TRL 1-3 in 6mo. 3-4 in 6mo-1yr. 4-5 in 1-2yrs. And so on. My guess is that the Tesla stuff is closer to TRL 5 or 6, since there is a driver involved. You'd need TRL 8 at level 4/5 to get fully autonomous driving approved. As it should.
As software people I think many will laugh at the low TRL of their own work. It isn't too bad or anything since other sectors need to move fast (probably security people will disagree). But other sectors need to move slow and ensure that things don't break. Because things breaking means people dying.
> As software people I think many will laugh at the low TRL of their own work. It isn't too bad or anything since other sectors need to move fast (probably security people will disagree). But other sectors need to move slow and ensure that things don't break. Because things breaking means people dying.
I doubt more than 5% of developers have ever worked on anything beyond a TRL4. SREs, maybe.
Typically, everything in civil aviation is done to DO-178C. Almost everything we work on is level A, the highest level of criticality. Military stuff is generally much less rigorous, believe it or not.
The issue isn't the actual certification, it's the rapidly changing requirements and the standard for testing most software. There are plenty of libraries and tools I'd colloquially consider TRL 8 or 9, but there are tens, sometimes thousands, of people who can make a change to that software and push an update to millions of systems with little more than a glance from an SRE running `apt-get upgrade`. The nature of network-facing software even requires you to balance the need to keep your infrastructure stable with the need to keep it secure, so you're always stuck in a loop between customers, support, management, developers, and operations where everything constantly changes.
Once software is operational, whether you are following DO-178x/33x, FDA's General Principles of Software Validation and other guidelines, ISO-whatever, or NASA/military TRL, there's a whole ton of stuff you have to do to make any changes once you reach a certain point in development. Since most businesses have the benefit of captive employees that can be woken up in the middle of the night and terra-locked systems, they never even come close to reaching that point.
It's been a while, but DO-178 is concerned with software while DO-254 covers hardware (both for aviation, FAA standards). Depending on the criticality of the component (e.g. engine control vs radios vs entertainment system) they will abide by different levels (i.e. DO-178A ... DO-178E).
For both software and hardware there will be requirements for documentation, design, verification, and testing. Some even go as far as to require the implementation of certain functional logic in multiple ways and having consensus logic (e.g. hardware logic to interpret GPS accuracy/confidence that ends up being broadcast externally).
There's also DO-160, which is concerned with environmental requirements (e.g. temperature, humidity, lighting).
This is what makes a radio that should cost $300 cost $5,000, and why FAA-certified aviation components are expensive compared to uncertified components (e.g. hobby airplane builds).
Parent comment really gives a great overview of the process.
One slight nitpick is that DO-178B and DO-178C are revisions of the standard, and the criticality is captured by the Design Assurance Level (DAL), from A to E. One would say software is developed to DO-178C DAL B, for example.
The DAL is determined before the software is started, at the systems level, according to a process described in ARP4754A (Guidelines For Development Of Civil Aircraft and Systems), looking at the system architecture, hazards, and potential mitigations of those hazards, one of which is developing software to a certain DAL.
But you don't need to train in a car, so you want to optimize cost/power for pure inference. Similarly, if all you want to do is cheap and efficient inference, it isn't clear to me that TPU V1s are obsolete today.
All Teslas need to do is run inference. I'm fairly certain training is all done either in the cloud or on datacenter GPUs.
It would be very interesting if Google was selling Tesla EOL TPU v1 units. The specs are very good for what Tesla is trying to do.
4 years is not a long time considering this is a proprietary technology that even GCP customers haven't had access to for very long. It will likely be a few more years before any TPU-like technology is available in the mass market.
To be clear, it will be free for owners that purchased the "Full Self Driving" (FSD) package.
There are two tiers-- Enhanced Autopilot (EAP) and FSD. To own FSD, you need to have already purchased EAP. However, you do not need to purchase FSD to have EAP.
Enhanced Autopilot is (in simple terms) lane keeping + adaptive cruise control. It's designed only to work on highways, and does not need the HW3 board nor will owners that only purchased this get the new hardware.
FSD owners will get the HW3 board so that Tesla can fulfill the promise of providing FSD that the owner originally purchased with their car (or after-the-fact). FSD will (in theory) handle self-driving in most cases, such as regular street driving with stop lights, stop signs, roundabouts, etc.
As long as they keep on upgrading the hardware for free until they can provide full self-driving to the owners that chose the option, it shouldn't matter?
Tesla said in October 2016 that cars with the current "HW2" generation hardware would be capable of FSD, and that they would be upgraded to that in mid-2017.
If they don't have FSD by mid-2019, that's two years off that target. Deadlines slip, companies overpromise, this is a hard problem, all that jazz, but by early 2020, people who leased Teslas that they were told would have FSD are going to be deciding what their next move is. If part of the calculation is "FSD still ain't here yet, but Tesla is promising a free upgrade to HW4 which they say will really lay the foundation for FSD this time Elon pinky-swears," I would submit that it's gonna matter.
Tesla'll lose the last strong differentiation from the competition, and Kona follow-ups will eat Model 3 margins. Tesla will get back to where it is profitable (sports cars) and will likely want to double down on electric trucks, probably with a specialized highway autopilot.
I’m not worried about people signing up to beta test dangerous tech on themselves, I worry about bystanders who made no such choice. That’s what really needs to be dealt with, and if people want to take facetious arguments about the overall lethality of existing tech to the public, let them, but I hope they brought pitchfork and torch repellent.
Does it matter for the people that die in the meantime because someone didn't read the instructions, the product description, or the in-car warnings?
When you select full self-driving capability in the Tesla configurator, it states explicitly and prominently that this isn't enabled (or ready!) yet, and is subject to legal approval.
Anyone that buys a Tesla, thinks it is fully autonomous, and dies as a result is failing natural selection. Third parties (pedestrians and other drivers) on the other hand are victims, but have there been any examples of this with Tesla cars? My not-full-self-driving AP2 Model X can detect pedestrians, and will also emergency stop if it detects things. This is aside from the fact I, as the driver, should be emergency stopping if I see a forthcoming collision with a pedestrian!
Think of Autopilot like a horse. You still need a driver, but at least you've got another set of eyes on the road. Sometimes it will notice things you don't, sometimes it will get spooked by things it shouldn't do, but two sets of eyes/two brains are better than one.
The things that could happen are: either Tesla ships full self-driving (FSD), and it had better deliver when it does (i.e. it has to work reliably).
Or it puts off the customers again, promising a new hardware 4, 5, 6, ... capable of FSD - in which case people had better read the instructions (and pay attention at all times lest they end up in a divider or under a trailer).
I bought an AP1 Tesla just before they started shipping AP2. I kept my AP1 because I doubted that they'd really be able to ship FSD anytime soon. Shortly after, a friend of mine bought an AP2 Tesla. For the next year, my AP1 performed much better at the features either one provided -- adaptive cruise control and lane-keeping -- than his AP2. And now we're at the point where it's "just kidding, you really need AP3." Still with no end (FSD) in sight. Two years down the road, and no regrets keeping my AP1.
If it isn't, then I would hope Tesla makes it right somehow (providing HW4 for free or something similar) or they'll open themselves up to be liable for false advertising.
Still makes me scratch my head when the car was billed as having "full self driving hardware" at the point of sale. This is the second hardware upgrade they've done already, although if it's free I suppose nobody but investors are complaining.
As an investor AND a Model 3 owner who purchased FSD, I'm content knowing that I'm "funding" the research that will bring us FSD. I'm content knowing when they get it right I'll have access to it and in the meantime am simply enjoying an incredible car.
I don't want to discourage you, but every indication is that Tesla is far behind the leaders in FSD. I don't know that its correct to claim to be "funding" anything that is pushing those boundaries.
Tesla has the largest sensor fleet on the road [1] (~370k vehicles with AP2.0+ hardware, adding ~25k units a month to that) contributing to the refinement of their models (~1.2 billion miles of shadow mode experience). Waymo may be closer because of Google/Alphabet resources, but I can't buy a Waymo vehicle nor ride in one, and I'd rather give Tesla the money than Google.
I got my S90D 3 weeks before AP2 came out, and I was told it was not backwards compatible. At the time, it was said that this upgrade was required for FSD.
I swapped in October to a 3PE, now I'm seeing that another upgrade is incoming, and that THIS one is required for FSD.
I've spent plenty of money with this company, but handing over $5,000 more for a feature that is unproven is not something I feel like I should need to do; especially when I was clearly told at the time that I could buy the feature later for $6,000.
If fully autonomous driving becomes available, and it costs more than what was advertised, they are going to have a big problem.
It’s really simple: our video inputs are better than cameras, react in a wide range of light conditions, and most of all are hooked up to a GGI: Genuine General Intelligence, aka human brain. Even then it takes us almost two decades before we’re safe to drive, and we still smash into things.
We’ve made extraordinary progress, but, ultimately, we’re still training systems in extremely simplistic ways to solve (conceptually) simplistic tasks. Localizing objects (into a set of categories, no less) is only a fraction of what your brain does when looking around. Moreover, we still struggle with this. One of the Tesla accidents was due to a segmentation error (two similarly colored and textured objects were merged).
Except it doesn't work where it refuses to work and the cars don't actually send back any meaningful amount of data (unless you want to train your model on lossy compression algorithms).
You'd need a storage rack in your frunk to record any amount of raw sensor data. That's before you consider their production AP hardware doesn't have cycles to spare to do any of that recording.
> the cars don't actually send back any meaningful amount of data
Source?
> You'd need a storage rack in your frunk to record any amount of raw sensor data. That's before you consider their production AP hardware doesn't have cycles to spare to do any of that recording.
I have no idea on what you base your claims, but without more information it is hard to figure out what you have in mind...
Do the math. 7 cameras at 1280x960 with 16 bpp at maybe an average 30 FPS and you're talking 500 MiB/s of just raw image data you would need to record. Where is it going? Not on any sort of storage and not over the wire to Tesla, that is clear.
What curious people have found is that the car just sends a basic disengage report with maybe 10s of h264 video or spaced far apart raw images from the cameras.
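Spelled out, the arithmetic behind that 500 MiB/s figure (using only the numbers given in the comment above):

    # Rough bandwidth for 7 cameras of raw 1280x960 frames, 16 bpp, 30 FPS
    cameras, width, height = 7, 1280, 960
    bytes_per_pixel, fps = 2, 30
    rate = cameras * width * height * bytes_per_pixel * fps
    print(rate / 2**20)   # ~492 MiB/s of raw image data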
Who the hell stores or sends raw image data? The training data that's used for offline training of the vision neural nets is probably all h264 encoded anyway.
You don't (always) need full resolution with 30 fps raw frames for training your model though. If you are looking for missing exit, stopping at a traffic light, etc. you need front facing camera (so 3) and a few raw frames (and the sensor is 12 bits not 16). One frame is 1.8MB raw, but if you train on YUV images, this would be even smaller.
With a 200MB buffer in memory, you can keep 100 frames at any time (and you have 8GB local storage for storing while waiting to upload)
You said "any meaningful amount of data", I guess this is subjective!
Something that hasn’t been true for some time. Some nights my Model 3 will upload 4-5 GB of data. What do you think that data is, Einstein? My Easter egg game high scores?
Can cameras not detect traffic lights and their signal colors, assisted with mapping data to give a fuzzy idea where traffic light recognition will be required?
Not sure if I would rely on mapping data. There are so many new-ish roads and lights around my apartment. After living here for 6 months one of the roads still isn't known by Google Maps, Apple Maps, or my Tesla's nav software.
I bought 200 shares of TSLA at around $212 per share. I also adore my Model 3, as it is just really fun to drive or ride in. You’re not discouraging me at all, but you’re really missing the fact that their fleet pulls in more data than any of their competition, even including Google’s Waymo. They’ll have the raw data and they will get the models right. Having the most diverse training data will allow them to have the best models, which will lead to FSD.
While you may be right (I honestly don't know), Tesla is clearly driving demand for this feature, proving there is a market for it, and causing other companies to heavily invest in it.
I would add to this that Autopilot is great in stop-and-go traffic. It makes an annoying situation so much more pleasant when the car is doing all the stopping and going for you.
Those are the simplest two situations and were already well handled by driving assistance years ago (adaptive cruise control dates back to the late 90s, lane assistance the early aughts, jam automation is probably the youngest with the first live products maybe 5 years back or so?)
> That is kind of deal breaker for me, when talking about 1+ ton metal box moving me around I will take "slow to market but not blind" instead.
The way Elon Musk tells it, they put it into production once it was safer per mile than human drivers. Which seems like a legitimate point.
You use something when it's better than what you would have to use in the alternative, not when it's 100% perfect with no possibility of ever making a mistake under any circumstances. Which is probably not even actually possible to do.
The problem is that if human drivers kill more than 30,000 people in a year, it's not news, because it's been that way forever. But if one autonomous car kills one person, it's the top story.
> The way Elon Musk tells it, they put it into production once it was safer per mile than human drivers.
I'd caution you against believing anything that he says. It's easy to extract from the data an unfair comparison that shows the group with Tesla AP being safer, but comparing it against cars in totally different driving context or against the safety of motorcycles is just completely deceptive. The stats indicate that Tesla w/AP enabled on freeways is less safe than Tesla w/o AP enabled on freeways.
> Last I checked insurance companies gave a discount for AP Teslas over non-AP Teslas, suggesting the opposite. Do you have some support backing this up?
That's all marketing fluff.
The only U.S. insurance company offering such a discount is some random startup insurance company available to only a small percent of the population. The company is about 2 years old, requires significant tracking of your driving behavior, and it's entirely unclear why they are offering the discount.
The only UK insurance company offering such a discount said “it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.” It made this decision working closely together with Tesla, it wasn't based on any actuarial data.
According to the article at [1] which another user just showed me, Liberty Mutual is one of the insurance companies, which absolutely doesn't fit your description.
Also, from that article, it looks like the NHTSA found that crashes were 40% lower in cars with autopilot. [2] I don't know enough about insurance and how it works to know if that counts at all, but it shows that it's not all marketing fluff.
FWIW, it's not Tesla-specific. Many luxury cars have similar features to Autopilot and get similar discounts for them.
"Liberty Mutual Insurance offers discounts for various electronic safety features; such as Adaptive Cruise Control, Adaptive Headlights, Collision Preparation Systems (including automatic braking), Blind Spot Warning, Lane Departure Warning and Rear-View Cameras"
I'd guess that's due more to Teslas being expensive cars that are generally driven by older, more responsible drivers and have a bunch of other safety features (AEB, stability control, etc.)
I'd be interested in seeing crash data for Teslas compared with other $50-$130k cars less than 5 years old.
There's no need to guess, the study was actually done on Teslas with and without AP hardware.
I don't know it well enough to know if there were other factors at play, but it wasn't just that Teslas in general were less likely to crash, only the ones with AP hardware installed.
How? I'm not trying to instigate something here, but a quick search of this isn't showing anything except a bunch of groaning about how unfair it is that Tesla gets all kinds of tax breaks for its cars because it's the only one selling electric at any kind of scale.
That doesn't say they subsidized it, just that they are working with some insurance companies.
I thought subsidize meant that they would be basically paying the difference.
If there's no money changing hands in this partnership, then either the insurance companies are really dumb and are just throwing money away, or there's some truth to the stats that Tesla keeps bringing up (40% fewer crashes in cars with autopilot) (or there's something else I'm missing!)
> Last I checked insurance companies gave a discount for AP Teslas over non-AP Teslas, suggesting the opposite. Do you have some support backing this up?
Insurance companies may do this because it is a strong signal the customer is gullible to questionable upsells. Maybe they've found that statistically they can sell them more things like overpriced umbrella insurance through direct mail marketing pack ins with their car insurance bill.
> The way Elon Musk tells it, they put it into production once it was safer per mile than human drivers. Which seems like a legitimate point.
It's also only used/active when it is reasonably able to work. Therefore the miles used are inherently better/safer as it is (much less whiteout / torrential AP driving).
This is a luxury human drivers do not have, short of "do not drive those miles at all".
I have always hated this metric. It's disingenuous, and selective.
Precisely. During one nasty ice storm at night I might see more wrecked cars in a single hour than I see the rest of the year combined. And no currently available automation system would dare operate in those conditions, so human drivers take a huge statistical hit from conditions like that but the computers don't. All of these 'safer than humans' metrics are apples to oranges.
So look to other metrics, like the effect on crashes before and after the introduction of the autopilot feature. But those also show a safety improvement.
They may not be taking the hardest miles, but the only way to improve the overall average is to be doing better than the status quo on the miles they are taking.
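A toy example of that point: even if automation only ever takes the easy miles, beating humans on those miles still lowers the fleet-wide average (all numbers below are invented for illustration, not real statistics):

    # Overall fatality rate as a mileage-weighted average
    easy_share, hard_share = 0.7, 0.3        # fraction of total miles
    human_easy, human_hard = 0.8, 2.0        # fatalities per 100M miles
    ap_easy = 0.5                            # automation only drives the easy miles

    baseline = easy_share * human_easy + hard_share * human_hard   # 1.16
    with_ap  = easy_share * ap_easy   + hard_share * human_hard    # 0.95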
> they put it into production once it was safer per mile than human drivers
The thing is that I can take precautions to increase my safety margin - don't drive impaired, drive in good weather conditions, get lots of sleep, don't be aggressive, don't follow closely, etc. etc. etc.
I don't have a Tesla, but I imagine there's nothing I can do to voluntarily make it safer. It functions normally until it doesn't.
You can pay attention and keep your hands on the wheel. From what I know all the people who had accidents failed to do this. The challenge seems to be how to overcome the false sense of security drivers get, precisely because the thing works so well most of the time.
> "You can pay attention and keep your hands on the wheel."
In that sense these Level 2 systems are a lot like q-tips. The box says you shouldn't stick them in your ear, but everybody knows that's what you buy them for. If everybody were to follow the directions, the consumer desirability of the product would plummet.
Even though you're not supposed to take your hands off the wheel, level 2 systems are often referred to as "hands off". And during a 60 Minutes interview Elon Musk is seen taking his hands off the wheel. Unofficially the point of Level 2 automation is that you can take your hands off the wheel. Officially, your hands must remain on the wheel. The unofficial usecase is what actually gets consumers enthusiastic, but the official usecase is what companies like Tesla cover their ass with.
The actual point of a Level-2 system (other than as a safety measure, e.g. wrt emergency braking) is that it provides driver assistance for things other than having to pay attention to the road & surroundings at all times - and there are plenty of such things that can meaningfully impact driver's comfort. That's all there is to it. Don't think "Autopilot"; think "Fly-by-wire".
Yeah that's nice and all, yet when you have Elon Musk promoting his cars by taking his hands off the steering wheel, it's pretty clear that Tesla's Level 2 system is being sold to consumers as "hands off".
Why are people assuming in this thread that Tesla is the only auto maker that's providing Level-2 driver assistance? This couldn't be farther from the truth.
The topic of this thread is Tesla, specifically Tesla's implementation of automation features. That is why I am talking about Tesla's promotion of their Level 2 system.
This is a very good point, and one I hadn't considered.
It's common knowledge that most people think they're better than average drivers, so we tend to compensate and believe we're no better than the statistical average driver. But yes indeed, most of us in fact are better than average drivers whenever we're doing our best to drive carefully.
In theory, you can take extra precautions, sure, but will you take these precautions all the time?
Human nature says no: you're in a hurry, you've eaten too much, you have a cold... all these things reduce your ability to drive safely.
There's zero chance that they know it's safer per mile than humans. Incidents are so few and far between that you'd need way more data than what the paltry Tesla fleet is collecting before crashes per mile could be estimated with any meaningful accuracy.
It is a garden-variety advanced-driver-assistance system (ADAS) same as other top-of-the-line cars have, and not even the most advanced on the market! To describe it as being anything close to "self-driving" is so implausibly charitable that it borders on being disingenuous.
Lots of other vehicles have traffic aware cruise control. Can you name a single non-Tesla that can automatically speed up / slow down to switch lanes and take on ramps / off ramps based on navigation?
There are lots of things wrong with auto-pilot, but it is far from "garden-variety".
So, their navigation subsystem can limit the cruise-control speed before an off-ramp/lane change/whatever? That's nice, but it's a gimmick. It's not meaningful driver assistance, let alone self-driving. Meanwhile, the newest Audi top-of-the-line car (already sporting one of the most advanced feature sets when it comes to driver assistance) will reportedly be able to "drive" itself in very slow-going, very heavy traffic, even if the driver momentarily stops paying attention. That's at least as good as what Tesla will be able to do in the same timeframe.
Teslas do what, exactly? As far as EAP goes, you're supposed to pay attention at all times - the system is not able to make up for even momentary distraction. This is where Audi is claiming to be able to do at least slightly better, in highly selective/favorable conditions. FSD is its own thing that's still far in the future, so I'm not sure how sensible it is to bring it up here.
What are you saying? Tesla does just fine in traffic, you hit a button on the steering wheel every minute or so to show you are paying attention to the road and that’s it. You are still looking at the road, the readout showing the mapping of what it sees to the real world.
The whole point of this discussion is that "Oh, it does great in practice" is a highly misleading metric - it tells us nothing about how the system might 'fail' in a worst-case scenario. That's the whole point of having these "levels" - and Tesla is still not claiming anything higher than Level-2 for their existing EAP.
Right, so you still sit in the driver seat watching the road. Still beats switching back and forth between gas and brake, and constantly holding the wheel.
Cadillac Super Cruise is limited to select roads only; their site states 130,000 miles of mapped highways. The Tesla system (I am only referring to EAP) works on any road where it can find lane lines. Having used it, the hardest part wasn't trusting it to keep in the lane but stopping. However, that part is true with any traffic-aware cruise control system.
Is it flawless? No, but it does work in conditions I didn't find fun driving in. An example was a two-lane highway, at night, in the rain, with enough jokers with badly aimed lights or not dimming their lights. It did very well and let me feel confident looking away from the road if an oncoming car's lights were too bright.
I did not buy the FSD suite, I don't have a desire for that level of autonomy and I am not confident anyone will pull it off soon. I mean I have seen examples where they map it all but that isn't the same.
I don't believe this is correct. If you want to change lanes, you engage the turn signal, Supercruise will show blue light. The blue light denotes Super cruise is paused while you change lanes. After you've changed lanes yourself, you wait for the light to glow blue again so Supercruise can resume.
Tesla will speed up or slow down and change lanes automatically. This is not the same. Also, Supercruise is only enabled in very limited circumstances. You can use autopilot anywhere even if it isn't officially recommended.
> Tesla will speed up or slow down and change lanes automatically.
You are still required to engage the turn signal; Tesla will speed up / slow down to execute the merge, eventually, and will steer the merge for you but does not make the decision to do so autonomously (though it'll recommend a new lane to you with a blue lane line on your dash).
IIRC, California posts the annual disengagement reports at the end of January. It’ll be interesting to see if Tesla filed one for 2018. They didn’t report any miles driven in 2017. In 2016, they reported a small number of miles, maybe the number needed to record their FSD demo that they have on the Autopilot page.
The problem is that with a L2 system like the current autopilot where you have to take over when you exit a highway and encounter for example a traffic light, you trigger effectively a "disengagement" while operating with the expected flow of the system. It means that the "disengagement metric" gets very noisy (I'd say useless) in this case.
I don’t mean that Tesla will report an anomalous number of disengagements, I mean whether they will file a report at all, which they didn’t do in 2017, presumably because they stopped autonomous driving on real roads. These reports don’t cover systems like the current Autopilot, as it is not considered autonomous driving.
> with all vehicle sensors (radar, ultrasonics, cameras) staying the same.
Still no LIDAR? LIDAR was the enabling technology in the DARPA challenges, and it remains so today. I can think of one Tesla customer who would still have a head if his "self driving" car was equipped with LIDAR.
>Perception is a game of statistics. We believe it will ultimately be entirely possible to build a self-driving car that can get by on, for instance, cameras alone. However, getting autonomy out safely, quickly, and broadly means driving down errors as quickly as possible. Crudely speaking, if we have three independent modalities with epsilon miss-detection-rates and we combine them we can achieve an epsilon³ rate in perception. In practice, relatively orthogonal failure modes won’t achieve that level of benefit, however, an error every million miles can get boosted to an error every billion miles. It is extremely difficult to achieve this level of accuracy with a single modality alone.
>Different sensor modalities have different strengths and weaknesses; thus, incorporating multiple modalities drives orders of magnitude improvements in the reliability of the system. Cameras suffer from difficulty in low-light and high dynamic range scenarios; radars suffer from limited resolution and artifacts due to multi-path and doppler ambiguity; lidars “see” obscurants.
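For what it's worth, the epsilon³ claim in the first quoted paragraph is just the independence assumption written out: if each modality independently misses an object with probability ε, and the fused system only fails when all three fail, then

    P(\text{miss}) = \varepsilon_{\text{camera}} \cdot \varepsilon_{\text{radar}} \cdot \varepsilon_{\text{lidar}} = \varepsilon^{3},
    \qquad \text{e.g. } \varepsilon = 10^{-3} \Rightarrow \varepsilon^{3} = 10^{-9}

The quote's own "million miles to a billion miles" figure concedes that correlated failure modes buy only about three orders of magnitude in practice, not the full cube.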
That's what's so striking about this. The new hardware can run neural nets really fast. So they'll be able to do something that can do the job most of the time, but will still screw up badly some of the time.
Not even stereo cameras. That's surprising. With all that compute power, depth from stereo should work well. A 3 camera system with all cameras pointed in roughly the same direction and arranged in a triangle, not a line, is resistant to many illusions that will trouble a 2-camera system. You don't need 3 cameras most of the time, but when you need them, you need them.
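As a rough illustration of how cheap depth-from-stereo is to compute once you have two overlapping, calibrated cameras - a minimal OpenCV sketch, with placeholder file names and calibration values (none of this reflects Tesla's actual camera geometry):

    import cv2
    import numpy as np

    # Two already-rectified grayscale frames from a horizontally offset camera pair.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching gives per-pixel disparity (returned in 1/16-pixel units).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # depth = focal_length_px * baseline_m / disparity_px (placeholder calibration values)
    focal_length_px = 1000.0
    baseline_m = 0.12
    with np.errstate(divide="ignore", invalid="ignore"):
        depth_m = focal_length_px * baseline_m / disparity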
Tesla AP2.0+ vehicles have three forward facing cameras, two on each side (front quarter panels, side B pillars), and one rear camera, with overlap between them all. This should be more than sufficient to use structure from motion [1] instead of LIDAR for ranging (not including their ultrasonics).
> Particularly noticeable in Erik’s demo, though, was what seemed to be a slight lag in how other vehicles’ avatars are displayed on the Model S’ instrument cluster, particularly when they are overtaking the electric car. That said, considering that blind spot monitoring is utilizing video feeds from the side and rear cameras, these slight lags could be due to the system shifting from one camera to another. This was particularly notable at around the 1:25 mark in the owner-enthusiast’s video.
Which leads me to believe that the sensors (cameras, front radar, ultrasonics) are sufficient, but also explains why the NN hardware upgrade is going to be necessary. Tesla bet that it was going to be cheaper to drive down the cost of the processing hardware vs being beholden to LIDAR, and it looks like they might be right.
> That's what's so striking about this. The new hardware can run neural nets really fast. So they'll be able to do something that can do the job most of the time, but will still screw up badly some of the time.
You have perfectly captured the reason I don't trust myself to operate a car.
I think that Tesla is moving too fast for public perception. A person is smart, but people are dumb, panicky animals, and expected value and engineering [1] set speed limits and goals that move far faster than PR; cf. Greenpeace's anti-nuclear stance being significantly responsible for global climate change. But "they have to never make any mistakes" is a crippling double standard.
I find people’s response to autopilot failures to be really interesting. Autopilot is different than a human and is going to make different kinds of mistakes than a human, those mistakes are going to be frustrating and look “obvious” to humans.
On the flip side every day humans make mistakes that autopilot would avoid and would be “obvious” to an AP system.
Should we judge the computers vs perfection? Judge them like we judge humans? If a system came out that could be retrofit on every car in the country and make 1% fewer mistakes than humans, but the mistakes it makes are “obvious” and stupid, is it an improvement? Would we accept it? What about 10% or 50%?
I'm fine with accepting a system that's better than humans, given some a priori evidence of such.
But this isn't the case. This isn't a case of a company launching a system that performs better than humans through extensive testing and carefully weighing the risks. This is a case of a company intentionally falsely advertising something, resulting in preventable human loss.
I'm having a hard time perceiving 3 fatalities in 1 billion miles of Tesla Autopilot as false advertising that resulted in preventable human loss. The average number of fatalities was 11.6 per 1 billion vehicle miles in 2017.
That's less than 1/3 the rate of average.*
I understand the comparison is not direct since autopilot driving happens on the highway and those are general miles, but it is the best number we have for comparison's sake right now.
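Taking both figures at face value, the comparison works out to roughly a quarter of the average rate:

    # 3 fatalities over ~1 billion Autopilot miles vs. the 2017 US average
    autopilot_rate = 3 / 1e9          # fatalities per vehicle mile
    us_average = 11.6 / 1e9           # 11.6 per billion VMT (i.e. 1.16 per 100 million)
    print(autopilot_rate / us_average)   # ~0.26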
Those numbers are not comparable, and you don't have enough information to make such claim. It is plainly irrelevant, no matter how "the best data available" it is.
These are traffic deaths per 100 million vehicle miles by functional system type. There is not a single road type on this list, including the lowest (urban freeways), with fewer fatalities per 100 million vehicle miles than Tesla Autopilot (which is at about 0.3 fatalities per 100 million VMT).
Nailed it. One additional piece of information to throw in the pile - insurance claims for Teslas show more frequent accident rates than comparable luxury vehicles [1]. Again, we would need granular data to draw actual conclusions, but Tesla's marketing-friendly claims on autopilot safety are mostly bogus.
> I find people’s response to autopilot failures to be really interesting. Autopilot is different than a human and is going to make different kinds of mistakes than a human, those mistakes are going to be frustrating and look “obvious” to humans.
> On the flip side every day humans make mistakes that autopilot would avoid and would be “obvious” to an AP system.
> Should we judge the computers vs perfection? Judge them like we judge humans? If a system came out that could be retrofit on every car in the country and make 1% fewer mistakes than humans, but the mistakes it makes are “obvious” and stupid, is it an improvement? Would we accept it? What about 10% or 50%?
When we can meaningfully charge a computer with a crime (or better, the developer of the software that caused the crash!), maybe then will public sentiment be aligned with the reality you envision.
If you want perfection, we will never get autonomous driving, and 35k people a year will keep dying in the USA.
What if there's 10 Tesla fatalities a year, but similar people driving similar cars would have expected 50 fatalities?
Granted Tesla hasn't proven this, but there are still going to be deaths either way, and we shouldn't just kill off the technology because it isn't perfect.
Keep in mind the decapitation was from someone watching a Disney DVD instead of using the AP like it was intended.
You don't need stereo to perceive depth. You can do it with one camera over time, especially in an automotive case where the vehicle is always moving when it's making decisions.
> i thought LIDAR has high cost barrier to be in consumer vehicles.
Ok, so you save a few bucks, kill a few people... then eventually come to the same conclusion every other autonomous driving company already has: you need LIDAR for this to work reliably?
> I don't see how you could be so certain that computers need to
We're nowhere close to emulating what goes on in a human brain... neural networks are an absurdly simplified approximation of simple life-forms... it took the K supercomputer (having 705,024 cores and 1.4 Petabytes of RAM) 40 minutes to emulate 1 second of 1 percent of human brain activity. [1]
Since we cannot come close to sensory input processing a human can do... we need to augment our systems with better sensors. LIDAR is a better sensor, and can help "see" things that would otherwise go missed.
> you get it working without LIDAR and make boatloads of money
And why can't you make "boatloads of money" while using LIDAR? Killing people is bad for business... no?
Before you jump on it: no, LIDAR is no silver bullet... but it's available technology that can help quite a bit. Why not use it? Unless you've overpromised what your price point will be for technology that is yet to exist?
> "Before you jump on it, no LIDAR is no silver bullet... but it's available technology that can help quite a bit. Why not use it?"
Because Elon wanted to advertise his cars as having all the hardware necessary, because the automotive automation hype is very real. He wanted to sell luxury cars to tech enthused idealists, but he didn't want to cut down on his profit margins by actually delivering what he was selling.
The fact that he always admitted the software was lacking gave him plenty of cover. It allows his customers to continue to believe, even though their car can't actually drive itself. That's the beauty of selling hopes and dreams instead of anything concrete.
Humans don't need LIDAR, but we rely on the massive amount of compute power available to us in our brains and a variety of other sensory inputs that go beyond simple vision. Of course, we're easily distracted, which is one reason why we aren't perfect drivers.
I really believe you can make a vision+radar only self driving car, but not without a lot of compute power. LIDAR just makes it easier. Will HW3 be enough? Probably for a lot of scenarios, but it's definitely not going to be level 4/5 anytime soon.
They are a bit expensive but prices have decreased rapidly over the last few years. You can get a good LIDAR for less than Tesla charges for their Autopilot. You can get consumer level ones that go out to 18 meters for around $100.
I don't know of any. But for some cases that Teslas seem to have trouble handling, like plowing into stopped white vehicles or concrete dividers[1], I think a 2D LIDAR like this one[2] would still be useful enough to have saved lives.
That's more or less why it's a pretty heated race to get a reliable solid state LIDAR. I think within a couple years we will have sub $2500 LIDAR units.
On top of that, most LIDAR units I’ve seen have been quite bulky, which would make designing a sexy-looking vehicle around them near impossible. One of Tesla’s goals and main selling points is making “normal”-looking cars instead of space bugs, and in its present state, LIDAR integration would go against that.
Putting form before function is almost never a good idea. If they actually got ugly Level 5 automation to work, it would sell. Period. And it would sell well enough to likely change public perception of what looks good and what looks ugly. Changes in technology causing changes in fashion is well documented.
There are many startups coming up with solid-state lidars that will be commercialized in the next two years. Teslas of today without lidars will be at a big disadvantage compared to the competition during that time. I am assuming Tesla will also release one with lidar in 2020, though Elon Musk publicly says it is not needed. Like Steve Jobs, he will change his statement when Tesla releases one with lidar.
Interesting because to me, LIDAR is an absolute necessity. Startups use cameras (+ radar) only because it's technically and financially "easier" than other technology. I don't know of any company with Level 3 and above self driving capabilities not using LIDAR (though I would love to be proved wrong because that would be awesome to see)
In the long run, binocular cameras should be enough. We are an existence proof of that. But that requires human-level cognition and inference, which is not something that's even on the horizon. Until then, LIDAR will be a key technology.
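To be clear, the geometry side of "binocular" is the easy part: recovering depth from two views is one line of math (made-up camera parameters below). The part that needs human-level cognition is everything layered on top of that raw depth: segmentation, prediction, inference.

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Pinhole stereo relation: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    # With a 1000 px focal length and a 30 cm baseline, a 6 px disparity
    # puts a target at 50 m -- and a single pixel of matching error moves
    # that estimate by 7-10 m, which is why stereo depth gets noisy at range.
    print(depth_from_disparity(1000, 0.30, 6.0))  # 50.0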
>In the long run, binocular cameras should be enough.
No, they won't. They have all the issues that pure visible-light sensors fundamentally have in Earth weather and lighting conditions. There hasn't been much choice for human drivers, obviously (though in principle we could have pushed for better HUDs ages ago, and could still make use of AR), but with the switch to computers we can and should do better. It is foolish not to make use of better data input.
>We are an existence proof of that.
Proof of what? Millions of deaths and injuries every single year due to driving? We tolerate it because the benefits are even bigger and there aren't any other options. It is literally worth mass suffering and death, not to mention economic expense, to gain arbitrary point-to-point mechanized transportation. But that acceptance is purely relative, because there is nothing better; as with medical technology, standards can and will change once we can improve. There is no reason that self-driving cars shouldn't generally be able to see animals (deer, moose, etc.) on a pitch-black night in the fog, for example, and keep from running into them. Yeah, that would be impossible for us, but that's not a law of physics, just a limitation of information input restricted purely to the visual spectrum. Using humans as a standard is really foolish given how objectively terrible we are at this.
> Using humans as a standard is really foolish given how objectively terrible we are at this.
Yeah, it would be foolish to use my grandmother as a benchmark for driverless cars. Humanity would experience a net increase in safety if she were to take a driverless car to church.
But even the best driverless car today, with all its LIDARs, can't outperform my father, a 40-year veteran of UPS with literally millions of accident-free miles under his belt in every driving condition imaginable: rain, sleet, snow, fog, bright sun. All with binocular cameras and a finely tuned driving ethic. If I presented UPS with a fleet of similarly capable driverless trucks, I would be a billionaire, and the world would be net safer.
US car fatalities were at 1.16 per 100 million miles in 2017 [0]. Anecdotal evidence for accidents, based on multiple orders of magnitude fewer miles, doesn't say much (fatalities were the first data I could find) [1].
Anecdotal evidence shows that it's possible. It shows that, under the right conditions with the right controller, it's possible for a human with binocular camera sensors to go millions of miles without making a single mistake.
Over all human drivers we don't get that result, of course. But how many were following too closely? How many were distracted? How many were sleepy? How many were intoxicated? How many were going too fast? How many were old? How many have a slow reaction time? How many weren't wearing their glasses? All of the above factors have nothing to do with the sensory input to the controller, but are indictments of the individual controllers themselves.
Some human controllers are better than others. Some people hit parked cars. Some people go millions of miles without hitting anything. They both have binocular input sensors. If we equip a car with binocular cameras, that doesn't mean we will have the same result as humans in aggregate. Computers will never get drunk. Computers will never be distracted. In the limit, where we have the right cognition, I believe we won't need LIDAR to achieve better aggregate results than the ones you quoted. It would be as if everyone drove like my father, rather than grandmother.
What do you think the fatality rate over all of UPS, FedEx, and USPS is? Some quick Googling shows that UPS alone logs over 3 billion miles per year for their fleet, with only 25 deaths per year. That's about 0.8 deaths per 100 million miles. So we can see that with proper training, a human controller can achieve a fatality rate roughly 30% lower than the general population. And the question still remains -- what caused those deaths? Were they due to being limited to binocular cameras? Or were they something we could program away, like being distracted or driving drunk? Or maybe some were unavoidable under any circumstances? I don't know, but it definitely shows me that the idea that humans are de facto terrible at driving, and therefore binocular sensors are not enough for driverless cars (again, in the long run), is questionable.
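Working those figures through (the 3 billion miles/year and ~25 deaths/year are the rough numbers from that quick Googling, not official statistics):

    us_rate = 1.16                  # 2017 US fatalities per 100 million miles [0]
    ups_rate = 25 / (3e9 / 1e8)     # ~0.83 fatalities per 100 million miles
    print(f"UPS fleet: {ups_rate:.2f}/100M miles, "
          f"~{1 - ups_rate / us_rate:.0%} below the general population")
    # -> UPS fleet: 0.83/100M miles, ~28% below the general population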
Even when cameras are "enough," LIDAR will still provide additional data not obtainable through cameras, allowing even safer driving. If humans had evolved biological LIDAR, we would use it while driving.
I may be misremembering, but wasn't one of Tesla's arguments that LIDAR would actually be a disadvantage?
LIDAR has a lot more "false positives" if I remember correctly, and a lot of the work is going toward filtering LIDAR data and cleaning it up in cars that use it. I could see not wanting to spend the time and effort in trying to tame something like that when they have something working pretty well right now with different kinds of sensors.
I think at this stage, nobody is certain. But I tend to side with more information is better, and you filter what you don't need. Same reason sensor fusion algorithms use the gyroscope and accelerometer in concert to measure movement.
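For example, the textbook complementary filter blends a gyro (fast but drifting) with an accelerometer (noisy but drift-free) rather than trusting either alone. A minimal sketch with made-up constants, not any vehicle's actual code:

    def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
        """Estimate a tilt angle by integrating the gyro for short-term
        accuracy, then nudging toward the accelerometer to cancel drift."""
        return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

    angle = 0.0
    for gyro_rate_rad_s, accel_angle_rad in [(0.5, 0.01), (0.4, 0.02), (0.3, 0.02)]:
        angle = complementary_filter(angle, gyro_rate_rad_s, accel_angle_rad, dt=0.01)
    print(round(angle, 4))  # blended estimate, better than either sensor alone

The camera/LIDAR situation is analogous: each sensor's weaknesses get filtered out using the other's strengths.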
Yes, but that's what I was getting at. You filter those out from the camera. The reverse is probably true as well, where you need depth since an image can't give you enough information.
Insufficient binocular vision isn't always the cause, though. Not all human controllers are equally capable. There is such a thing as the best driver in the world and the worst driver in the world. For example, my father worked as a driver for UPS for 40 years. He drove literally millions of miles in every condition imaginable and never got into an accident. Not a single fender bender. My sister, on the other hand, has been driving for a decade and has been in dozens. They have the same sensory input but vastly different performance. I would definitely buy a driverless car that performs like my father. If I could build one like that, I would be a billionaire. Not so much with one that performs like my sister.
Human eyes take in data that's not available to a camera. For example, we can detect a single photon. We can also change the lens shape of the eye. And the brain uses more than just eyes to drive. We use our ears, for example, and our vestibular system. And there are even parts of our body at work that don't talk to the brain... our brainstem certainly handles many aspects of driving without consulting the eyes. So I wouldn't say there's an existence proof there.
You are ignoring the cost of human labor, which in the US is at minimum $7.25 an hour. That's why Waymo is targeting taxis: they don't have to pay those labor costs if they can get rid of the drivers.
Can you blame them? They were bitten once by relying too much on proprietary tech that they had to license from Mobileye, who eventually pulled the plug. I can't imagine they are in any rush to repeat that process.
Bringing as much of the software and architecture in-house as possible shields them a bit more from that happening again.
I have no idea if that is the ultimate reason, or if it was even a significant reason, but it sure can't hurt!