I thought that was (also?) a problem with the radar, where the signal bounced under the truck and the system thought the road was clear.



> I thought that was (also?) a problem with the radar, where the signal bounced under the truck and the system thought the road was clear.

Ultimately this was driver error, since he wasn't paying attention and didn't brake.

As for why Autopilot did not engage the brakes, you could blame any of the forward-facing sensors, since they all failed to see the truck.

If the vehicle had windshield-height radar, or a better vision system, perhaps it would have detected the trailer.

The problem with the radar was it couldn't see the object at that height.

The vision system confused the trailer for an overhead road sign.

After the incident, I remember some people reporting more braking occurring on highways underneath overhead signage. That made me think the fix they put in place was to raise the threshold for what is considered an overhead sign. That is, they chose to err on the side of treating a possible sign as an object ahead.

That may have been a temporary fix. I'm only speculating based on driver reports that appeared in /r/teslamotors a month or two after the crash was reported.
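
To make the speculation concrete, here's roughly what that kind of threshold change could look like. This is a toy sketch; the names and numbers are made up, not anything from Tesla:

    # Toy sketch of the speculated threshold change. All names and
    # numbers are hypothetical; this is not Tesla's actual logic.
    def classify_stationary_return(estimated_height_m, sign_clearance_m=4.5):
        """Classify a stationary radar return as overhead structure or obstacle.

        If the return's estimated height clears the vehicle by a comfortable
        margin, treat it like a sign or bridge and take no action; otherwise
        treat it as something in the driving path. Raising sign_clearance_m
        (the speculated fix) pushes borderline returns, like the side of a
        high trailer, into the "obstacle" bucket, at the cost of occasional
        phantom braking under genuine overhead signs.
        """
        if estimated_height_m >= sign_clearance_m:
            return "overhead_sign"  # assumed to clear the roof; ignore
        return "obstacle"           # assumed to be in the path; brake/alert

    print(classify_stationary_return(6.0))  # sign gantry -> overhead_sign
    print(classify_stationary_return(3.0))  # trailer side -> obstacle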

I don't know whether Tesla ever gave an official statement about how they fixed that issue. Tesla wasn't found to be at fault so they probably didn't have to. Also, the whole system is constantly under development.



Oh interesting. That's a pretty detailed report.

One thing jumps out at me,

> This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.

Relying on whitelists seems like a hack. Then again, I'm not building it =)
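
For anyone curious what "geocoded whitelist" might mean mechanically, here's a minimal sketch: a list of known-safe stationary radar objects, matched by distance. The class name, match radius, and haversine lookup are all my assumptions:

    # Minimal sketch of a geocoded whitelist: stationary radar objects the
    # fleet has safely driven past, matched by distance. The class name,
    # match radius, and lookup scheme are all assumptions.
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 6371000 * 2 * asin(sqrt(a))

    class GeocodedWhitelist:
        """Locations of stationary radar objects known to be safe to pass."""

        def __init__(self, match_radius_m=25.0):
            self.match_radius_m = match_radius_m
            self._entries = []  # (lat, lon) pairs of whitelisted objects

        def add(self, lat, lon):
            self._entries.append((lat, lon))

        def contains(self, lat, lon):
            return any(haversine_m(lat, lon, elat, elon) <= self.match_radius_m
                       for elat, elon in self._entries)

    # Usage: a stationary return at a known-safe spot doesn't trigger braking.
    wl = GeocodedWhitelist()
    wl.add(27.0123, -82.4567)  # a sign gantry logged by the fleet
    should_brake = not wl.contains(27.0124, -82.4567)
    print(should_brake)        # False: the return is whitelisted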


> Relying on whitelists seems like a hack. Then again, I'm not building it =)

Agree that it feels hacky at first, but once the whitelist dataset gets huge, it becomes unique training data for Tesla (and hopefully their machine learning will be able to generalize from it).


> Agree that it feels hacky at first, but once the whitelist dataset gets huge, it becomes unique training data for Tesla (and hopefully their machine learning will be able to generalize from it).

I guess. But then you still have to deal with rollout in other countries, and with maintaining sign locations, which can change over time, get removed, etc. Still hacky IMHO.

How would machine learning make use of white-listed data? I doubt they could use that data to predict the GPS location of unknown signs.

If you mean image recognition, I assume if machine learning could properly identify the signs with accuracy, then they wouldn't need the whitelist. Then again, maybe they truly haven't collected a full overhead-sign dataset yet. I'd be shocked though if they don't by now. Anyway, you could be right. It would be fun to learn more about these setups.
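
If I had to guess, the whitelist becomes useful to machine learning as free labels: pair each whitelisted radar hit with the camera frame from that moment, and you get "safe overhead structure" training examples without any human annotation. A purely hypothetical sketch (reusing the whitelist idea from upthread):

    # Purely hypothetical sketch: any object with a contains(lat, lon)
    # method (like the whitelist above) works here; this is not a known
    # Tesla pipeline.
    from dataclasses import dataclass

    @dataclass
    class FleetObservation:
        camera_frame_id: str  # reference to the stored image
        lat: float
        lon: float

    def label_observations(observations, whitelist):
        """Turn raw fleet observations into (frame, label) training pairs.

        Frames whose radar return matches a whitelisted location get a
        "safe_overhead_structure" label; everything else stays unlabeled
        rather than being assumed dangerous.
        """
        return [(obs.camera_frame_id, "safe_overhead_structure")
                for obs in observations
                if whitelist.contains(obs.lat, obs.lon)]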



