
I have some personal experience as well, and I didn't pick up on any dishonesty like you describe. There was a consistent emphasis on safety, and multiple levels of thorough testing. When I left, I had generally positive opinions about the project.


They definitely care about safety. I don't doubt their dedication to preventing accidents. My objections come from their hand-waving disengagements out of the official results, which were shared with both regulators and Alphabet leadership. This was done to make their lead look greater than it was and to hide the slow progress on simple, required driving maneuvers like yielding at yield signs and unprotected left turns. My position had me handling driver issues directly, and what the reports said did not line up with the drivers', or my own, extensive time in the actual car.


I don't know about the disengagement rates, but the state of the training data alone scares the bejeebus out of me. Some training data was released and posted here on HN some time ago, and the discussion was about how bad it was. There were all sorts of crazy things, like signage annotated in the trees and other labels that were obviously wrong to a human. Yet this was supposedly the training data for some ML system. It was at this point that I just shook my head and walked away from interest in the subject.
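For what it's worth, label errors like "sign in the trees" are the kind of thing a cheap automated sanity check can flag before training. Here's a minimal sketch in Python; the class names and plausibility thresholds are made up for illustration, not anything from a real pipeline:

    from dataclasses import dataclass

    @dataclass
    class Box:
        """Axis-aligned bounding box in normalized image coords (0..1, y=0 at top)."""
        label: str
        x: float
        y: float
        w: float
        h: float

    def suspicious(box: Box) -> list[str]:
        """Return reasons an annotation looks implausible.
        Thresholds here are illustrative guesses, not tuned values."""
        reasons = []
        if box.label == "traffic_sign":
            # A "sign" whose entire box sits in the top of the frame is
            # often a mislabeled treetop or overhead clutter.
            if box.y + box.h < 0.15:
                reasons.append("sign sits entirely in the top 15% of the frame")
            # A sign filling a quarter of the image is almost certainly wrong.
            if box.w * box.h > 0.25:
                reasons.append("sign covers >25% of the image")
        if box.w <= 0 or box.h <= 0:
            reasons.append("degenerate box")
        return reasons

    if __name__ == "__main__":
        annotations = [
            Box("traffic_sign", x=0.42, y=0.02, w=0.05, h=0.06),  # "sign" in the trees
            Box("traffic_sign", x=0.55, y=0.48, w=0.04, h=0.07),  # plausible placement
        ]
        for i, box in enumerate(annotations):
            for reason in suspicious(box):
                print(f"annotation {i}: {reason}")

Checks this crude can't prove labels are right, but they catch the glaring cases for free, which makes it more worrying if data like that shipped anyway.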



