
The link you gave doesn't support your claims about Waymo; it's just speculation.

What "critical" intervention rate are you talking about? What network magically supports the required low latencies to remotely respond to an imminent accident?

How does your theory square with events like https://www.sfchronicle.com/sf/article/s-f-waymo-robotaxis-f... , where a service team had to physically go out and deal with the stuck cars? If rides were being shepherded by some giant remote-intervention team, why didn't that team handle it, and how did it manage to scale with a 10x increase in rides in a year? (They are absolutely doing hundreds of thousands of rides per month now.)

Sure, there's no doubt still a lot of human oversight going on: probably "remote interventions" of all sorts (though not tele-operation), including things like humans marking off areas of a map to avoid and pushing the update out to the fleet. The company is run by humans, after all. But to say the cars aren't particularly autonomous is deeply wrong.

I would be interested to see you dig up some of those old skeptics, plural, saying it would probably take centuries. "May take centuries," sure, I've seen such takes; they were usually backed by the assumption that getting all the way there requires full AGI, and that AGI could take who knows how long. It's worth noticing that a lot of tasks assumed to be "AGI-complete" have been falling lately. It's more useful to focus on capabilities than on vague "what even is intelligence" philosophizing.

Your parenthetical seems pretty irrelevant. First, models do work outside their training sets. Second, these companies test such scenarios all the time. You'll even note in the link I shared that Waymo cars were, at the time, programmed not to enter the freeway without a human behind the wheel, because they were still testing that. And "live test on the freeway with a human backup" isn't the first step in the testing strategy, either.



> What "critical" intervention rate are you talking about? What network magically supports the required low latencies to remotely respond to an imminent accident?

I was being vague. Waymo tests its autonomous algorithms with human safety drivers before deploying vehicles in remote-only mode. Those drivers rarely but occasionally have to yank control from the vehicle; that's a critical intervention, and the rates are low enough that riders almost never encounter a problem (though it does happen). Waymo releases that data, but it doesn't release data on "non-critical interventions," where remote operators help with basic problem solving during normal operations. That's the distinction I was making, and I didn't phrase it clearly. I'd guess those operators are intervening at least every 10-20 miles. And since those interventions always involve common-sense reasoning about some simple edge case, my claim is that the cars need that common-sense reasoning before the humans can be taken out of the loop. I'm not convinced there are even enough drivers in the world to generate the data current AI needs to solve those edge cases: things like "the fire department ordered brand-new trucks and the system can't recognize them because the training data literally doesn't exist."
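To make that concrete, here's a back-of-envelope sketch of what my guessed intervention rate implies. Every number in it (ride volume, trip length, handling time per event) is my own assumption for illustration, not Waymo data:

    # Back-of-envelope: what does "an intervention every 10-20 miles"
    # imply for remote-operator workload? All inputs are assumptions.
    rides_per_month = 200_000          # "hundreds of thousands" per month
    miles_per_ride = 8                 # assumed average trip length
    miles_per_intervention = 15        # midpoint of my 10-20 mile guess
    minutes_per_intervention = 2       # assumed handling time per event
    operator_minutes_per_month = 160 * 60   # one full-time operator

    fleet_miles = rides_per_month * miles_per_ride
    interventions = fleet_miles / miles_per_intervention
    operators = (interventions * minutes_per_intervention
                 / operator_minutes_per_month)

    print(f"{interventions:,.0f} interventions/month, "
          f"~{operators:,.0f} full-time operators")

Under those assumptions that's roughly 100,000 events a month handled by a modest operations team. The headcount isn't my objection; my objection is that each of those events is an edge case that needs common-sense reasoning.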

> First, models work outside their training sets.

This is incredibly ignorant, pure "number go up" magical thinking. Models handle simple interpolations outside their training data, but a mechanical failure is not an interpolation; it's a radically different regime that current systems must be specifically trained on. AI does not have the human ability to extrapolate causally from physical reasoning. I had never experienced a tire blowout, but I knew immediately what had gone wrong: tactile sensation told me something was off at the rear right, and basic conceptual knowledge of what a car is told me the tire must have blown. Even deep learning's strongest (reality-based) advocates acknowledge that this sort of thinking is far beyond current ANNs. Transformers would need to be trained on data from the scenario. There are mitigations that might work, like simply coming to a slow stop when a separate tire diagnostic redlines, but these could prove brittle and unreliable.
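To be clear about what I mean by that mitigation, here's a minimal sketch; the sensor fields, thresholds, and function names are all hypothetical, not any vendor's actual interface:

    from dataclasses import dataclass
    from enum import Enum, auto

    class DriveMode(Enum):
        NORMAL = auto()
        CONTROLLED_STOP = auto()   # gentle deceleration, pull over

    @dataclass
    class TireStatus:
        # Hypothetical per-wheel diagnostics; real vehicles expose
        # something similar via TPMS pressure and wheel-speed sensors.
        pressure_kpa: float
        wheel_speed_delta: float   # deviation from other wheels, m/s

    # Hypothetical redline thresholds, chosen for illustration only.
    MIN_PRESSURE_KPA = 150.0
    MAX_SPEED_DELTA = 2.0

    def select_mode(tires: list[TireStatus]) -> DriveMode:
        """Fail closed: any redlined tire diagnostic forces a slow stop.

        The rule fires on symptoms (pressure loss, wheel-speed
        mismatch) without identifying the cause. That's the kind of
        handling you can get without causal reasoning, and why it's
        brittle: it covers only the symptoms someone instrumented.
        """
        for tire in tires:
            if (tire.pressure_kpa < MIN_PRESSURE_KPA
                    or abs(tire.wheel_speed_delta) > MAX_SPEED_DELTA):
                return DriveMode.CONTROLLED_STOP
        return DriveMode.NORMAL

A human in the same situation does more than this: they judge whether stopping here is safe, whether the lane allows it, whether the symptom even matches a tire. Everything the sketch leaves out is the part that needs common sense.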

> Second, these companies test such scenarios all the time.

No, they don't! The only company I'm aware of that has tested tire blowouts is Kodiak Robotics, and that looked like a slick product demo rather than a scientific demonstration. I'm not aware of any public Waymo results.



