Hacker News

For the last decade or so, human learner drivers in the UK have had to pass a video "hazard perception test" based on their ability to recognise potential developing hazards (kids playing near the roads, a cyclist confronted by a line of parked cars, a vehicle approaching a junction onto your lane) at a very early point before their movements make their entry into the roadway inevitable.

This strikes me as something which is particularly difficult for an algorithm to process effectively (without generating lots of false positives, which also fails that segment of the test) especially based on the fairly low resolution video human users are presented with in normal test conditions.

Hope they're not going to waive that for the bots, even if they do have 360 degree vision and superior concentration and reaction times.




Couldn't you just as easily reverse that and say: bots have 360-degree vision and superior concentration and reaction times; I hope they're not going to waive that for the humans, even if humans do have the ability to recognise potential hazards at a very early point.

Surely the point is to not kill people?

So which is more important: recognising potential hazards early, or unwavering attention and superior concentration? Although I'm pretty sure it's the humans who're ahead at the moment, I'm not sure it'll always stay that way. Bots may never match humans at a hazard perception test, but if bot reaction times, vision and AI get good enough, they may not need to.


If you watch this video you can see that Google's cars do exactly this. You even see the car pick up on hand signals. https://www.youtube.com/watch?v=dk3oc1Hr62g Realistically you would have to modify the software to pass the test, for stupid reasons: for example, the vehicle might refuse to function at all without its LIDAR, insisting on slowing to a stop and pulling over where the test video wants it to continue driving.


Came here to link to this same video. It would be interesting to watch the Google algorithm run through a similar test (not the actual test video, which lacks depth information and so won't make a decent model). The chattering classes would be much enlightened.


That's true. I think of turning right at a crosswalk where some pedestrian is really anxious for their turn to walk, so they're standing right on the edge of the curb. I always pay careful attention to them and rely on my human intuition of body language, eye contact, etc, until I feel comfortable that they're going to remain on the edge of the curb instead of stepping out in front of me during my turn. A robot can't do this.


A robot should have much faster reflexes than you do. Robots should be able to compensate for people actively running or diving in front of them. The situation you describe should be very easy to compensate for.

Don't bother relying on body language or eye contact: just automatically sense the person-shaped object and, if it is close to the roadway, slow to a speed where you can avoid them easily if they step out. Assume that they will. Heck, add in a buffer so you don't bother the passengers of the car by having to slam on the brakes.
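The "slow until you could stop anyway, plus a buffer" rule can be sketched with basic stopping-distance kinematics. The reaction time, deceleration, and buffer figures below are illustrative assumptions, not real vehicle specs:

```python
import math

def safe_approach_speed(gap_m, buffer_m=1.0, reaction_s=0.1, decel_mps2=7.0):
    """Fastest speed (m/s) at which the car can still stop `buffer_m`
    short of a person-shaped object `gap_m` away, allowing for sensing
    latency. Solves stopping distance d = v*t + v^2/(2a) for v
    (positive root of the quadratic). All figures are assumptions."""
    d = max(gap_m - buffer_m, 0.0)
    a, t = decel_mps2, reaction_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# A pedestrian 5 m away with a 1 m buffer permits about 6.8 m/s (~15 mph);
# once the gap equals the buffer, the permitted speed drops to zero.
```

The short machine reaction time is what makes this workable: the same formula with a ~1.5 s human reaction time gives a much lower speed for the same gap.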



A robot should always assume the person might suddenly enter the street regardless of body language or other clues and drive at speeds that allow safe stopping if that occurs.

A person driving should do the same.

I don't see the discrepancy.


> A robot should always assume the person might suddenly enter the street regardless of body language or other clues and drive at speeds that allow safe stopping if that occurs.

> A person driving should do the same.

At what range? If you assume someone could always jump out in front of you, and that running them down will always be unacceptable, you'll be doing a handful of miles an hour whenever there are humans around. In practice I doubt too many people are in favour of taking safety to that extreme.
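To put numbers on that, a quick sweep of the same stopping-distance kinematics (d = v·t + v²/2a; the 0.1 s reaction time and 7 m/s² dry-road deceleration are illustrative assumptions) shows how hard the constraint bites at close range:

```python
import math

def stoppable_speed_mph(d, t=0.1, a=7.0):
    """Maximum speed from which a car can stop within d metres,
    assuming reaction time t (s) and braking deceleration a (m/s^2).
    Both figures are illustrative assumptions, not vehicle specs."""
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # m/s
    return v * 2.23694  # convert m/s to mph

for d in (1, 2, 5, 10, 20):
    print(f"{d:>3} m gap -> {stoppable_speed_mph(d):5.1f} mph")
```

At a one-metre gap the stoppable speed really is single-digit mph (roughly 7), while at twenty metres it is roughly 36 mph, so the answer to "at what range?" makes all the difference between a crawl and normal urban speeds.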


The discrepancy is that there are millions of years of evolution behind our ability to immediately process and react to unexpected actions around us, whereas self-driving cars will doubtless not debut to the public with nearly as much complexity: humans are not designed with affordability and profit in mind. Theoretically, what you're saying is correct, but the practical reality is likely to be a lot different. The best possible implementation isn't likely to be the one that hits the mass market.


Arguably the pedestrian should be comfortable crossing and expect you to wait for them. You could trust a robot to respect the pedestrian's rights and not get angry.


In the context of driving in the UK, that's not how it works. People will cross at a red light right after you pass, and will impatiently wait at the edge of the road, leaning forwards, ready to go. This doesn't have anything to do with respecting the pedestrian's rights.


If you turn into a junction and a pedestrian is already crossing, you are required to wait. It will be interesting to see how pedestrians change as a result of this. I would be much less likely to wait for cars if I knew for certain that they would let me cross safely. It may make urban areas more pleasant if cars are consistently safe.



