Well, very extraordinary stuff does happen and sometimes you’ve got to break the law and hope that traffic court will see your case given the circumstances.
For example, what if a light breaks right in front of you and is now stuck on red?
Happened to me once. I was a passenger, so I got out, walked to the corresponding light to check it was safe, and confirmed it was actually broken, not just on a long delay.
Self-driving sensors are better than eyes, but it is possible for there to be a situation where the car can't know if it is safe. The designers have probably thought about this more than all the armchair critics combined, of course, but even if not, the law is the lowest common denominator of backside-covering.
Yeah, and if IoT really takes off, it would be trivial for a car to use the city's API to do a health check on the light many times faster than a human could. But now we're introducing dependencies on APIs coded by God knows who into our heavily regulated, self-driving car.
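The decision logic on the car's side could be dead simple, which is part of the appeal. Here's a minimal sketch of what interpreting such a health-check response might look like; the payload shape, field names, and the 3-minute threshold are all made up for illustration, since no real city API is being referenced:

```python
# Hypothetical: decide from a city traffic-API payload whether a signal
# is likely faulty. Field names and threshold are invented assumptions.

STUCK_THRESHOLD_S = 180  # assume a normal red phase never exceeds 3 minutes


def light_seems_faulty(status: dict) -> bool:
    """Return True if the payload suggests the light is broken.

    `status` is assumed to look like:
      {"state": "red", "seconds_in_state": 240, "fault_reported": False}
    """
    if status.get("fault_reported"):
        return True  # the city already knows it's broken
    # Heuristic: a red phase lasting far longer than any normal cycle
    return (
        status.get("state") == "red"
        and status.get("seconds_in_state", 0) > STUCK_THRESHOLD_S
    )


# Fabricated example payloads:
print(light_seems_faulty({"state": "red", "seconds_in_state": 45,
                          "fault_reported": False}))   # normal red: False
print(light_seems_faulty({"state": "red", "seconds_in_state": 240,
                          "fault_reported": False}))   # stuck red: True
```

Of course, the hard part isn't this check; it's trusting the API to be up, current, and honest, which is exactly the dependency problem above.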
But yes, engineers have probably thought about this; still, there's an infinite amount of stuff that could happen which is impossible to control for. What about falling trees across the road? Dirt roads so far behind on maintenance that you'd have to drive off-road a bit?
I’m not saying I believe AIs can’t handle this, but they would need room for improvisation to do it. When an entirely new situation occurs humans make decisions, argue in court, and then the ruling becomes precedent.
Ideally the lights would have cameras and report themselves. FWIW, this was a temporary set of lights for roadworks in the middle of the Welsh countryside with little or no mobile signal.
This makes me wonder, as I've seen many ask for an "AI government" and such: will our future societies be ruled by constantly-watching, always-listening AIs that follow the strictest possible reading of the law and automatically punish anyone who breaks it in the slightest?
I remember a few years ago we were arguing that it's not a good idea to have a society where 100% of the people are punished for breaking the law 100% of the time, because then you can't have any progress. Who's to say today's version of American society is the pinnacle of a modern society and that no law should ever change?
However, for change to occur, some people first need to break the law, and begin to make that new thing a culturally accepted thing before there's a strong enough movement to support a law change.
But if the AI automatically sees your first instance of breaking the law and then automatically punishes you for it, how is that change ever going to come about?
These things won't just "work themselves out", if by that you mean anything other than people viciously fighting for their rights against an AI-totalitarian state over the next few decades. Just like AI today can sometimes mistake rifles for a helicopter, we'll need to keep "updating" the AI along the way to include these rights, and hopefully before too many people suffer in the process. Hopefully that process will be much shorter than the time it takes today between a law being passed by Congress and being struck down by the Supreme Court when found inadequate.
This is very fascinating. It would also be hard for people to persuade legislators to change the laws, because if they say "this law is intrusive in my life and puts good people in jail", they just sound like complaining criminals. What's so different between them and a guy who argues he was justified in punching the man who slept with his wife straight in the face?
There is no obvious victim in selling a drug, if the transaction is voluntary. We already accept that by allowing the sale of some drugs that have historically been consumed.
If you want a future-proof law, you should start from first principles rather than from path-dependent current laws, where selling alcohol is fine but selling heroin is a crime.