There are at least two other trends that are, IMO, more to blame: SUVs in general, for one, and the proliferation of a particular mixed-use road antipattern.
SUVs are an easy one. When an SUV hits a bicyclist, motorcyclist, or pedestrian, the unfortunate person tends to go under the vehicle rather than over the hood, as they would with a sedan. Motorcycle accident stats bear this out: the overall accident rate has gone down as people have started riding more safely, but the fatality rate has gone up as SUVs have taken over market share.
The roads issue is a bit more complicated, but basically roads need to either be slow enough to safely share space (25 mph or below), separated out so that only cars can use them (freeways), or kill an alarming number of pedestrians and bicyclists. Four lanes and a 35 MPH speed limit with infrequent crossings is pretty much going to have a body count.
I'm not trying to excuse Uber here, just trying to maybe convince urban planners to stop building things that convince people to try to cross four lanes of traffic going 35 MPH.
I'm a dreamer, but I'm dreaming that self-driving cars, or even just sensors on normal cars, will be able to record the data and make this kind of thing indisputable.
Then maybe we'll have corporate pressure aligned with social pressure to change the laws and move traffic patterns beyond whatever random luck has 'evolved', toward safe and sensible defaults.
Why do people blame SUVs when pickup trucks are some of the highest-selling vehicles in America, and vans/minivans have existed since forever and are still popular? And they’re both the same size as SUVs?
If you have five children, you're likely buying a car with seven seats. If you regularly transport thousands of pounds of stuff, you're getting a pickup truck. SUVs, on the other hand, often get purchased in lieu of a sedan; pickup trucks and minivans have use-value that often justifies their place on the road.
What sort of little bubble do you live in? Where I live people get pretty good utility out of their sport utility vehicles! We have had a ton of snow this winter, including a couple of storms where having ground clearance was key to getting around. The snow was coming down so hard that the plows couldn't keep up. I saw who got stuck and who didn't in that big storm. The interstates were a mess, secondary roads had 2' banks where plows had only gone through in one direction. Front-wheel drive cars and minivans were stuck everywhere, as were tractor trailers, RWD cars without snow tires, and so forth. Everything AWD was still moving pretty well.
Now if you're saying people with true low and high range 4wd trucks and SUVs don't go rock crawling with them that much, that's probably true, but those vehicles have a lot of utility doing things like pulling a boat trailer up a wet boat ramp, driving on a frozen lake to go ice fishing, and driving on farm roads or other unimproved surfaces.
Having lived in both places, commuting in northern New England this time of year often means that you're not driving on a surface a Silicon Valleyite would consider a "road".
Yeah, living in the Midwest I've actually encountered potholes that, if I drove my Fiat over them at any speed, would literally swallow the entire wheel and likely render the car undrivable without major repairs. Not only does an SUV/pickup have bigger tires, they're also made for driving over incredibly rough terrain. Solid axles and four-wheel drive mitigate a lot of those issues.
Statistics/stereotyping said so. If I guess that every New Englander on HN is from MA and lives east of 495 I'll be right far more often than if I pick some other place in New England.
>There are still days that it stays at home and the trucks and SUVs come out because the snow depth is above the bumper.
I've lived in all three of the northern New England states and currently live in MA because money is more important than happiness right now.
I was kind of looking forward to telling someone from MA that you don't need more than AWD car amounts of ground clearance on roads that never get more than 5" of snow between plows. Even in northern New England it's very rare for more than 6" of snow to accumulate on public roads, except maybe a few days a year, and even then only in the most rural areas.
I think it's also social pressure to some extent. In the US, cars appear to be on average about 1.5x the size (in all dimensions) of cars in Europe. If you see one on the streets in Europe between other cars it often looks comically huge. I assume that if you drive a regular-sized car in the US (or a smaller one like a Toyota Aygo), it looks comically tiny among the rest of the giant traffic.
If this piece of road is the one I’m thinking of, there’s nary a crossing area for at least a mile. Phoenix isn’t known for walkability, and this isn’t downtown Tempe.
Some of the commenters here think they can do a better job reconstructing the incident from newspaper cartoons than actual investigators who are working at the scene. You guys can just as well make up a story in which a murderous Uber robot chased a pedestrian off of the sidewalk and onto the road and then intentionally ran her over.
Why are we resorting to reconstructing a scene when Uber's Robot caught the whole thing on camera and LIDAR? The police statement seems quite premature, especially when the NTSB is running an investigation into the accident.
+1; Time will tell what happened as there should be enough data onboard. These systems are essentially in training mode and new scenarios will need to be accounted for.
I am curious as to why the driver didn't react. I wonder if the system keeps track of the driver as well. I would imagine it has to be mind-numbing to sit behind a wheel for hours on end and not do anything... So in this situation, how attentive and alert really was the driver, who is supposed to be the fail-safe mechanism?
I think part of the problem is that almost all of the news articles have painted a picture of the fault lying with the biker. And there seems to be no one defending her; it's ridiculous that the news articles are already one-sided here, considering that this is a brand-spanking-new type of thing that has happened on this planet.
That being said, I'm totally open to this being the fault of either Uber or the woman, but there are too many questions in my head to simply accept that it was the fault of the woman.
Are we saying that at this point we're going to let the initial police statement win because they are "experts" at driving? Are they also experts at the software in SDVs? I think at this point we shouldn't assume either story - but at the same time we can't let Uber, the police, or governments cover this story up for the sake of money and politics.
I think there are some pretty massive implications that should come from this. Even if a human driver may very well have killed this person too (which I don't personally believe), don't we need to take a minute and ask a few questions about why the vehicle reportedly didn't even slow down after the hit? Why is the dent on the RIGHT-hand side of the car (.2 seconds seems false)? Would a human driver have slowed a lot earlier? Was the pedestrian/biker expecting the car to slow because she clearly had a hand signal up, seeing a fake driver in the car?
There are too many questions and I hope that this specific story is treated with the utmost careful consideration. I for one can't seem to let it go in my head.
Someone died and it very well could have been because of our collective ego that we can accomplish this (SDVs) at this point in our history. We'll never hear from this woman again.
This is the first report I've seen that indicates the Uber AV was in the right lane. Remember that the victim was crossing from left to right, and that all of the visible damage on the AV is on the right-side bumper. It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather. I would think most humans would be able to at least hit the brakes, if not completely avoid a collision.
Please stop spreading rumors and speculations. The graphic is an illustration and does not necessarily place the car in the actual lane. It clearly contradicts the police chief’s statement that the victim walked right into traffic from the center median. It even contradicts the left arrow pointing to the impact location in the same picture.
Please don't accuse me or other commenters of spreading "rumors and speculations", without evidence or logical arguments.
I didn't state a bunch of claims, I linked to a NYT published article. If it turns out that the NYT depicted the incorrect lane, then I expect we'll see a correction. But they published this graphic staked on their reputation for accuracy, not on just "rumors".
Moreover, it does not contradict what the police chief said; here is her initial and only interview:
> Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic and was struck by a self-driving Uber operating in autonomous mode.
> “The driver said it was like a flash, the person walked out in front of them,” said Sylvia Moir, police chief in Tempe, Ariz., the location for the first pedestrian fatality involving a self-driving car. “His first alert to the collision was the sound of the collision.”
The police chief also said a number of things that have subsequently been changed. In the Chronicle article, she's reported to have said that the Uber was driving 38 in a 35mph zone. The NYT graphic and other stories from today have said it was a 45mph zone.
"It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather. I would think most humans would be able to at least hit the brakes, if not completely avoid a collision."
Imagining an approximate version of the scenario and thinking that most people would be able to do what the known unimpaired driver could not is literal speculation. You are reading more into the graphic than can be reasonably derived from the graphic, and the -text- of the graphic is itself explicitly uncertain about the victim's location before and after the collision. Your comment expresses more credulity than the very source you linked to.
If you want to speculate that the Uber driver and police are covering up negligence, that's fine. Trying to claim some sort of logical high ground in a known uncertain scenario is not.
The parent didn't say that he is imagining anything. Read the post again.
They are saying that based on the information in the NYT article it does not seem credible that the woman came out of nowhere, that there was no time to react, and that the first thing the driver noticed was the sound of hitting the woman before even seeing her. The driver is a party in this story and may well be at fault for causing her death.
> It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react, in a location with good street lighting and with clear weather
Explaining the imagining is the salient issue.
> The parent didn't say that he is imagining anything
Splitting hairs over the phrasing is not compelling. The poster was proposing a scenario as the most reasonable interpretation while couching it to minimize criticism (the equivalent of the "just sayin" trope).
The premise is flawed. It never sounded reasonable to assume the pedestrian did not see the vehicle at all. I can imagine a perfectly reasonable scenario where the pedestrian tried (and failed) to clear the vehicular path before being struck on the attempted destination side of the road.
Not sure how it applies. But anyway, in my defense, I didn't say that I was imagining scenarios. I said the exact opposite, that I could not (or, it was "very hard to") imagine a scenario in which someone crossing left-to-right is mostly unseen by a right-lane driver. By saying I can't imagine, I'm limiting myself to my own experience and observations.
Literally every discussion of note involves reading more into the original article, so yes, I guess you have me there, I am speculating. I didn't realize it was against the rules of HN discussion to bring in evidence external to the OP and state our premises?
Is it speculation to point out that the police and OP misidentified the gender and name of the driver (it's Rafaela Vasquez, not Rafael)? Or that newer articles have disputed the 35 mph limit?
edit: I'm not reading too much into the graphic by stating explicitly what it shows. I'm making the assumption that the NYT designer isn't being loose on the facts here and that the Uber vehicle was in the right lane.
So why can't I claim a logical basis based on the other agreed facts, such as the victim being a woman who is walking her bike across the street, and our general observation of how fast people are able to cross a lane of traffic? What set of physical laws and constants should we be using here?
"If you want to speculate that the Uber driver and police are covering up negligence, that's fine" - the conclusion of the comment you're replying to.
My beef is with 'Please don't accuse me or other commenters of spreading "rumors and speculations", without evidence or logical arguments.' The problem is fooling ourselves into being more certain than we can be. While this is common human behavior, especially in emotionally fraught situations, as people attempting to engage in reasonable discussion on a discussion board, we need to do better. Acknowledging our biases and limited perspective while researching and before replying improves the discussion for everyone.
addendum: what everyone agrees and states is uncertain - where on the road the victim was hit. Another area of uncertainty among us non-witnesses is the immediate traffic at the time of collision.
"The problem is fooling ourselves into being more certain than we can be. "
I want to thank you for saying something I have been having trouble putting into words. I'm not sure if it is me or whether something has changed recently with discourse on the internet, but I am coming across more and more people speaking as though they are an authority on a subject while simultaneously posting completely incorrect information.
I can barely even stand to read threads here anymore, if the topic is something I'm very knowledgeable about. So many intermediate-level folks making way overconfident assertions that are way off base, or completely unaware of the state of the art. And it's virtually guaranteed that if I was foolish enough to spend time replying with corrections, they'd try to waste my whole afternoon arguing with me. I wish some of these overconfident statement-makers understood they and this community would be better off if they asked questions instead.
As someone who does exactly what you recommend: most of my questions go unanswered. It seems people will perform the actions which bring them attention.
If you are an expert/outlier for a given topic, wouldn't you generally expect (in an open forum) that the majority of people speaking/opining on the topic will sound dumb/basic relative to what you know?
Exactly. Their confidence not only helps to spread misinformation, but also signals to me that if I offer corrections, they'll argue with me down to the last pedantic toe-hold they can win. No thank you.
That's fair. I interpreted the other commenter as saying that what I linked to was just rumors and speculation. I don't see the NYT graphic (even with its caveats) as being just rumors, but yes, I'm clearly engaged in speculation and should be called out when I'm out of line. I only disagreed that my speculation was based on rumor (e.g. "Tempe Police have a history of mistreating and oppressing homeless people, so it's likely they are overstating factors favorable to Uber")
OK, but I already explained that my reaction was to the user describing my comment as "rumors". I thought it was an unfair accusation and wanted more "evidence", e.g. updated reports that contradicted the NYT graphic. I wasn't aspiring to this being any more than a usual discussion of news events.
But I agree, yes, it is easy to fool ourselves into being more certain than justified. I like having that pointed out during HN discussions :)
This is a hard conversation to balance. You were right to state that OP added more to the scenario than is known. But people tend to react defensively (which often turns into offense) when they’re personally called out as you did there.
The nonviolent thing to do, I think, is to simply describe what is actually known and point out how the accounts differ.
For what it’s worth, none of us were there and it sounds like definitive information is lacking. My hope is that this will be put to valuable use ensuring safer driving for everyone (autonomous or not).
News articles frequently post artistic renditions of far-away star systems but it would obviously be foolish to try draw any scientific conclusions from those graphics.
As far as I know, there hasn't been any official acknowledgement that the accident occurred in the right lane (or any specific lane for that matter). In fact, IIRC, there was a mention of the accident happening around 100 yards from a crosswalk, whereas the stretch of road depicted in the picture is around 200 ft away. The graphic also uses weasel words like "somewhere in this area". I mean, somewhere where? 100 ft south? 10 ft to the left? Only someone who has seen the actual video would be able to make an informed comment, and so far the police have not released it to the public.
I'm not convinced this picture can be reliably used for armchair forensics.
Agreed. As I mentioned in an earlier thread, it would make a lot more sense if the pedestrian was crossing from the right side (https://news.ycombinator.com/item?id=16625829). But it's all speculation until we see the video, or at least get more detailed and reliable reports.
>I didn't state a bunch of claims, I linked to a NYT published article. If it turns out that the NYT depicted the incorrect lane, then I expect we'll see a correction. But they published this graphic staked on their reputation for accuracy, not on just "rumors".
My experience with the NYT (and basically every news outlet) on technical issues that I'm familiar with is that they are not particularly concerned with getting the details 100% correct.
I don't expect better from them in the discipline of auto accident reconstruction.
She didn't just cross all those lanes: since the damage is to the right side, she crossed in front of the vehicle. And yet it never braked.
That's a failure on every level, from prediction down to sheer radar obstacle detection. You don't hit things right in front of you; mid-range vehicles you can buy today already have sensors that emergency-brake when an obstruction is ahead and the driver doesn't react.
Right, so now she's crossing three lanes. So that's about 9 meters. At 1-3 meters per second for her, and 17 meters/second for the Uber SUV, it should have seen her 3-9 seconds (50-150 meters) before impact.
That's a pretty clear fail.
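The back-of-envelope numbers above can be checked in a few lines. This is only a sketch; the lane width, walking speed, and vehicle speed are the comment's assumptions, not figures from the investigation:

```python
# Rough check of the crossing-time arithmetic: 3 lanes at ~3 m each,
# walking speeds of 1-3 m/s, vehicle at ~17 m/s (roughly 38 mph).
crossing_dist = 3 * 3.0   # meters to cross three lanes
vehicle_speed = 17.0      # m/s

for walk_speed in (1.0, 3.0):
    t_cross = crossing_dist / walk_speed   # seconds the pedestrian is in the road
    headway = vehicle_speed * t_cross      # distance the car covers in that time
    print(f"at {walk_speed} m/s: exposed for {t_cross:.0f} s, car travels {headway:.0f} m")
# at 1.0 m/s: exposed for 9 s, car travels 153 m
# at 3.0 m/s: exposed for 3 s, car travels 51 m
```

So even at a brisk 3 m/s, under these assumptions the car would have been roughly 50 m away when she entered the roadway.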
I wonder what the Uber's field of view is. At what distance should it have detected stuff about 9 meters off track?
But she did likely walk out from landscaping. So perhaps she initially "looked" like waving branches or whatever. And there must be some mechanism to suppress such false positives, or vehicles would brake when it's windy.
And yet the police were so quick to say that camera footage indicates that Uber was likely not at fault. It was so quick that it makes me inclined to think it's pretty obvious. However, after the SF Chronicle published its exclusive interview with Tempe's chief of police, the Tempe Police PR person had to issue a statement that, "Tempe Police Department does not determine fault in vehicular collisions."
I'm not one to believe in conspiracies or to automatically suspect shadowy influence, so I want to believe that the camera footage seems to argue that this was an unavoidable accident. But the evidence released so far argues against that, regardless of whether the victim's crossing was illegal or not. And why are the police making a judgment on this so soon in the first place, especially when the decision will be made by county law officials, and ostensibly after referring to Uber's full suite of sensor data?
I don't agree with the sentiment that it happens all the time (but don't disagree that it's happened more than a few times), but I do think police are much less likely to make something up if they know (and have seen) the camera evidence. No point in saying something (the day after the incident) that can so easily be contradicted later.
The police in Georgetown, about half an hour north of Austin, went with their gut in one murder case - "it has to be someone the victim knew" - and ignored physical evidence at the scene and the eyewitness testimony of the murder victim's son. Then the prosecutor kept working against the accused for the next 25 years and suppressed evidence.
https://www.texasmonthly.com/politics/the-innocent-man-part-...
> And yet the police were so quick to say that camera footage indicates that Uber was likely not at fault.
I was surprised at that -- it doesn't seem to me that the police would have the technical knowledge of Uber's hardware and algorithms to categorically state that, let alone so quickly.
Well, if you saw that video and it really did show a < 1 second time between the pedestrian becoming visible and being hit, then the autonomous systems don't matter at all. That accident is physically impossible to prevent in that car.
So whether or not the police understand self-driving systems is not relevant to the fundamental physics problem. The key question is how long was the ped visible.
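For scale, here is a rough sketch of why a sub-second visibility window makes stopping impossible at that speed. The 1.5 s reaction time and 0.7 friction coefficient are textbook-style assumptions, not data from this case:

```python
MPH_TO_MS = 0.44704
v = 40 * MPH_TO_MS                  # ~17.9 m/s
t_react = 1.5                       # assumed human reaction time, s
mu, g = 0.7, 9.81                   # assumed road friction, gravity

braking = v**2 / (2 * mu * g)       # distance covered while actually braking
total_stop = v * t_react + braking  # reaction distance + braking distance

window = v * 1.0                    # ground covered during a 1 s visibility window
print(f"needs ~{total_stop:.0f} m to stop, but only ~{window:.0f} m of warning")
```

Under those assumptions the car needs roughly 50 m to stop but covers only about 18 m in one second, so a sub-second window leaves no physical possibility of stopping, regardless of who or what is driving.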
By the time that pedestrian has even traversed a single lane, the LIDAR on top has generated gigabytes of point cloud data showing her moving, assuming constant velocity of both car and ped, on a straightforward collision path.
I don't see how from a single forward-looking camera you want to deduce this crash was "physically impossible". That isn't even half of what we expect from human drivers. We expect them to swivel their head.
"The pedestrian becoming visible" would be to the video cameras. If the vehicle has other sensors (LIDAR, RADAR, etc), how sooner/later would the pedestrian be visible to these sensors? Would it be possible to judge that by looking solely at the dashcam? Would the sensors ignore the pedestrian as a false positive in these circumstances?
I don't understand that comment. "Sensor fusion" is the integration of data from two or more different sensors. It doesn't matter what those sensors are. They don't even have to be different types of sensors.
It may well be that the Uber cars only have LIDAR and optical, but that doesn't have anything to do with sensor fusion.
An accident may be physically impossible to prevent, but that's not the only standard. Initial reports (from the police) say that Uber AV did not even brake. Is not braking at all within a 1 second window the performance touted by AV's detection and braking system?
I don't see how "ped" is necessarily an offensive abbreviation (though it's one that normally isn't used), so I don't think you're justified in believing that the commenter is trying to dehumanize the victim.
As a non-native speaker I'm reacting to the use of "ped" as well, paired with the obvious victim-blaming that seems to be thrown around a lot, though to a much lesser extent here than in other forums discussing this issue.
Driving is a sensitive subject; people take it personally. Trying to make generalizations about drivers is just going to land you in a world of trouble, almost worse than profanity.
You're upset for someone using the word "ped" for pedestrian while using a word that's got a dictionary definition of being vulgar and offensive for women? I have to say I'm rather confused. I feel I'm being trolled.
Upon reflection, I wonder where that British term comes from. Maybe it basically means "woman". Reflecting the slur that women are stupid. So it's still sexist, albeit not as vulgar as the US term.
Bah... my intent was certainly to be vulgar, and I stand by that. But if it's going to be interpreted as specifically offensive towards women (an implication not present in UK English) then I'll certainly apologise to all that were thus offended, because that was not part of the plan.
Looks like it's too late to delete it, so we're stuck with it forever.
I'm a male, but I do think it's an offensive term for women in the U.S., despite its more casual usage in the UK. The average female reader would be inclined to think that such slurs -- and their anti-female sentiment -- were considered OK here.
It's too late to delete but I think it's worth it (for future readers) to say that offense isn't intended, and that that word is frowned upon by most HN users and mods.
I fall more into the "AZ wants the SDC companies in state and having a death due to a SDC would hamper that with the public" camp. Better to find the SDC not at fault and save face.
If it falls to a question of stopping distances, self driving vehicles are not really practically different from any other car, save perhaps for the ability to reduce the reaction time phase of stopping. I can certainly imagine there will be scenarios in which experienced traffic cops can make a reasonable determination, especially if the speed, distances, locations etc of the car and pedestrian are known.
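To put a number on that "reduce the reaction time phase" point, a quick sketch with assumed values (0.7 friction coefficient; both reaction times are illustrative, not measured):

```python
MPH_TO_MS = 0.44704
v = 40 * MPH_TO_MS                 # 40 mph in m/s
braking = v**2 / (2 * 0.7 * 9.81)  # physical braking distance, same for both drivers

for driver, t_react in [("human, ~1.5 s reaction", 1.5),
                        ("computer, ~0.2 s reaction", 0.2)]:
    print(f"{driver}: total stopping distance ~{v * t_react + braking:.0f} m")
```

Shaving the reaction phase from ~1.5 s to ~0.2 s cuts total stopping distance from roughly 50 m to roughly 27 m here; the braking phase itself is pure physics and identical for both.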
It's all speculation until we get a release of the actual video footage from the car. This is ridiculous. The NTSB/police need to release the actual footage to dispel speculation. We have actual evidence of exactly what happened and yet no one outside of the investigators and Uber can see it.
I do think discussion is warranted because the Tempe Police have gone out of their way to share details and preliminary judgments about the case. Some of these details have been inaccurate, both for and against Uber's favor. But Tempe police officials are already going out of their way to downplay what their chief claimed:
> Glover downplayed Moir's statements, saying some were taken "out of context" by the Chronicle. The chief disagrees with the Chronicle's headline, "Tempe police chief says early probe shows no fault by Uber," Glover said.
edit: I agree with other commenters that the NYT's labeling is a little too vague. My assumption is that the NYT wouldn't arbitrarily show the Uber vehicle in the far-right lane unless there was some factual basis. If it turns out the NYT did make an arbitrary choice, and Uber AV was actually in the left lane, then NYT needs to be called out to make a correction.
Frustratingly, so many stories make no mention of the lane. This is the only one I've found so far, and it's from the local news-weekly:
> The Volvo was in the lane nearest the curb, about 100 yards south of Curry Road, and going about 40 mph at the time of the collision. Initial evidence shows the vehicle didn't brake "significantly" before the impact.
My assumption is that the writer thinks "lane nearest the curb" is unambiguous to the average reader, which would mean the right-most lane. Because... why would the average reader (who isn't looking at the accident scene right now) assume that the left-most lane on any road is "nearest the curb"?
When someone thinks of an average road/street, they (probably) think of opposing lanes of traffic. Curbside would be right-side. Yes, there is a curb on the left-side of the street, but it is on the far side of the opposing lane.
Example: if I tell you to drive two blocks down a "normal" street and then pull over to the curb, you will most likely not pull over to the curb on the opposite side of the street.
Also, isn't "curbside" used when talking about valet services, which generally have you pull over to a curb on the right when meeting the valet? (I don't know, I don't have the opportunity very often)
If the car was really in the right lane, I don't think an autonomous vehicle has any excuse for not reacting in this situation. A real person might not see the crossing pedestrian in bad light, but a self-driving car should have plenty of sensors capable of detecting a pedestrian crossing three lanes until they're actually in the fourth lane where the car was driving.
I think that's a fair question, and that includes the text that says "Body seen in this area". I had tweeted at the author about it but hadn't gotten a reply (note: this may be because they don't see/reply to tweets from nobodies like me).
The distances, as depicted, are a little strange. If the Uber AV hit the victim at 40 mph, you would expect her body to have ended up farther down the road than the area marked "Body seen in this area".
1. Wouldn't the pedestrian have to walk through said car then? 2. Quite plausible, especially trying to heft what looks like a pretty heavily loaded bike onto a curb.
Random note: drivers, when you are on a two-plus lane road, do not randomly stop (when there is no designated yield) to let pedestrians cross or other drivers turn. Cars in the other lanes may (probably, even) not stop, causing an accident.
It probably is completely unrelated to this event, but it’s an anti-pattern by well-meaning drivers.
Could this be a demonstration of what people talk about with regards to partial autonomy being a detriment to human attention?
Definitely. By the time the human realises something is wrong and has to take control, it's already too late.
Incidentally, this is also why autopilots in planes have been more successful --- when something goes wrong high up in the sky, there's still relatively much time to assess the situation and react. With cars, even if the car suddenly told the driver to take over, the driver would essentially have to already be fully attentive to the situation in order to make a decision in time. It's the difference between seconds to a minute in a plane vs. fractions of a second in a car.
> The NYT just published a graphic that purports to show the location of the vehicle and the victim:
It has a vague location for where the victim's body ended up, yes, but I don't interpret that graphic as purporting to show the actual location of the vehicle or victim at the time of the crash.
> This is the first report I've seen that indicates the Uber AV was in the right lane.
I'm not sure I'd go that far. I think the arrow represents the direction of travel, but I would assume they just placed it near the body, and aren't explicitly trying to suggest which lane the car was in. And the notation "somewhere in this area", in my eyes, confirms this.
Further, if they do know which lane it was in, that would be an excellent bit of investigative journalism that no one else has reported, and you'd expect it to be in the story, if not the actual lead. It's not, so...
I think the police are incorrect in saying that the woman walked out of the median. If you look at the photo (in the NYT) with the bike and car, it shows (and is captioned as) the car damaged on the right side and the bike on the sidewalk. It seems most likely that the woman stepped out from behind the trees on the right side of the street.
If people are actually correct about the median thing, then I can't imagine how this is not a colossal failure of the software. But again, it makes way more sense that she walked out from the right side of the road.
> The NYT just published a graphic that purports to show the location of the vehicle and the victim:
Emphasis on ”purports”. The NYT graphic doesn’t indicate the vehicle was in the right lane, nor does it indicate a precise location for Elaine. I don’t think you have bad intentions here, but extrapolating a hypothesis from this graphic seems unfounded.
There isn’t enough information to go on to develop reasonable hypotheses from any angle, unfortunately. The investigation needs to complete and footage needs to be released.
> It's very hard to imagine a scenario in which a 49-year-old woman walking her bike manages to cross 3 lanes of traffic so quickly that the Uber AV, moving at 40 mph, had no time to react [...]
(Pure speculation follows)
I wonder if it's possible that the problem is that she was moving too slowly for it, not too quickly?
If for some reason it could not get a good read on the component of her velocity parallel to the lane, it might mistake her for another vehicle moving at similar speed to the Uber, and then see her transverse velocity component as being due to normal drift within the lane. It would read the situation as a normal passing situation, not someone about to enter its lane.
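To make that geometry concrete, here is a minimal sketch (my own illustration, not anything from Uber's actual stack; the function name and the 3 ft/s crossing speed are assumptions) of splitting a tracked object's velocity into lane-parallel and lane-transverse components:

```python
import math

def decompose_velocity(vx, vy, lane_heading_rad):
    """Split an object's velocity (vx, vy) into components parallel
    and transverse to a lane heading (angle from the x-axis)."""
    parallel = vx * math.cos(lane_heading_rad) + vy * math.sin(lane_heading_rad)
    transverse = -vx * math.sin(lane_heading_rad) + vy * math.cos(lane_heading_rad)
    return parallel, transverse

# Lane runs along the x-axis; a pedestrian crosses at 3 ft/s with no
# forward motion. Her transverse speed dominates -- but if the tracker
# mis-estimates her parallel speed, a 3 ft/s sideways motion looks just
# like the drift of a vehicle wandering within its own lane.
par, trans = decompose_velocity(0.0, 3.0, 0.0)
print(par, trans)  # 0.0 3.0
```

The point of the sketch: the classification "passing vehicle drifting in lane" and "pedestrian crossing lanes" can produce nearly identical transverse components, so an error in the parallel estimate alone could flip the interpretation.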
The OP article (from Bloomberg, not the NYT) says that the woman moved from the median into traffic:
> The Uber had a forward-facing video recorder, which showed the woman was walking a bike at about 10 p.m. and moved into traffic from a dark center median. "It’s very clear it would have been difficult to avoid this collision in any kind of mode,” Sylvia Moir, the police chief in Tempe, Arizona, told the San Francisco Chronicle.
It's not hard to imagine at all. Did you watch the embedded video? The pedestrian is almost invisible. On my first viewing I didn't see her at all. On repeat viewing, and with foreknowledge, I could just make out her sneaker by reflected light, and her torso in silhouette. She has no reflective gear on her body or her bicycle, and the street lighting is nonexistent where she chose to cross (between street lights, and not on a marked crosswalk or corner). I'm pretty sure I would have hit this woman under these conditions, as horrifying as that is.
I'm pretty amazed the SDV didn't "see" her, and yet the photograph shows the SDV and the bicycle in the same frame. I mean, yes, I'm sure it can stop from 38mph in, say, 60 feet under perfect conditions, but there's no way a human reacted to the collision and slammed on the brakes or hit an emergency stop button quickly enough to bring it to a stop with the bike in frame like that. I wonder if the vehicle is programmed to do a full panic stop on collision -- which, in and of itself seems a bit dicey, in terms of determining when to slam on the brakes (bird strike? dog? deer?)
“It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”
Maybe she stepped back into traffic after crossing? Perhaps the car thought she would be on the sidewalk by the time it arrived at that position but she stepped back or fell back into the lane.
The police are comparing this to what would have happened with a human driver. But AI cars are supposed to have a larger range of view. I personally feel a Waymo car would probably have noticed and avoided the woman. And with lidar, the darkness is no excuse, as it wouldn't matter.
I mean, there was a human in the car too, and the human's statement was that the first sign that there might be a collision was after the collision occurred.
Right, because "I was bored out of my mind and wasn't paying attention until an actual crash happened" is a statement that the human would have totally made if that were the case.
I haven't seen anyone post this yet, but I highly suspect that the fact she was pushing a bicycle impacted the machine-learning-driven AI's ability to determine she was a pedestrian. This reminds me of the kangaroos messing up the AV programs being tested in Australia. A human paying attention may have been wary of a person pushing a bike in the shadows, but for all we know the algorithm thought she was a bush or something because her profile was altered by the bike. There is a lot to speculate about, but machine learning isn't as smart as we humans tend to believe it is, and it is nowhere near the form of general intelligence required to respond appropriately to all of its inputs the way an attentive human could.
Weird scenarios that can't be predicted are important to consider. Waymo presentations like to talk about a situation they ran into where the car needed to stop because it encountered an old woman in a wheelchair with a broom chasing a turkey. As I understand it, most of these companies sensibly say "unknown weird thing in/near road means stop the car."
It may depend on whether the test cars are used on routes with lots of homeless people. Near the Mountain View test center, I know of a handful of regularly seen homeless people on loaded bikes in downtown Los Altos/Mountain View, but it's nowhere near the level of San Francisco/San Jose/Oakland/Los Angeles/Anaheim/San Diego.
> As I understand it, most of these companies sensibly say "unknown weird thing in/near road means stop the car."
That's not entirely true. A Tesla, for instance, won't stop if it detects a stationary object on the highway, assuming it's a false positive of the algorithm.
Given that another comment says she "suddenly" crossed three lanes of traffic and was on the far side of the car when she was hit, my suspicion is that the car thought she was riding the bike. Of course all of us computers know that bikes always go over 5 mph so surely she'd make it across. Why brake?
I want safe self-driving cars, when they are safe. But Uber's clearly-established cavalier attitude toward human beings apparently can't be trusted with self-driving cars. I'm disappointed that politicians thought they could.
I don't understand this line of reasoning at all, but it's getting repeated a lot on HN. Is the AI expected to hit cyclists, but not pedestrians? Shouldn't it consider any moving object to be a hazard?
It may not have had good enough information to tell that she was a cyclist at all, and instead interpreted her to be something static/non-human altogether (in parent's example, a bush). Of course, without seeing the data its sensors gathered, and what that data registered as, it's all speculation. The problem is with misidentification, not bad priorities.
I wouldn't be surprised if there was some kind of classification problem involved. But wouldn't the AV by default treat a solid object on the road as a hazard to brake for? There's no possible way to specifically train for every variation of thing that could be encountered on a city street (imagine Halloween, for starters).
It was a 3-lane highway. It wouldn't really make sense to brake if the system detected an immobile object in a different lane. The problem is more that the system failed to detect the woman moving between lanes.
But the system has the ability to detect whether an object is moving in parallel (i.e. within its own lane) vs. across lanes, right? That would seem to be fundamentally necessary in everyday conditions, e.g. day traffic in which cars are switching/merging lanes.
If the problem is that the system didn't detect the woman moving between lanes, then that seemingly contradicts the police statement that the victim moved quickly enough to surprise the AV and its driver.
I don't see how this is relevant. The car should avoid hitting ANYTHING. It shouldn't have to recognise what the object is if it's in the path of the car.
When I was getting my learner's permit I was practicing on a curvy mountain road. I saw a person with a camera phone on the "drop off" side of the road, taking a picture of what, from my angle, would've been a boring cliff face. But I knew that a camera-phone photographer was probably actually photographing a human, who would likely be on the other side of the blind corner. So I slowed down, despite my instructor chastising me for doing so. When we came around and there was a person in the road she asked, "How did you know?" If a self driving vehicle can't make predictions about "irrational" pedestrian behaviour like this and slow down, it shouldn't be on the road.
Your own story disproves your conclusion. Your driving instructor, like many (many) other drivers, didn't make the inference that you did. People don't drive well. Cars are dangerous.
The criterion (really the only criterion) for whether or not automatic vehicles "should be on the road" is whether or not they are safer than the alternative. And that certainly doesn't include "being as good a driver as jdavis703 was in this particular anecdote".
Being engaged in a particular role relative to a situation can (I believe) alter one's capacity to perceive and respond to it.
This isn't to say that for definite the instructor would have had the insight described in the parent post, but it is to say that you can't rule this out just based on the information that as a passenger they didn't have the insight.
> This isn't to say that for definite the instructor would have had the insight described in the parent post, but it is to say that you can't rule this out just based on the information that as a passenger they didn't have the insight.
That's a senseless digression though. Again, the criterion to use when deciding whether autonomous vehicles are safe is whether autonomous vehicles are measurably as safe as human drivers and very much not whether we can "rule out" the possibility that the machine might be subject to failure modes that human driver are already known to have anyway.
For every scenario and hypothetical like this one you can imagine where an autonomous vehicle would fail, I can come up with an equally hypothetical reason why they're better (hell, just read any of the media coverage of them). Both arguments are meaningless without numbers and analysis, c.f. this very article we're discussing.
In terms of your agenda, it might make no sense. It just irks me when I see simplistic assumptions about human information processing that don't take the "situatedness" of the processor into account.
I have no views, and did not attempt to comment, on machine information processing in this instance. I just wanted to point out that you can't simply go from the instructor's actually missing the inference as a passenger to the conclusion that they would have missed it as a driver. That's it :)
The self driving accidents might be the easiest for humans to avoid, but if accident rates are lower for self-driving cars, it's still worth it. Let's look at overall numbers and severity, not how avoidable it would be for a human.
I am a massive proponent of self-driving vehicles for exactly the same reasons you indicated. However, I do have a huge reservation: this tech can only peak if it is the only thing on the road.
My outlook on the transition period is not very optimistic. Humans are good at anticipating crazy behavior. Fatalities increasing (before they eventually decrease) is currently just as likely as the promised miracle in my mind - we don't have enough data to sway my mind either direction just yet.
Probably not. But I also failed my first driving test for going too slow (I was within the minimum and maximum speed range, but new drivers going slow is, I guess, a sign of an unconfident driver).
It’s been a while since I took a driving test, but I recall the test to be more than staying within the speed limit.
There was also staying in between the lines, following road signs, and integrating with traffic. Do you think you might have failed not because you weren’t pegged at the speed limit, but instead because you weren’t capable of driving safely?
I wasn’t there, so it could’ve been because you really were just driving too slowly. But maybe it was more than that.
I wouldn't particularly blame your instructor; a lot of people would have reacted the same way, which to me proves it's not just AI: the roads and the driving themselves are unsafe. As pedestrians, we learn what's dangerous and what we can't do, and as drivers, we assume pedestrian behavior from our own knowledge. But both AI and humans are subject to occasional 'irrational behavior'. Humans are not safe in that situation either. Even if in some cases a human might have an advantage over AI, AI has many other advantages over humans. It would be a big mistake to reject AI just because there are situations where its performance can be inferior to some humans'.
I agree there's still a lot of work to be done, but my point is that the problem is the driving system itself, and a lot of people die because of it. AI can't directly change that, but if it can still do better than humans (in a few years), then it's worth it.
A proper (e.g. non-Uber) self driving car would never take a blind corner at a speed that doesn't allow it to come to a complete stop within its vision range.
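That constraint is just stopping-distance kinematics. A rough back-of-the-envelope sketch (the deceleration and reaction-delay figures here are assumptions for illustration, not anyone's actual parameters): solving d = v·t + v²/(2a) for v gives the highest speed from which the car can still stop within its sight distance.

```python
import math

def max_safe_speed_mph(sight_distance_ft, decel_ftps2=15.0, reaction_s=0.5):
    """Highest speed (mph) from which a car can come to a complete stop
    within sight_distance_ft, given a braking deceleration (ft/s^2) and
    a fixed reaction delay. Solves d = v*t + v^2/(2a) for v."""
    a, d, t = decel_ftps2, sight_distance_ft, reaction_s
    # Positive root of the quadratic v^2 + 2*a*t*v - 2*a*d = 0
    v_ftps = -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)
    return v_ftps * 3600 / 5280  # ft/s -> mph

print(round(max_safe_speed_mph(100), 1))  # ~32.6: a 100 ft sight line caps safe speed near 33 mph
print(round(max_safe_speed_mph(250), 1))  # a longer sight line permits proportionally higher speed
```

With only ~100 ft of visibility around a corner, anything much above 30 mph means the car physically cannot stop in time, regardless of how good its perception is.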
As a side note, I find - as always - this part disturbing:
>The driver, Rafael Vasquez, 44, served time in prison for armed robbery and other charges in the early 2000s, according to Arizona prison and Maricopa County Superior Court records.
He served time in prison for armed robbery more than 15 years ago; what relevance does this have?
Presumably he has a valid driving license issued by the State, and was not under the effect of alcohol or drugs.
Having committed armed robbery seemingly has no connection (it isn't as if he was convicted of killing someone while driving a car or something like that), and even if that were the case, some State must have issued (or renewed) his driving license, meaning he was legally authorized to drive the car.
>In the US, once you commit a crime, it'll follow you for the rest of your life, even after you paid your time and money.
I know, and understand how this (right or wrong as it might be[1]) could be of use to the police or the judiciary system; what I am highlighting is just how gratuitous it is in a piece of news.
I imagine they're trying to imply that someone who was convicted of a serious crime could still be an unreliable, irresponsible, or untrustworthy person.
It's another question whether it's fair to assume that. I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.
>I imagine they're trying to imply that someone who was convicted of a serious crime could still be an unreliable, irresponsible, or untrustworthy person.
Sure, the whole point being that it is meaningless and IMHO disturbing to imply that.
Not that I am familiar with bank robbers, and of course I have no idea about the specific crime for which the driver was earlier convicted, but I believe that bank robbers actually need to be rather punctual, have planning capabilities, and usually be exceptionally good drivers; at least this is what Hollywood usually shows us.
>I don't think it is, though personally I think it's reasonable to take the information into account as long as you restrain yourself from jumping to any conclusions.
I wonder how you take that into account without drawing any conclusion or letting it influence your opinion; this kind of info is IMHO either relevant or it is gratuitous.
It is a car accident where the automated system failed and the driver who was supposed to take over in case of such a failure didn't or couldn't. The criminal past of the guy has no relevance; it is more likely that the whole thing happened too fast, or that he was distracted, unless and until it is established that the accident was caused voluntarily or by "voluntary inaction".
What if the driver was differently "marked" by society?
Like - say - known to belong to the Communist Party or to the Neo-Nazis?
Or if he was mentioned as being (choose one) gay, transgender, black, latino, illegally immigrated?
What kind of added info is that in the context of a car accident?
Call me crazy but isn't a big part of the promise of these systems supposed to be they see things humans wouldn't or couldn't? If they're just as surprised as the human behind the wheel, that feels like a problem.
Yes, to me, this is the most relevant part of the posted article:
> “It’s possible that Uber’s automated driving system did not detect the pedestrian, did not classify her as a pedestrian, or did not predict her departure from the median,” Smith said in an email. “I don’t know whether these steps occurred too late to prevent or lessen the collision or whether they never occurred at all, but the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision.”
We can accept that by the time the Uber AV "saw" the victim, it was too late to brake. But that doesn't mean we can't ask questions about what the AI actually saw and classified. And even if it was a "sudden" event for humans, was it "sudden" relative to computer reaction time? How fast did the victim move into the road that the Uber AV couldn't even brake?
Yeah, but my comment was made back when there were 2 stories about the Uber accident. This one, and the original one. The Bloomberg story has little more information than what the police chief claimed to the Chronicle, and at that point, the Tempe PD was downplaying the importance of her determination.
But everything the police claimed was in the context of witness interviews and camera footage. Since this is a self-driving car, we know there is more data and more ways to assess performance besides camera vision.
It could be that they never saw someone pushing a bike across a road in tests before; maybe it confused that with someone riding on the road at a different angle.
But why is accurate classification needed, when the overriding decision is about how the car should react now that an unidentified, moving object is moving directly into the car's path? Even if someone were riding at an angle, it's still in a direction that may intersect with the car's path.
If the autonomous vehicle performed just as well as the human did (meaning a human would've hit her also), then yes although it can be improved, I don't see how it is a problem.
It seems pretty common after a newsworthy crash for there to be a lot of uninformed speculation.
Investigations take time. Just like with airplane crashes, it's probably best to ignore everything about this incident until we see the results of the investigation.
Especially when there is video of it. Please tell me there is dashcam footage, since that's the first thing that should be available as evidence with a self-driving car.
I wonder if they have a camera pointed at the safety driver? I'm curious if he is telling the truth, or was just distracted and could have theoretically braked for the woman.
Yes, the report you'd hope to be hearing from Uber is along the lines of "0.2 seconds after the pedestrian entered the travel lane the automated system identified her as a hazard and initiated an avoidance maneuver. Despite braking and beginning to swerve away, the car impacted the pedestrian 1.2 seconds later." That's the promise the automated vehicle people are selling. But in this case it seems the car was clueless.
If the woman suddenly walked in front of the car at the last minute, I don't see how the situation could have been avoided, no matter how fast a computer is able to process the information from the sensors.
There was also a driver behind the wheel, so it would suggest that humans are just as clueless.
”If the woman suddenly walked in front of the car at the last minute, I don't see how the situation could have been avoided”
Likely, but not necessarily. Depending (hugely!) on what actually happened, it might be that the car should have started slowing down before the woman changed course.
As an extreme example, if you’re driving at 50 mph and come up behind a kid cycling at the side of the road with a foot separation between the extrapolated trajectories of the bike and your car, should it be OK to continue your course at full speed, or should drivers take into account that kids are kids, and may move erratically?
Also (again purely hypothetical), if the car had already had several similar events on that road that resulted in near misses, should it have slowed down before it even entered the street, knowing the road segment to be particularly dangerous?
I think human drivers, even though they are horrible at attending to the road for extended periods, get into accidents relatively rarely because they know when they really need to pay attention.
Finally, I do not rule out that that “driver behind the wheel” reacted slower than would have happened if (s)he was actually driving the car.
Disclaimer: I’m a layman, and haven’t seen the video ⇒ Let’s wait and see what the NTSB will say about this.
There definitely exists a sort of pre-cognitive human quality to traffic.
Sometimes you just slow down on a subconscious hunch without any information on that particular situation. Or you might be very conscious not to slow down if you're driving a motorcycle in Asia in heavy traffic trying to make a turn with a huge truck barreling behind you.
There are massive differences in road cultures between countries, and it takes a while to adjust to local habits. Somebody who doesn't know these invisible rules is a very accident prone driver even if they can drive safely in another country.
1. I've noticed that adaptive cruise control doesn't account for vehicles entering my lane in front of me, even with their turn signal on. As a human driver, I see them entering and know to slow down (or at least take my foot off the gas), but the system only performs a hard brake AFTER the car has actually moved into my lane. Disclaimer: I don't know if autopilot systems suffer from the same limitation.
2. I was once waiting to make a protected left turn at a busy intersection. There were two protected left turn lanes, and I was in the farther left lane. Light turns green, I take my foot off the brake, and prepare to accelerate. The car on my right, in the other left turn lane, is also barely starting to move forward, when it suddenly rocks to a halt, indicating they applied the brake, hard. Without thinking, I also brake hard. Out of nowhere, a car appears speeding right through where I would have been if I hadn't stopped. Apparently, this car was entering the intersection on the cross street, going right to left. He wanted to beat the red light (he didn't), and also make his left turn, right down my street, in the opposite direction I was headed. I guess he didn't realize there were two left turn lanes. I never saw him. If I hadn't gone with the herd instinct and hit the brakes, I'm sure there would have been a serious impact. I wonder if autopilot would have caught that one, but it probably depends on camera placement.
A single self driving car probably wouldn't (might do the same thing you did at best).
It'll take actual inter-vehicle communication to extend the sensor net, much like humans presently do by proxy, through other cars' deviations from anticipated behavior (e.g., whether freeway traffic keeps going at proper speed around corners).
It's referred to in the motorcycle community as 'Spidey Sense' - the way your subconscious picks up dangerous situations before you consciously become aware of them, giving you a chance to slow down and become more vigilant.
This is a really great point. I wonder if at some point self driving cars will be taught with these regional anomalies in mind. And will they be activated/changed depending on where the car is shipped/activated (Asia vs. America)?
Apart from regional differences, there are a host of other, similar matters. Cruising through your city's bar district at 3am? I'm sure you'll drive more defensively and be prepared for erratic behaviour on the part of others.
However, one thing that I haven't seen discussed anywhere, but that strikes me as a hard problem: by having a non-human driver, you're literally taking out the human element of communication.
Everybody is taught about the importance of eye contact in driver's ed. What will we replace it with? I have no clue, and I doubt anyone else does. Yet it's a crucial technique we all use to negotiate traffic every day, regardless of our mode of transportation.
This is a hard problem, and it's less technical than cultural. It will be a bitch to solve.
Eye contact is another thing that doesn't apply in some Asian countries where by default everyone has dark tinted windows making it virtually impossible to see inside other vehicles.
> some Asian countries where by default everyone has dark tinted windows making it virtually impossible to see inside other vehicles.
Hmm. Perhaps. Personally, I've never noticed this.
Anyway, the concrete example doesn't matter.
I guess you know what I was trying to say: implicit communication between humans. Even without eye contact, there will be a host of minuscule actions (or inactions) that we use to convey intent. There will always be humans on roads, e.g. as pedestrians. Vehicles will be forced to interact with them, and this communication barrier makes it very difficult.
Don't think of highways, think of supermarket parking lots.
Well actually the parking lots are especially difficult due to lack of eye contact.
But I get your point.
People compensate by being very careful when parking and of course you can always roll down your window if the situation requires it.
A robotic vehicle simply couldn't manage here. There are way too many dynamic human exceptions. For example, it's common to drive against traffic on the wrong side of the road for short distances, since the intersections are so far apart.
That’s just human intuition, which is trained over time through experience. Which is also how machine learning works, so it’s not quite clear that humans have a sustainable advantage here.
Same here during my road-biking years (I switched to jogging and unicycling). Even though I disliked the risk, I enjoyed the wind, as it allowed me to ride faster.
The woman cannot go from being "not in path" to "in path" instantaneously unless you believe in teleportation. Supposing the woman got about 1 foot into the path of the car before being struck, the car would have had a minimum of 1 ft / <woman's speed in ft/s> seconds to react, assuming total blindness until she was "in path" (I can't think of a real-world scenario where that assumption would be strictly true). Supposing a speed of 12 ft/s (~8 mph), the car would have had a minimum of about 8/100ths of a second to react. Supposing 3 ft of visibility before entering the path (about the distance from the edge of the car to the lane line), the car would have had a minimum of ~1/4 second to react. And that assumes the woman was biking along at a good clip, which is unlikely given that she was crossing the road. So, in all likelihood, the car had well over 1/4 of a second to do something. That's at or above the typical reaction time of a human (~1/4 second), and I don't think it's an unreasonable expectation for an autonomous vehicle.
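The arithmetic above can be written out directly (the distances and speeds are illustrative assumptions, as in the comment itself):

```python
def reaction_window_s(visible_distance_ft, crossing_speed_ftps):
    """Minimum time the pedestrian was visible (or in-path) before
    impact, i.e. the window the system had to react."""
    return visible_distance_ft / crossing_speed_ftps

# 1 ft into the path at a brisk 12 ft/s (~8 mph): ~0.083 s
print(reaction_window_s(1, 12))
# 3 ft of visibility at the same speed: 0.25 s
print(reaction_window_s(3, 12))
# At a walking-with-a-bike pace (~4.4 ft/s, i.e. 3 mph), the window
# grows to roughly 0.68 s -- well above typical human reaction time,
# let alone a computer's.
print(reaction_window_s(3, 4.4))
```

The key sensitivity is the crossing speed: halving it doubles the window, which is why "walking a bike" vs. "riding a bike" matters so much here.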
I don't see how this is at all accurate without taking into account the speed of the car. If human reaction time is 1/4 of a second, how much time does that actually leave for the car to move as well? The car can't teleport either.
We know the car was going roughly 40mph, so that puts some constraints on the minimum response time that was available. Unless this woman literally catapulted in front of the car, there were at least 4 feet of lateral movement at walking pace worth of time to react. You do need to make assumptions about how fast she was moving, of course, and as has been noted elsewhere in the thread, you have to assume the car was in the left lane for this scenario to even be remotely plausible. Even in this sequence, the car should have been able to substantially decelerate, but, looking at the pictures, that doesn't seem to have happened.
If I was driving a car and saw the pedestrian ahead of time walking or trying to cross I would have slowed down and changed lanes away fro them or at least moved far right if safe. Same thing when you pass bicycles. Change lanes away if possible or at the very least give more buffer. You can never tell what someone may do so you should anticipate the unexpected.
"The vehicle was doing about 40 miles per hour on a street with a 45 m.p.h. speed limit when it struck Ms. Herzberg, who was walking her bicycle across the street, according to the Tempe police"
> "The vehicle was doing about 40 miles per hour on a street with a 45 m.p.h. speed limit when it struck Ms. Herzberg, who was walking her bicycle across the street, according to the Tempe police"
According to the article page that you are commenting, "The speed limit where the accident occurred is 35 mph, police spokeswoman Lily Duran said."
Even so, defensive driving is more than simply driving the speed limit. Also from this article, "... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision."
Might this be an error? If you travel that road on Street View, you see a sign for 30mph, then 35mph before the bridge over the water, then 45mph at the underpass. The sign is mounted above the road, on the underpass itself.
Of course, something might have changed since street view went through.
Are you sure? Per Street View, there's a 45 mph sign where the road goes under the underpass, prior to the site of the accident. Unless that's since changed, of course.
The human in the car said that the first sign they got of a potential issue was the sound of the collision itself. Unless they were paying no attention, that might indicate that this scenario was likely to be very difficult to avoid?
Humans are terrible at remembering specific events, particularly after something so traumatic as hitting and killing a pedestrian. And given that a woman was killed, driver inattention doesn’t seem at all unlikely.
Yep, after a friend had someone try to use her van to commit suicide I am paranoid and get ready to avoid every pedestrian. I don't know what I'd do if I lived somewhere with many pedestrians.
You should be paranoid about pedestrians it turns out, even if they aren’t suicidal. You are in a pedestrian-killing machine, after all. Being paranoid about this is no less reasonable than being sure to keep your chemicals and sharp knives out of the reach of toddlers if you have them both in your home.
This is an absolutely terrible idea. Don't do this. Your job as traffic is to be as predictable as possible. You're just going to cause more accidents by driving erratically.
An accident might have been unavoidable, but fatality is the big question. The risk of pedestrian fatality jumps by ~4x between 30mph and 40mph.[0] So the question is: did the car start braking as soon as possible?
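A rough kinematics sketch of why even partial braking matters (the 0.8 g hard-braking figure is an assumption for dry pavement, not data from this crash):

```python
def impact_speed_mph(initial_mph, braking_time_s, decel_g=0.8):
    """Speed at impact if the car brakes hard for braking_time_s
    before the collision (0.8 g ~ hard braking on dry pavement)."""
    v0_ftps = initial_mph * 5280 / 3600   # mph -> ft/s
    decel_ftps2 = decel_g * 32.2          # g -> ft/s^2
    v_ftps = max(0.0, v0_ftps - decel_ftps2 * braking_time_s)
    return v_ftps * 3600 / 5280           # ft/s -> mph

print(round(impact_speed_mph(40, 0.0), 1))  # 40.0: no braking at all
print(round(impact_speed_mph(40, 0.5), 1))  # ~31.2: half a second of hard braking
print(round(impact_speed_mph(40, 1.0), 1))  # ~22.4: a full second
```

Given the ~4x fatality jump between 30mph and 40mph, even half a second of hard braking before impact (40 → ~31 mph) could have dramatically changed the odds of survival, which is exactly why "did it brake at all?" is the right question.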
As an aside, that statistic is why I never speed in pedestrian zones.
Human drivers avoid hitting pedestrians in situations like this. There are lots of videos of human drivers successfully not hitting people who "suddenly" pop out "at the last minute". Self-driving cars should be able to match that. If they can't, they don't belong on public roads yet.
But, ideally, the machinery should be much, much faster at becoming less clueless than the human driver; as the parent said, ideally it should at least successfully identify the impossible situation, even if it fails to resolve it.
It should not have no awareness that it's in an impossible situation (because that implies it will have no awareness in similar, possible, situations).
The fact that in most cases the human has to do nothing means that it is unlikely that the human is anywhere close to being as vigilante as an actual human driver. The fact that the human is clueless isn't surprising at all. In fact this is the real danger with systems like Tesla's autopilot. You still need to be vigilante, but you are so bored that it is hard to be.
The safety humans here might as well be hood ornaments.
The word you're looking for is "vigilant". "Vigilante" means "a member of a self-appointed group of citizens who undertake law enforcement in their community without legal authority, typically because the legal agencies are thought to be inadequate".
I think the parent commenter to your comment was alluding to this kind of stuff, a lack of depth in fallback strategies and systems trained to focus on objects in motion over stationary ones.
I'm really skeptical about swerving as an evasive maneuver. It's complicated to get that right. But yes, if it didn't at least slam the brakes then something went badly wrong.
Maybe you can't prevent an accident, but you can at least reduce momentum and demonstrate that your system was working correctly.
You can (and should!) brake and swerve at the same time. Unless you are driving an oldtimer, the car is capable of doing that.
In my driving lessons I was told to swerve for humans. Pedestrians and cyclists are the least protected participants in traffic. If you swerve and hit another car, the resulting crash will cause a lot of damage, but injuries to humans will likely be much less severe, if they even happen.
Animals, especially small ones, are a different matter. By the same argument, it is better to risk hitting the animal than to risk swerving into other traffic.
Yes and no. The very action of braking will reduce available grip for the swerve. However, ABS and stability control mean you are at least quite unlikely to lose control if you try to do both at the same time. And of course, in this case, only a mild swerve would have been required, and adding braking would help scrub speed in the event that the obstacle moves in a manner that your swerve does not avoid.
There are so many variables here. If you are travelling at 30 km/h and swerve into the path of a car going the opposite direction at 80 km/h, that's a massive crash.
Why didn't the human hit the brakes? I don't understand how this is even news, to be honest. There was a human in the driver's seat and he failed to stop, just like the computer.
Probably the human wasn't paying close attention. That is the problem with level 3-4 automated vehicles: a human who is not in primary control of the vehicle has trouble paying attention and is unable to take over control quickly if it is needed.
That's why I think vehicle automation development should follow the path where the computer is not in primary control of the vehicle but can take over quickly when needed. Unlike humans, computers don't have trouble paying attention.
Ultimately, we have to design systems for the humans that actually exist, not some hypothetical superhuman. If it's not safe for actual humans to use then it's not safe full stop. It's not like this is some new issue; it's one of the main reasons why experts have been pessimistic about the prospects of self-driving cars and why existing car companies have been reluctant to deploy heavy automation in the field.
I think he's blaming the folks who put level 3-4 vehicles on the road presumably knowing full well that said human inattention would put people at serious risk of injury and death in the car's edge cases.
> Computers have better reaction times than humans. Forcibly braking is a simple maneuver.
The question isn't about the MHz speed of a computer or the theoretical reaction time but the lack of reaction by the computer in the actual real world. It doesn't matter whether this was a sensor failure, a software bug, or a slow reaction by a computer. The result is the same: a person died.
From the article:
> "... the lack of braking or swerving whatsoever is alarming and suggests that the system never anticipated the collision."
Swerving is a better way to avoid a collision, but braking would have at least reduced the severity.
Of course, this all presumes you can quickly analyze the situation and conclude that it is safe to swerve (which, in theory, the computer can do far faster than a human can).
The car can begin to brake right away, and must certainly be expected to have a faster reaction time than a human. Whether it started to brake seems like the most important question here.
Not in the field, but I am assuming that the AI/ML cannot handle seeing possible road hazards within the human peripheral vision area.
Such as a kid attempting to chase down a ball that is rolling towards the road (object vector path collision), especially with the ball and/or kid suddenly hidden from view by a parked car (real environment vs visual environment). Or a group playing basketball in a driveway. In both cases, slowing down is always the safer bet.
A 4000 lb SUV, traveling nearly 40mph, at night, on a 4 lane divided highway, hits a pedestrian walking a bike.
The kinetic energy mismatch is the real problem, and at the very least, these companies should be testing at only 20-25mph, with _much_ lighter vehicles.
We'll have to wait for the NTSB to check in, but I'd be surprised if Uber isn't shut down (at least in Tempe) for a good long while.
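The "kinetic energy mismatch" above is easy to make concrete: KE = 1/2 m v^2, so halving the mass and dropping from 40 to 25 mph cuts the energy by about 5x. The 2000 lb / 25 mph "light test vehicle" is a made-up comparison point for illustration, not anything Uber has proposed:

```python
def kinetic_energy_kj(mass_lb, speed_mph):
    """Kinetic energy in kilojoules, converting from imperial units."""
    m_kg = mass_lb * 0.4536      # lb -> kg
    v_ms = speed_mph * 0.44704   # mph -> m/s
    return 0.5 * m_kg * v_ms ** 2 / 1000.0

suv = kinetic_energy_kj(4000, 40)    # the scenario above, ~290 kJ
light = kinetic_energy_kj(2000, 25)  # hypothetical light test rig, ~57 kJ
```

The ratio is exactly (4000/2000) * (40/25)^2 = 5.12, since mass enters linearly and speed quadratically.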
Agreed. While I can understand it's much easier to strap sensors/equipment onto an already-available car, it makes more sense to me from a safety point of view to test with some lightweight shell of a vehicle to cause the least damage to other entities.
Oh, it's her fault: she was jaywalking and the robots now take precedence. No need to ticket jaywalkers! Uber's fleet will take care of these lawbreakers!
Ridiculous. After everything this company has done, it's still around, and now it's killing people!
You are right, a better analogy would be if a person tried to cross in the middle of a random track segment. The woman didn't cross at a crosswalk, so that part of the road was not designated for pedestrians at all.
Cars (at least autonomous ones) signal before changing lanes and slowing down.
I think that flashing lights is too annoying, but forcing cars that drive in the night to have lights sounds like a reasonable (and existent) rule.
The main difference is that a train, even under emergency braking, can't stop quickly, and it also can't swerve to avoid an obstacle. If you jump in front of a moving train, even if the train driver could see you before you jumped and could predict you would jump, you will get hit.
> "The driver said it was like a flash, the person walked out in front of them," Moir said, referring to the back-up driver who was behind the wheel but not operating the vehicle. "His first alert to the collision was the sound of the collision."
I cannot imagine the detailed logging the engineers might have to do in such a system. When I code I wonder sometimes if I am logging unnecessary events at info level.
This led to one more question: do driverless cars have (or will they have) a black box like that of an aeroplane?
What we really need is for all of the logs from a company like Uber or Google to get streamed to some third party.
All of the logs are encrypted (with different keys), and only when an issue occurs, the company who owns the data gives the key for that particular car at that particular hour for decryption. Basically a data-escrow.
This way we know the logs are not being tampered with, and we keep the data hidden from the third party.
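The per-car, per-hour granularity of that escrow idea can be sketched with standard key derivation. HMAC acts as a one-way PRF here, so handing the third party one derived key reveals nothing about any other car or hour. This is a simplified illustration, not a full design - a real system would layer HKDF and authenticated encryption of the logs on top:

```python
import hashlib
import hmac

def escrow_key(master_key: bytes, car_id: str, hour_utc: str) -> bytes:
    """Derive a per-car, per-hour log-encryption key from a master key.
    Disclosing one derived key does not expose the master key or any
    sibling keys (HMAC-SHA256 is one-way)."""
    context = f"{car_id}|{hour_utc}".encode()
    return hmac.new(master_key, context, hashlib.sha256).digest()
```

The derivation is deterministic, so the company can re-derive and hand over exactly the key for "that particular car at that particular hour" on demand.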
These are test vehicles. If you don't let the engineers have access to the results of their testing except in the event of an accident, then we will never have autonomous cars.
Of course, maybe thats what you're arguing for. But either way, I think it's a bad idea.
Test vehicles collect all the data. I think it was Cruise that mentioned they collect about 4 terabytes per day per vehicle. It likely isn't much different for Uber.
Yes; in fact, cars have had this technology since before there was flash memory or GPS or (good) accelerometers. Event data recorders (https://en.wikipedia.org/wiki/Event_data_recorder) were originally designed to record the instantaneous mechanical state of the car (e.g. whether seatbelts were buckled, to what degree the accelerator was depressed, etc.) during a crash, for insurance purposes.
It's interesting how high profile this post-crash analysis is - name another time you read so much commentary about the details that caused a car crash?
It seems to me that this is exposing a few gaps in how we think about driverless cars currently:
- A framework for how cars should be making "moral" decisions (the trolley problem [0])
- A defined process for post car crash investigations - akin to the process in air crashes
It will be interesting to see if these emerge soon (or if they are already emerging and I have missed it).
> name another time you read so much commentary about the details that caused a car crash?
Doesn't this happen every time a Tesla is involved, too? I think this might be so high-profile because it's the first automated vehicle incident (that I'm aware of) that involves a pedestrian.
I know that one of the Tesla incidents resulted in the driver's death, but that was clearly user error. IIRC Tesla was very quick to release the data on the incident as well, so it was pretty clear versus this situation, which is mostly people guessing.
I really don't see how the trolley problem is relevant to this situation. The car didn't decide that killing a person was better than some less-moral alternative.
I think that's why it's relevant. It's likely the system can't make such a decision because its developers wanted to avoid having to deal with that can of worms. If this is the case the car may have insufficient options available to it in this type of situation. With a working "trolley function" it's possible the system could have noticed the obstacle then done some "moral math" with the chances the obstacle is a pedestrian and the odds of survival for the car's occupant(s) then chosen to drive off the road and crash into a ditch. Without this feature it's likely the system's only recourse is to stop and ask for help, which it may not have time to do.
Self-driving cars are still a fantasy. I don't want the AI to be comparable to an average driver (who collectively get into 6 million accidents). I want the AI to meet/exceed the skills of the best driver. There is no way anyone would trust an "average" driver to pick up their kids, more than themselves.
> There is no way anyone would trust an "average" driver to pick up their kids, more than themselves.
I mean, most parents will—at some point—trust their teenage children to pick up their younger siblings. And teenagers are decidedly below-average drivers.
The average driver is pretty terrible though. Average is a pretty low bar to clear, and I think there's a decent argument to be made that waymo's disengagement numbers are already better than an average driver.
It's possible (though unlikely) for everyone who picks up kids on behalf of parents to be an above-average driver, since the people doing that are a small share of all drivers.
Did the car stop itself after the accident? Are autonomous cars programmed with a "we've just hit something, stop and pull over" mode? Which sensors on the car even know if it has hit something?
Clarification - the article you posted is about the Waymo technology, not the Uber technology. It's possible that Uber does not use those capabilities.
If this car hit a woman coming from so far on the other side of the street, how are they going to deal with suicides? A coworker was driving home once when a man jumped out in front of her van and she barely stopped in time. He started whomping on her driver's side window screaming "Why didn't you hit me?!?!" again and again. She quickly got away and tried to come up with an explanation for her kids and called 911.
The fact the woman was pushing a bike with tons of plastic bags late at night makes me think it wasn't. It sounds to me like a homeless person jaywalking in a dangerous fashion.
But I spent 5.7 years homeless and was often suicidal, in part because I was homeless and things seemed hopeless, in part because I have a history of being suicidal. I had no knife, no gun, no poison. My suicidal ideation typically involved ideas like hurling myself into some nearby body of water or "playing in traffic." I was often camped near a highway and just walking out into traffic on the highway was often the most immediately available means to try to die.
I was homeless with two adult relatives who were not ever suicidal themselves. Their support is part of why I never acted on such impulses. Most homeless people are on the street alone, which means there is no one to talk to if they are suicidal and no one to try to prevent a spontaneous impulse of "I have had enough and would like to just check out of life since the entire fucking world hates me and there is no means to solve my problems."
This is part of why I say we can't really know. Homeless people often jaywalk very dangerously and it is entirely possible that one reason they do so is because they figure ending up dead would be one path out of their awful situation and bonus points for it looking like a tragic accident since suicide is stigmatizing.
Good point, that's possible. I made the above assumption because it would be easier to commit suicide by putting the bike down first and then walking into traffic. But in the context of always engaging in risky/dangerous behaviour because you don't care about your life then this would make sense as a possible scenario.
If the reported speed of 38 mph is correct it does not appear to have been speeding, unless most of the reports have the location wrong, or the speed limit has changed in the last 9 months.
The nearest sign on the road it was on, for the direction it was traveling according to most reports, that it would have passed says that the speed limit is 45 mph [1]. That image is from July 2017.
Serious question: how are speed limits enforced (or not) in the USA? From what I read on the internet, it seems like they are 90% unenforced, 10% arbitrarily enforced by cops who choose who they want to pull over from the masses who are routinely speeding.
If they are mostly unenforced, how does the system work? Does everyone implicitly add 20% to the posted limit, or do they just go whatever speed feels right to them given the road and the conditions?
(In my part of Australia, speeding is fairly common, but speed limits are not a joke. You can expect most people to stick within ~5km/h of the limit, and if you choose not to then there's a real chance you'll get done by a hidden camera. The idea of everyone ignoring the limits sounds terrible: of course they are a bit arbitrary, but that's unavoidable for any unambiguous law, and the alternatives of arbitrary enforcement or no enforcement seem much worse.)
> If they are mostly unenforced, how does the system work? Does everyone implicitly add 20% to the posted limit, or do they just go whatever speed feels right to them given the road and the conditions?
Usually people go limit + 10mph on major freeways, and "whatever feels safe" in city driving.
Part of the issue is that within large parts of the United States, speed limits are set at the 85th percentile of the speed people drive [1]. In my experience, they are often only enforced if the police suspect you of something else, or if you are driving extremely aggressively (cutting people off, not keeping a safe stopping distance, etc.), because speeding is a far more cut-and-dry offense to prosecute; "reckless driving" may be easier to argue away in court due to the level of judgement required.
That actually really sucks to hear. I was hoping that SDVs would adhere to the speed limit and never break it. At the forces we're talking about, 3 MPH can actually make a difference in life or death.
Waymo discovered sections of Mountain View where "the flow of traffic" was routinely 5-10 mph over the speed limit, and driving at the speed limit was actually more dangerous because it caused human drivers to try and go around the vehicle.
Shouldn’t self driving cars be designed to be as safe as possible in the world they’re actually going to drive in, rather than one where everyone else adjusts their behavior?
Sure. 3 MPH would have made a difference. So would 3 MPH less after that, and 3MPH more... heck, if it had been going 2 MPH total she'd probably be fine! In the aftermath of a front collision it would always have been safer if the car had been going slightly slower, regardless of the speed limit.
The strangest part of watching this story unfold has been discovering that some people have faith that speed limits are anything except a rough guess as to what's mostly safe on a given road. Oh, and subtract 10% from that guess to compensate for the fact that human drivers under 60 years old almost all speed by 4-9 MPH.
According to the UK DOT the estimated pedestrian accident survival rate is 55% at 30mph and just 5% at 40mph (due to kinetic energy rather than reaction times) so 3mph in this range is not exactly an insignificant difference. The strangest thing about watching this story is that people who are apparently ignorant of this still feel sufficiently certain of the relative unimportance of speed differences in this range to sneer at other people for commenting on it.
Of course, as others have pointed out, human tendency to ignore speed limits and react badly to vehicles driving slower than ambient traffic speed creates another potential hazard to trade increased accident fatality rates off against when deciding if and when self driving vehicles can speed. It's trolley problems all the way down.
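To put the two quoted data points side by side: interpolating linearly between them puts a 38 mph impact at roughly 15% survival. The real speed-fatality curve is nonlinear, so this is strictly a rough illustration of why a few mph matters in this range:

```python
def survival_rate(speed_mph, lo=(30, 0.55), hi=(40, 0.05)):
    """Straight-line interpolation between the two UK DOT figures
    quoted above (55% survival at 30 mph, 5% at 40 mph). The true
    curve is nonlinear; this is an illustration only."""
    s0, r0 = lo
    s1, r1 = hi
    s = min(max(speed_mph, s0), s1)   # clamp to the quoted range
    return r0 + (r1 - r0) * (s - s0) / (s1 - s0)

survival_rate(38)  # ~0.15 on this straight-line approximation
```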
I agree fully with your first paragraph and I chuckled at the "trolley problems all the way down" depiction.
However I can't imagine a car company accepting the legal liability of allowing their autonomous cars to go over posted speed limits to "match traffic".
In a trolley problem you have the option to refuse a decision. By participating in "speed matching" you become one of the trolleys and you have accepted the unsafe conditions.
Oh, I'm pretty sure that self driving software isn't being programmed to systematically disregard speed limits to match traffic, not even by a company with Uber's attitude towards regulations, but it is an example of conditions where rigid adherence to speed limits might increase rather than reduce risk.
With enough self-driving cars on the road (which of course there aren't now), rigid adherence to speed limits might change the safest-and-easiest driving speed for everyone and lead to everyone driving at the limit rather than 5mph (or 10mph or whatever) above the limit.
So the safest reasonable thing for self-driving cars to do could depend on how many of them there are around.
(Of course there are other considerations of that sort. E.g., if all cars on the roads were self-driving then they could coordinate with one another in interesting ways and maybe go substantially faster than human-driven cars for a given level of safety. Maybe not, though, because of risks to pedestrians.)
Hmm, turns out Google cars are programmed to go over speed limits in certain situations. Quote from a Reuters article:
Google’s driverless car is programmed to stay within the speed limit, mostly. Research shows that sticking to the speed limit when other cars are going much faster actually can be dangerous, Dolgov says, so its autonomous car can go up to 10 mph (16 kph) above the speed limit when traffic conditions warrant.
> According to the UK DOT the estimated pedestrian accident survival rate is 55% at 30mph and just 5% at 40mph (due to kinetic energy rather than reaction times) so 3mph in this range is not exactly an insignificant difference.
You'd have that range whether people went at the speed limit, above the speed limit, or below the speed limit. As long as speed limits are made with actual speeds taken into account (they are), that argument is irrelevant to whether you should speed. It just means that if you really want to be safe for anyone you might hit, you shouldn't go above 25mph, no matter what the speed limit is.
> The strangest thing about watching this story is that people who are apparently ignorant of this still feel sufficiently certain of the relative unimportance of speed differences in this range to sneer at other people for commenting on it.
The speed limit here was at least 40, wasn't it? Your own numbers say that the differences are unimportant above 40.
The police spokesperson is quoted saying the vehicle was doing 38 in a 35mph zone, which is pretty much where the curve suggests an extra little bit of speed is most lethal. (Other reports have suggested other speed limits)
I see this attitude a lot, but it seems strange to me. Of course any hard line will be somewhat arbitrary, but we need to draw hard lines if we want unambiguous, consistently enforceable laws. What's special about speed limits that makes it okay to ignore them?
To me they seem analogous to age-of-consent laws, which of course are not perfectly chosen for every individual -- and of course there's no magical change that occurs on a person's birthday -- but the law reflects society's best judgment of the balance between (protecting the vulnerable and allowing people to make their own choices :: safety and efficiency).
I somewhat agree, I keep fighting this on HN - but speed limits are for two categories of people - the people driving, and the people expecting those drivers to be going a specific speed (or range). And 40 - 50 is not a range that is ever safe to put near people.
I grew up near Almaden Expy (in the Bay Area), where the signs post 45 (which means that in the Bay Area the limit is really 55). Just search for "Almaden Expy killed" in Google and let me know if that's a safe street.
It's analogous to that, except that speed limits very reliably take the reasonable-balance number and then drop it several mph. It's a feedback loop, where people drive faster and limits are set with driving faster in mind.
I believe it's not clear whether or not it was speeding. From the article, it sounds like it may have been in the middle of transitioning between a 35 MPH and a 40 MPH zone.
Going at or below the speed limit would cause autonomous vehicles to be a hindrance to traffic as real humans know that it's safe to exceed the limit by 5-10 mph. If you hard code the vehicles to arbitrarily go exactly the posted speed then adoption will likely be very difficult and it would actually make the roads a more dangerous place to be as normal traffic would need to veer around the car.
That doesn't change the fact that posted speed limits are almost always slower than is reasonable. 38 mph is a reasonable speed on all but the windiest of roads.
I cannot understand how this could be the woman's fault. Is it because of the rules on jaywalking in the USA? Where I am from it is always considered the driver's fault, even if a pedestrian literally throws themselves in front of the car, simply because the car can kill a pedestrian and not the other way around. The way I see it, the Uber vehicle should have slowed down to a speed at which it would have been able to react to these kinds of abrupt motions. This should be cracked down on hard - not by making self-driving cars illegal, but the fines should be high. I don't like the thought of a self-driving car company becoming "too big to fail" and getting excused for killing pedestrians who don't understand how self-driving cars work.
Was the car electric? I've had a close call with a silent electric car which would have been my fault. The scenario goes like this: I'm saying goodbye to Joe and about to cross the street and notice there is no traffic around. I realize as I'm about to leave Joe, I forgot to mention something, so I turn back to him and say "By the way, blah blah", and with that, I then step off the curb onto the street without checking for oncoming traffic again, probably thinking about some chore I have to get done when I get home. If I heard cars, I would instinctually check again, but the silent electric mode does not allow for this.
I'd be interested in any studies about this: how much more often do pedestrian-at-fault collisions occur in silent electric mode?
Sorry for the dumb question, but is there already any conclusion on how driverless cars will be handled for accidents, collisions, fatalities, etc?
In the short term it sounds like poor Vasquez is boned as the driver, considering the media is profiling his previous conviction. There's no profiling a "driverless car," though, however inherently risky it may be.
I'm all for the future and driverless cars I've just always thought this was an idiotic idea that will turn people into Idiocracy characters.
This woman was killed by a very expensive robotic car with a human being that could have prevented it, but wasn't permitted to, because the future of driverless cars is being tested.
I don't think that respect for the victim and her family is an excuse to not release video footage. It's a valid reason to hesitate releasing actual video footage of a such a terrible event.
Sensor data is different. If Uber wants to help the story that self-driving cars are safer, then they should release sensor data to help substantiate any claims they may make in the future about what the car did and did not do.
Great that all these articles are coming out defending Uber. Even if it was her own fault (it wasn't), these make it sound like "well, she was stupid and therefore deserved it".
At the risk of sounding macabre, the engineers will be poring over thousands of lines of logs, and if Uber's self-driving cars ever do survive to make it onto the roads again, the circumstances of this woman's death and the after-analysis will mean it never happens again. I expect that during the coming years we will analyse car crashes with the same scrutiny as airplane crashes, with the hope of solving every single edge case. RIP.
That's a very optimistic view of what will happen -- or I suppose a pessimistic one. I agree that there will be a lot of scrutiny of the logs, but unless they identify an actual bug ("Oh shit, we had a buffer overrun right here!") it's not necessarily going to be possible to preclude this or something very similar from happening again.
The driving of an autonomous vehicle is an emergent event of large numbers of complicated and in some cases opaque subsystems. Changing the interactions of these subsystems in a narrow way to preclude this particular kind of event is not necessarily something they'll be able to figure out a way to do without causing more problems.
But it's still optimistic compared to today's status quo. We now have a system we can optimize and build on to make our best effort to prevent future events.
Which is how it's similar to airplane crashes: it's a heavily automated and centralized system (i.e., humans are far less of a factor in the event than with cars, which can't be optimized nearly as easily).
While cars have had improvements in safety via vehicle design, and popular culture, road signs, traffic laws, etc. can influence humans to drive differently, I don't think there has ever been a better time to reduce these events - if not prevent them every time, then at least make them far less frequent than the current standard.
Which is what is so often missing in these conversations: a rational baseline and a clear reliable process that we will now be in a better position going forward for this not to happen again (assuming the right processes are in place, which they may very well not be in this case).
>> Uber Victim Stepped Suddenly in Front of Self-Driving Car
First line of the first paragraph in the article (my underlining):
>> Police say a video from the Uber self-driving car that struck and killed a woman on Sunday shows her moving in front of it suddenly
And the title on this thread:
>> Video Shows Woman Stepped Suddenly in Front of Self-Driving Uber
Both titles are very misleading. The Bloomberg title because it doesn't clarify it's repeating a police statement, the HN title because it makes it look like the relevant video is in the article, or at least that someone from Bloomberg has seen it, neither of which is the case.
The article credits three journalists in the byline, with four other individuals credited as contributors.
Given that this was published a day after the SF Chronicle piece, I think there's a good chance the article has deeper sourcing or corroboration, and that the headline is deliberate.
I was confused when I clicked on the article and could not find a video. The title is misleading, and while it would be nice if we always had the time to carefully read every article we come across, the reality is that I (and probably many others here) will skim through multiple titles and only click on the most interesting ones. Something as definitive as "Video Shows Woman Stepped Suddenly..." can easily leave a false impression in people's minds.
Maybe you are unemployed or something and have tons of free time, but the rest of us are too busy to read every single article that arrives at the front page. That is why headlines should be accurate summaries or descriptions of the submissions.
I like that idea! Gonna buy a bunch of those, amplify their signal and spread them throughout the city I live in. That will ensure that there's not going to be a single crappy autonomous car being beta-tested in my vicinity!
Another idea would be to just throw them right in front of an autonomous vehicle at full speed. Let's test those brakes!
(sorry, but the original idea is just so ridiculous that I am incapable of responding in a non-sarcastic way to this...)
I imagine phones will serve that role eventually. GPS chip accuracy is improving constantly and could be accurate to within a foot in some phones in the next year [1]. Google Maps / Waze already do something similar with predicting traffic times based on phone data (and other sources). GPS accuracy currently may make an individual's location slightly harder, but path prediction may help. (And, of course there are privacy concerns to consider.)
So if you're homeless and can't afford a phone or if you forgot yours at home or if you got pickpocketed... you can expect to get hit by a self driving car?
That sounds like something that will immediately be abused for tracking people, and an over-reliance on it may make it more dangerous for people not carrying one (don't want to, can't afford, signal blocked by item/clothing, simply forget etc.).
Roads are generally marked, and often have elements embedded in the surface (e.g. [1]). As self-driving cars become more prevalent, I don't see why you wouldn't adjust new/replaced markings to be something more readable by those vehicles. It obviously can't be the only signal used (because it's infeasible to change everything immediately, not all roads are marked, and wear/damage may make them unusable) but it may help improve reliability/accuracy.
> This is about as feasible as putting smart chips in all roads so that cars know where the lanes are.
Lanes are marked, usually. So why not embed something like a UV/IR-only color pigment in the lane markings? Invisible, so it doesn't affect human drivers, but easily picked up by cameras.
> The driver, Rafael Vasquez, 44, served time in prison for armed robbery and other charges in the early 2000s, according to Arizona prison and Maricopa County Superior Court records. Uber declined to comment on Vasquez’s criminal record.
I didn't realize that people with criminal records like this could become Uber drivers. That is, these types of crimes are relevant to working as an Uber driver—more than things like tax fraud or failure to pay child support.
> I didn't realize that people with criminal records like this could become Uber drivers.
In a civilized society people should be able to get jobs when they get out of prison. It's called "time served" for a reason. Everything else just paves the path to recidivism - when people can't make a living because no one will hire ex-cons, they have no other way of literally staying alive than living as hobos or breaking the law.
The attitude in your comment, imho, perfectly represents why the US has such a problem with career criminals.
The only case where safeguards should remain even after time served is when there's a psychological condition that actually makes someone dangerous, like pedophilia.
Note that I was merely showing surprise that this was allowed, not passing judgment. I had heard of cases (see above link) where people were denied because of prior misdemeanor convictions, which is why I was surprised.
I agree they should be able to get jobs, but not any job. I don't believe those convicted of murder, kidnapping, and other seriously violent crimes should be allowed to drive potentially vulnerable passengers.
Depends on the country and the type of offense(s). I'm fine with justified access for employers - e.g. a trucking company might be allowed to ask the state "has XYZ committed a drunk/drugged-driving or other driving offense in the last 3 years" - but I see no valid reason to (involuntarily) disclose to a new employer that someone has been sentenced for public urination or fare dodging. AFAIK, no such "filtering" option is available anywhere... and especially with fare dodging, people get essentially criminalized for being poor.
The state should provide an interface for employers to input the personal data of the candidate and the job the candidate is applying for, and the interface should simply give a "OK", "not OK", "no way OK" (e.g. for pedophiles applying for childcare jobs) and "OK with conditions". That would be a safeguard for privacy while also providing employers with a check that they don't hire someone totally wrong for the job.
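As a rough illustration of the idea, here is a minimal sketch of such a state-side interface in Python. Everything in it is hypothetical: the offense categories, the job categories, the 3-year lookback window, and the rule tables are invented for the example, not taken from any real statute. The key property is that the employer only ever sees the verdict, never the underlying record.

```python
from enum import Enum

class Verdict(Enum):
    OK = "OK"
    OK_WITH_CONDITIONS = "OK with conditions"
    NOT_OK = "not OK"
    NO_WAY_OK = "no way OK"

# Hypothetical rule tables; in reality these would come from statute.
PERMANENT_BARS = {
    "sexual_offense_against_minor": {"childcare", "education"},
}
RELEVANT_OFFENSES = {
    "drunk_driving": {"passenger_transport", "trucking"},
    "violent_felony": {"passenger_transport"},
}
LOOKBACK_YEARS = 3  # assumed window for "recent and relevant"

def screen(offenses, job_category):
    """offenses: list of (category, years_ago) tuples.
    Returns only a verdict, never the record itself, so irrelevant
    convictions (e.g. fare dodging) are never disclosed."""
    # Permanent bars override everything, regardless of age.
    for category, _years_ago in offenses:
        if job_category in PERMANENT_BARS.get(category, set()):
            return Verdict.NO_WAY_OK
    # Recent, job-relevant offenses block the hire outright.
    if any(years_ago <= LOOKBACK_YEARS
           and job_category in RELEVANT_OFFENSES.get(category, set())
           for category, years_ago in offenses):
        return Verdict.NOT_OK
    # Older but relevant offenses allow a conditional hire.
    if any(job_category in RELEVANT_OFFENSES.get(category, set())
           for category, _ in offenses):
        return Verdict.OK_WITH_CONDITIONS
    return Verdict.OK
```

So a drunk-driving conviction from last year blocks a passenger-transport job, the same conviction from five years ago yields "OK with conditions", and an old public-urination record never surfaces at all.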
> The state should provide an interface for employers to input the personal data of the candidate and the job the candidate is applying for, and the interface should simply give a "OK", "not OK", "no way OK" (e.g. for pedophiles applying for childcare jobs) and "OK with conditions".
This sounds vaguely similar to the approach taken in the UK. For most jobs, you can't refuse to employ someone because of spent convictions (and the applicant isn't required to tell the employer of them), and sentences up to 4 years are considered spent after a certain amount of time[1]. The employer cannot request a criminal records check, and if an individual requests their own record it will only show details of unspent convictions.
For certain roles (e.g. healthcare or childcare), an employer can request a criminal records search which WILL show spent convictions, and, as you suggest, certain offenses will permanently make someone unsuitable for those jobs. There are three levels:
* a standard check shows spent and unspent convictions, cautions, reprimands and final warnings
* an enhanced check shows the same as a standard check plus any information held by local police that’s considered relevant to the role
* an enhanced check with barred lists shows the same as an enhanced check plus whether the applicant is on the list of people barred from doing the role[2]
Which of these an employer is allowed to do depends on the job in question (e.g. becoming a solicitor -> standard check, working in an elderly care home as a cleaner -> enhanced check without the adult's barred list check, working in a school -> enhanced check with the children's barred list check).
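The tiered disclosure described above can be sketched as a simple Python model. This is an illustration of the structure, not the real DBS system: the record fields and the "basic" check (what an individual requesting their own record would see, per the comment above) are simplified, and cautions/reprimands/warnings are lumped in with convictions for brevity.

```python
from dataclasses import dataclass

@dataclass
class Record:
    unspent_convictions: list   # convictions, cautions, etc. not yet spent
    spent_convictions: list     # spent under the rehabilitation rules
    police_intelligence: list   # locally held info deemed relevant to the role
    barred_lists: set           # e.g. {"children", "adults"}

def basic_check(r):
    """What an individual's own record request shows: unspent only."""
    return {"convictions": list(r.unspent_convictions)}

def standard_check(r):
    """Spent and unspent convictions."""
    return {"convictions": r.unspent_convictions + r.spent_convictions}

def enhanced_check(r):
    """Standard check plus relevant local police information."""
    out = standard_check(r)
    out["police_intelligence"] = list(r.police_intelligence)
    return out

def enhanced_check_with_barred_list(r, barred_list):
    """Enhanced check plus whether the applicant is on the named barred list."""
    out = enhanced_check(r)
    out["barred"] = barred_list in r.barred_lists
    return out
```

Each level strictly extends the one below it, which matches the thread's point: the school gets the barred-list answer, the solicitors' regulator only sees convictions, and an ordinary employer sees nothing spent at all.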
"The PUC said the drivers should have been disqualified. They had issues ranging from felony convictions to driving under the influence and reckless driving. In some cases, drivers were working with revoked, suspended or canceled licenses, the state said. A similar investigation of smaller competitor Lyft found no violations."
In Arizona a few years back they made drug testing mandatory for taxi/livery/etc. drivers, so I looked up the law after the owner of the company told us: "look, the law says I have to drug test you guys, but it doesn't say you have to pass, so just bring it in and I won't even look at the results."
Nothing disqualifies you as long as they have documentation of a background check and drug test -- an axe-murdering crackhead with a penchant for buggering small animals is perfectly legal.
SUVs are an easy one. When an SUV hits a bicyclist, motorcyclist, or pedestrian, the unfortunate person tends to go under the vehicle rather than over the hood, as they would with a sedan. Motorcycle accident stats bear this out - the overall accident rate has gone down as people have started riding more safely, but the fatality rate has gone up as SUVs have taken over market share.
The roads issue is a bit more complicated, but basically roads need to either be slow enough to safely share space (25 MPH or below), separated out so that only cars can use them (freeways), or kill an alarming number of pedestrians and bicyclists. Four lanes and a 35 MPH speed limit with infrequent crossings is pretty much going to have a body count.
I'm not trying to excuse Uber here, just trying to maybe convince urban planners to stop building things that convince people to try to cross four lanes of traffic going 35 MPH.