UK to allow tests of driverless cars on public roads to start in January.
In the same way testing has already begun in the rest of the world. The same legal/insurance issues stand (as elsewhere) and are likely the bigger barrier to adoption.
Since there are a total of zero commercially available driverless cars, was that distinction really required?
Nowhere in the world has permitted non-test driverless cars because non-test driverless cars aren't a thing yet.
The whole "rest of the world" remark seems like smug condescension, which is a little odd as only a small handful of places even allow test cars to be run on public roads. Plus, isn't this a welcome addition? The UK's roads are quite a different test bed to many West Coast US cities: there is no grid pattern, and roads are smaller and curvy.
Well, as this is a news site, the title could be interpreted as news, i.e. something new and interesting is happening. It would be perfectly possible (if slightly, or very, unfeasible) for a jurisdiction to draft laws regarding the legal use of new driving technologies even before they become commercially available (arguably they could never be commercially available until they were regulated/legal).
Apologies if there was any condescension; I was only trying to convey that it's not that different to what's been going on elsewhere. I'm from the UK myself.
I appreciated the clarification since I don't closely follow the situation, and the headline suggested to me that UK was somehow ahead of everyone else here.
Are you saying that the UK has a grid pattern and wider streets? I had always heard that the UK had narrow streets in many places, but that's just from watching Top Gear.
That's what I assumed, but I did a quick Google Maps search and it appears to be mostly a grid at first glance. The wording is unclear: "The UK's roads are quite a different test bed to many West coast US cities where there is no grid pattern, roads are smaller, and curvey."
Where were you looking at, Milton Keynes? Peterborough? I'm struggling to think of anywhere gridlike in the UK, though I'm sure some of the '60s towns have them.
Glasgow is gridlike; I was there for the Games and it felt like a very un-British street layout. But I don't think anywhere in the UK has grid numbering ("42nd street" etc). We don't do the continental thing of naming streets after significant dates either.
The UK certainly has plenty of surprising road layouts, often involving roundabouts, one-way systems, medieval street plans, and roadworks.
I was gonna say that Glasgow city centre is pretty atypical - resembling US cities close enough that a few parts of World War Z (which were meant to be Philadelphia I think) were filmed there. But then I realised that Edinburgh's "New Town" (actually hundreds of years old) is a grid-layout too.
New Town in Edinburgh and the modern city centre of Glasgow both date back to the mid-18th century. They're very unusual for the time insofar as they had a large plan for a large area.
Around Liverpool and Birkenhead were the only places I looked; they're not perfect grids, of course. I hardly know anything about European street layouts, though, so that might be considered very gridlike compared to other places in the area.
Salisbury had a grid pattern back in the 13th century. But mostly small roads follow old field boundaries that can be very old and bendy. Even new developments will fit into old field layouts and have curved roads.
> But mostly small roads follow old field boundaries that can be very old and bendy.
The places I know of - Lincolnshire and East Lothian - where roads follow field boundaries have a lot of straight roads with 90deg bends. Interesting you suggest that field boundary following would make roads more "bendy" (suggesting non-straight edges and non right-angles).
Perhaps the fields of Salisbury weren't dissected for inheritance purposes or are older and follow more natural lines?
It depends how old the field is. Around villages you can see very irregular fields that are very old. But a lot of the larger fields have straighter boundaries as a result of intentional enclosure. In low-lying parts a lot of the land is drained, and smaller drainage channels will follow straight lines.
Two wide cars can't pass on those roads; one of you has to reverse back to a spot without the hedge. I'd like to see how that kind of negotiation is handled by self-driving cars.
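That single-track negotiation could, in principle, be resolved deterministically. Here's a toy sketch (not any real protocol, and all names are made up for illustration): the car with the shorter reverse to a passing place backs up, with ties broken by a vehicle-ID comparison so both cars independently reach the same answer with no central referee.

```python
# Toy sketch of single-track road negotiation between two
# self-driving cars. Each car is (vehicle_id, distance back to the
# nearest passing place in metres). The car with the shorter reverse
# yields; on a tie, the lower ID yields, so both sides agree.

def who_reverses(car_a, car_b):
    """Return the vehicle_id of the car that should reverse."""
    id_a, dist_a = car_a
    id_b, dist_b = car_b
    if dist_a != dist_b:
        return id_a if dist_a < dist_b else id_b
    # Equal distances: deterministic tie-break on ID.
    return min(id_a, id_b)

print(who_reverses(("CAR-17", 40.0), ("CAR-03", 12.5)))  # CAR-03 reverses
```

The key design point is that both cars compute the same outcome from the same shared information, so no back-and-forth "after you / no, after you" exchange is needed.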
Take a look at London - basically the kind of road layout you'd get if you threw a bunch of toothpicks on the floor and used those as the guidelines for random branching.
Not counting driving on the wrong side of the roads. I wonder if google cars' AI only needs a "Drive on the wrong side of the road in this country" command, or if some more complex programming is needed.
If you look at Google Maps (at least on the desktop), there are usually directional indicators on one-way roads, and for any directions in the UK you'll notice that the blue line goes clockwise around roundabouts instead of counter-clockwise like our friends in the rest of the world.
Perhaps it's less to do with the grid, and more the fact that your road systems were designed for the automotive era. In old residential areas driving can be very challenging after everybody's returned home, as streets barely wide enough for two cars to comfortably pass have cars parked up both sides.
This obviously presents challenges such as driver etiquette (the car needs to play nicely with others but still get you to your destination) and safety (children can lurk unseen between the cars and leap out).
Perhaps there are parts of America that are like this, but Hollywood hasn't shown many of them to me.
As tlrobinson says, I was trying to clarify the content of the article, as the title could easily be misinterpreted to mean that there were no restrictions.
In addition, I suspect that the cars will be 'self-driving' rather than 'driverless' (the distinction being that the former has a human behind the wheel who can take control at a moment's notice).
As for the rest of the world, from the article:
The US States of California, Nevada and Florida have all approved tests of the vehicles. In California alone, Google's driverless car has done more than 300,000 miles on the open road.
In 2013, Nissan carried out Japan's first public road test of an autonomous vehicle on a highway.
And in Europe, the Swedish city of Gothenburg has given Volvo permission to test 100 driverless cars - although that trial is not scheduled to occur until 2017.
The case of 100 Volvo cars on the streets in 2017 is a bit different from just allowing controlled tests on public roads: those are 100 cars given to customers for everyday use, not to engineers.
Audi is currently (as in this week) testing self-driving cars on an elevated expressway in Tampa that has been shut down for the purpose.
I don't understand the distinction. If the UK is allowing tests of driverless cars, it must be allowing them on the roads, and of course "in January" means "starts in January" - what else would it mean?
Driverless cars are a risky unproven technology, and done wrong could easily kill people.
It'd be like taking "FDA approves new drug for human testing" and making the headline "FDA to allow human use of new untested drug."
Both are technically true, but the latter makes it sound like a widespread thing being done regardless of poorly understood risks. And that's misleading, since it's being done in very limited scope to help us understand and reduce those risks.
While the title is technically true, it could easily be interpreted to mean anyone could buy or build a driverless car and drive it on UK roads starting in January.
Whenever I see a new place or country allowing tests of these cars, this Wired article always comes to mind [1].
I think the two main questions will be liability & drivability -
1) when (not if) these cars get into a serious crash,
who is to assume liability? Is it Google who created the algorithm? Is it the Audi integrator who fused Google technology into the Audi? Is it the fault of the mapping software that did not update the fact that the signals had been moved to a different position on that street?
2) More mundane - will a driverless car be able to drive every single place that a human-driven car would? When a flash flood closes down the freeway, will this autonomous beast be able to drive on the back road that is normally closed to traffic?
Having said that, I cannot wait for cars-as-a-service, where the cars park themselves and disappear when I don't need them and magically reappear when I do (without humans - Lyft, Uber et al. need not apply).
Every time this comes up, I hear the same concerns voiced. It's crazy to me that people get hung up on the liability issue.
1) Liability is assessed exactly as it is now. We use the same system to determine which car was at fault. If it's your car, you are at fault. Someone will insure a self-driving car, especially once the safety record is established. Your current provider might not, but someone will. If you were using the self-driving system improperly (e.g. in weather it can't yet handle well), you may also be subject to different rules or even criminal prosecution.
Vendors may also be willing to pay for that insurance as part of a monthly fee just to shut people up over this very easy-to-solve "concern."
If no one is willing to insure your self-driving car, I'll start that business. I could charge a premium for a service that costs me less to provide!
2) People seem to think self-driving cars should be perfect before they're introduced. They won't be. Neither are we. It just has to be better (under given weather/traffic conditions) than we are to save lives. We'll have steering wheels in the cars for a long time which you'll have to use when the car can't safely drive itself.
The Wired article is just clickbait using the fear angle. We face the same decisions every day, and "unavoidable" accidents will become even rarer than they are now as this technology evolves.
While I agree that liability will be assessed the same way it is now, the answer isn't always:
> If it's your car, you are at fault.
There are examples of situations where a car producer/manufacturer was at fault for accidents caused by bad software. The question is where that line will be in the myriad of situations that will be more complex due to the introduction of automation.
The control system itself is then a part of the car, which will dramatically increase the likelihood that it is at-fault in the case of an accident. It presumably would have to be audit-able in the case of an accident to identify whether it made the 'correct' decisions or not, adding additional legal and technical complexity.
If privately owned, would there be regular sensor cleaning/calibration tasks that need to be met before the manufacturer is deemed liable? What about tire pressure?
There will be a lot of factors that could go into deciding whether the user or manufacturer was at fault in the case of an accident that simply wouldn't be a problem now, because the user of the vehicle is also the one responsible for maintenance.
> " The question is where that line will be in the myriad of situations that will be more complex due to the the introduction of automation."
It will be determined via litigation, as it is now. Technological complexity of subject matter has yet to present any serious roadblock, or cause any significant change, in the prosecution of the law.
Similarly, whether neglected maintenance (or third-party modifications/parts) contributed to a collision will be determined in court, just as it is now.
As a side note: given the service opportunities afforded by self-driving vehicles, I would be surprised if operators and insurers didn't subsidize or operate "while you sleep" maintenance service plans. e.g. Once a month or so, while you sleep, your car will drive itself to the shop to get checked out, ensure updates are received, recall services performed, etc.
Manufacturers would add a revenue stream and lower their legal costs/exposure and consumers would have yet-another-hassle of car ownership removed.
Technological complexity of subject matter has yet to present any serious roadblock, or cause any significant change, in the prosecution of the law.
Ridiculous. To cite only one counterexample, self-driving cars will make the controversy over the EU's "right to be forgotten" law look like a polite discussion over a beer.
That said, the problems are solvable and we'll all ultimately be better off for facing them.
> "self-driving cars will make the controversy over the EU's "right to be forgotten" law look like a polite discussion over a beer."
You misunderstand me. Of course self-driving cars will generate many and expensive legal fights. But those lawsuits will look much like any of our current lawsuits.
That the courts don't understand the technology will not stop jury trials from deciding liability, just as courts not understanding genetic evidence doesn't stop them from throwing people in jail for life based on misunderstandings.
That legislators don't understand the technology will not stop laws from being written any more than their not understanding criminal justice stops them from writing self-defeating "tough on crime" laws and "prison as punishment" regulations that only increase recidivism and multiply the social cost of crimes.
My point is that courts and legislators not having an understanding, let alone answers, is not a stumbling block to self-driving cars. It won't prevent self-driving cars from moving forward until and unless it's addressed.
They'll just blunder through it, making a mess, making mistakes, as they've done with everything else.
I expect it will be just like what happens when billion pound construction projects go wrong. There is already a specialised civil court that decides on these types of cases in the UK; the Technology and Construction Court.
Correct, that's what I said. The OP claimed that technical implementation problems were not considered a stumbling block for legislation, and I mentioned the "right to be forgotten" mess as a counterexample, where a fundamentally unfair, unworkable law has created a new human right out of thin air, one that's causing a lot of trouble for Google and other search engine providers.
Agreed. This is one area where existing market mechanisms already work pretty well and are likely to accommodate change pretty smoothly. From an insurer's point of view, autonomous cars are probably a pretty attractive risk - even where the technology does fail and they incur liability, it's probably much less than the liability they incur from insuring drivers who drive while impaired/ angry/ distracted/ late but whose actual liability can't be established.
I don't think (1) is all that clear. I'd say it depends on what happens when the liable driver sues the vendor. That said, even if the vendors are found to be liable, I don't think that will change anything much, except maybe slightly raise the price of cars.
Thanks for that great link. I think the answer to 1) is going to end up as "insurance companies".
In order to put an autonomous vehicle on the road, it's going to have to be insured. Insurance companies will have to vet any and all autonomous vehicles anyway, assume direct financial liability for all harms resulting from anything that could go wrong with such a vehicle, and charge whatever premiums they think are appropriate to cover those liabilities to the owner of each vehicle.
How they vet autonomous vehicles is up to them, and it's probably going to be tricky at first - maybe on a comparable order of trickiness to actually making an autonomous vehicle. But they're going to have to do it if only to get their premiums right. Maybe they'll require certain development practices, like using specific programming languages, or programming conventions within a language, or requiring third-party static analysis of all the code (e.g. Coverity), or mandatory reviews of all checkins by at least one developer other than the person who wrote it, or a certain level of automated test coverage, or "passing" a certain percentage of simulated situations.
Whatever the insurance companies do, they're going to have to get some measure of how dangerous these vehicles are.
The better a manufacturer does on the vetting process, the more open they are, the lower the premiums will be to any owners of their cars.
At that point, the insurance companies should be in a position to assume all direct financial liability resulting from an accident - as they are now (discounting your excess) - including negligence, as that's the only incentive we can put on them to ensure they've done their part right. The lawyers will of course ensure that if a vehicle as supplied is not up to the standard that the insurance company confirmed, either in terms of hardware or software, then the insurance company will have a way to take that up with the manufacturer.
Naturally, if you mod your autonomous vehicle in any ways not permitted by your cover, you may void your insurance. At that point the liability falls entirely on you, the owner. But again, that's just as things are now.
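To make the "get their premiums right" point concrete, here's a back-of-envelope sketch of how an insurer might turn a measured claim frequency (from simulation plus road trials) and an average claim severity into a premium. The function name, the loading factor, and all figures are illustrative assumptions, not real actuarial practice.

```python
# Back-of-envelope premium setting for an autonomous vehicle.
# claim_frequency: estimated claims per vehicle-year
# avg_claim_severity: average payout per claim
# loading: margin for expenses, capital, and profit (assumed 35%)

def annual_premium(claim_frequency, avg_claim_severity, loading=0.35):
    expected_loss = claim_frequency * avg_claim_severity
    return expected_loss * (1 + loading)

# e.g. 0.02 claims/vehicle-year at an average £15,000 per claim:
print(round(annual_premium(0.02, 15_000), 2))  # 405.0
```

The interesting consequence is the one the parent comment notes: the better a manufacturer does on vetting (lower measured claim frequency), the lower the premium, which directly incentivises openness.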
The first couple of rounds of insurance might be issued by a venture capital style fund rather than traditional insurance, as a necessary investment in the ecosystem to get the market underway.
There will of course be actuarial exercises involved, but rather than trying to predict everything and more and imposing unnecessarily limiting restrictions (which is not to say there shouldn't be any) on the development process, they'll accept the increased risk to get the feedback from actual daily use bootstrapped.
Once there's a better understanding of the issues that will occur in the real world, traditional insurers will join the fray.
Liability to some extent depends on the jurisdiction. For example, Washington DC will require drivers of autonomous-capable cars to get a special license endorsement and assume legal responsibility for the operation of the vehicle regardless of whether it is in autonomous mode or not. Presumably a driver who incurred legal liability for an accident while in autonomous mode would then turn around and seek damages from the vehicle and automation services provider as a separate action. For a completely autonomous vehicle (eg a bus or delivery van) the victim would likely recover from the corporate operator, who would probably not be using COTS technology anyway (think how UPS has its own trucks, or buses are often built to order for transit agencies).
The federal body (the National Highway Traffic Safety Administration) seems focused primarily on the development of technical standards that will allow for uniform certification. But questions of driver liability will likely be left to the states, as most accidents resulting in lawsuits are tort actions alleging driver negligence, and different states have different rules on both driving and allocation of fault.
wrt 1), I think it's preferable to study each rare accident very closely to prevent it from happening again (as happens in the aviation industry), rather than just being scared of the technology and remaining at the status quo of allowing so many casualties each year because we still allow humans to drive.
It's also one of those scary questions we'd all like a clear answer to before we depend on the technology. However it's exactly the kind of question the courts were invented to answer.
The aviation industry is a fantastic parallel that should be modelled after.
Exactly, once they get these vehicles right accidents will be a pretty rare thing.
Until then it has to be a case of "tough" if you want to sue. The Victorians had the right attitude to future technology and it's something we need to get back to.
No, they had horrible workplace diseases and injuries and a dreadful attitude to health. If your business model includes telling potential litigants to man up, you're not going into space today.
My impression from at least the google cars is that they aren't relying on map data to find signs and signals, though they might update that data from the cars so they can reduce processing time later.
I'd also expect any accident to end up worldwide news simply because it's the first ever of its kind. The same thing happened with the first car accident in the world, which happened in 1891 in Ohio [1].
I'd imagine that #2 will take longer to handle properly, but with the traffic, accident, and road-closure monitoring that goes on now, I'm going to bet it'll at least be kind of decent. I'm actually more worried that it won't be able to follow detours properly and will cause gridlock around construction areas during the transition from human-driven cars to driverless ones.
1) How is this different from cars driven by humans? In my country (no knowledge about other countries), you insure the car, and not the driver. The insurer doesn't care about the driver and shouldn't.
2) I think there should be always an option to drive the car yourself.
I would say that if the technology used in a car is at fault, then the manufacturer of that car will be held liable (just take a look at all the recalls)... later, they can sue whoever created that technology if they feel like it
A friend of mine noted that you could put self driving cars on the road in Italy right now and nobody would notice :-) I agree with the consensus that driverless cars are inevitable. And knowing that they are inevitable is kind of like knowing a train is going to derail on a curve before it does. You get some time to think about what is going to happen next.
In the train case you would do things like get movable property out of the way, for the cars you can think about things like a defensible yard barrier. Some people already do this for drunks, but putting a 12 - 14" 'step up' along the edge of the property that borders the street will stop most out of control passenger vehicles. Laws will get tested and litigated, new ways to analyze risk will be developed, planners will want to think about how they design roads/signage/maintenance around them.
I suspect this is a 'moonshot' technology, which is one where you can demonstrate it in 1969 but can't actually repeat it commercially until 50 years later in 2019.
I'd say that the defining aspect of the original 'moonshot' technology was there was (sadly) no opportunity for commercial gain. Contrast with here, where, for a myriad of reasons, people could easily be convinced to pay substantially more for a driverless car (once safety is well-established).
For the last decade or so, human learner drivers in the UK have had to pass a video "hazard perception test" based on their ability to recognise potential developing hazards (kids playing near the roads, a cyclist confronted by a line of parked cars, a vehicle approaching a junction onto your lane) at a very early point before their movements make their entry into the roadway inevitable.
This strikes me as something which is particularly difficult for an algorithm to process effectively (without generating lots of false positives, which also fails that segment of the test) especially based on the fairly low resolution video human users are presented with in normal test conditions.
Hope they're not going to waive that for the bots, even if they do have 360 degree vision and superior concentration and reaction times.
Couldn't you just as easily reverse that and say: bots have 360-degree vision and superior concentration and reaction times; I hope they're not going to waive that for the humans, even if humans do have the ability to recognise potential hazards at a very early point.
Surely the point is to not kill people?
So which is more important, recognising potential hazards early, or unwavering attention and superior concentration? I'm pretty sure it's the humans who're ahead at the moment, but I'm not sure it'll always stay that way. Bots may never match humans at a hazard perception test, but if bot reaction, vision, and AI get good enough, they may not need to.
If you watch this video you can see that Google's cars do exactly this. You even see the car pick up on hand signals. https://www.youtube.com/watch?v=dk3oc1Hr62g Realistically you would have to modify the software to pass the test for stupid reasons; for example, the vehicle might refuse to function at all without its LIDAR, insisting on slowing to a stop and pulling over, whereas the video wants it to continue driving.
Came here to link to this same video. It would be interesting to watch the Google algorithm run through a similar test (not the actual video, which lacks depth and so won't make a decent model). The chattering classes would be much enlightened.
That's true. I think of turning right at a crosswalk where some pedestrian is really anxious for their turn to walk, so they're standing right on the edge of the curb. I always pay careful attention to them and rely on my human intuition of body language, eye contact, etc., until I feel comfortable that they're going to remain on the edge of the curb instead of stepping out in front of me during my turn. A robot can't do this.
A robot should have much faster reflexes than you do. Robots should be able to compensate for people actively running or diving in front of them. The situation you describe should be very easy to compensate for.
Don't bother relying on body language or eye contact, just automatically sense the person shaped object, if it is close to the roadway, slow to a speed where you can avoid easily if they step out. Assume that they will. Heck, add in a buffer so you don't bother the passengers of the car by having to slam on the brakes.
A robot should always assume the person might suddenly enter the street regardless of body language or other clues and drive at speeds that allow safe stopping if that occurs.
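The "drive at speeds that allow safe stopping" rule has a simple kinematic form: pick the highest speed v such that reaction distance (v · t_react) plus braking distance (v² / 2a) fits inside the gap to a potential pedestrian. Solving v·t + v²/(2a) = d for v gives the closed form below. The 7 m/s² deceleration and 0.2 s reaction time are illustrative assumptions, not real vehicle specs.

```python
import math

# Highest speed at which the car can still stop within gap_m,
# given a reaction time t_react (s) and constant braking
# deceleration decel (m/s^2). Derived from v*t + v^2/(2a) = d.

def max_safe_speed(gap_m, decel=7.0, t_react=0.2):
    if gap_m <= 0:
        return 0.0
    return decel * (-t_react + math.sqrt(t_react**2 + 2 * gap_m / decel))

for gap in (2.0, 10.0, 30.0):
    print(f"{gap:5.1f} m gap -> {max_safe_speed(gap):4.1f} m/s")
```

This also quantifies the grandparent's worry: with a tight 2 m gap the safe speed is walking pace, which is exactly the "handful of miles an hour" trade-off discussed downthread.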
> A robot should always assume the person might suddenly enter the street regardless of body language or other clues and drive at speeds that allow safe stopping if that occurs.
> A person driving should do the same.
At what range? If you assume someone could always jump out in front of you, and that running them down will always be unacceptable, you'll be doing a handful of miles an hour whenever there are humans around. In practice I doubt too many people are in favour of taking safety to that extreme.
The discrepancy is there are ~millions of years of evolution to account for our ability to immediately process and react to unexpected actions around us - whereas self driving cars, doubtlessly, will not be debuted to the public with nearly as much complexity - humans are not designed with affordability and profit in mind. Theoretically, what you're saying is correct, but the practical reality is likely to be a lot different. The best possible implementation isn't likely to be the one which hits the mass market.
Arguably the pedestrian should be comfortable crossing and expect you to wait for them. You could trust a robot to respect the pedestrians' rights and not get angry.
In the context of driving in the UK - that's not how it works. People will cross at a red light right after you pass, and will impatiently wait at the edge of the road, leaning forwards, ready to go. This doesn't have anything to do with respecting pedestrians' rights.
If you turn into a junction and a pedestrian is already crossing, you are required to wait. It will be interesting to see how pedestrians change as a result of this. I would be much less likely to wait for cars if I knew for certain that they would let me cross safely. It may make urban areas more pleasant if cars are consistently safe.
It seems to me that it's clear that driverless cars are the future, which is going to become real very soon. Any progressive government would (and should) allow such testing, even given the inflexibility of the state's bureaucratic machine. Allowing is an easy part. The hard part is actually building those cars and to my knowledge no UK firm does this at the moment.
An autopilot, great. But driverless, as in, no way for anyone to take control? No thanks. There are plenty of places your GPS can't take you the last few hundred metres, nor would you want it to, you want to decide where to park based on a variety of factors.
That occupation will disappear, save for a few high-end human driver services, IMO.
After the taxis come the trucks. I am quite certain shipping companies will be the quickest to ditch humans. After all, this means no more travel expenses and salaries, no more breaks, probably automatic unloading too! Goods will get cheaper, although hundreds of thousands of people will be out of work.
Also, I think the shipping cos will do away with their drivers first. Highway traffic is easier to navigate with an algorithm - nearly trivial if you could compel a retrofit of all vehicles with a low-power radio broadcasting each one's position, so that any autonomous vehicle around it can perceive it via its own sensors, any external GPS/radar, and the other cars' pings. Even without that, it is easier to juggle the variables of a highway (multiple lanes, moving in one direction, merging traffic, maintaining safe stopping distances, etc.) than to navigate intersections and pedestrian traffic. Also, 18-wheelers don't need human interfacing, whereas self-driving taxis will require that licensed non-owners sit in the driver's seat, since for a while nobody will let self-driving cars pilot themselves without someone to take manual control if necessary. Granted, that remains a problem for the 18-wheeler too, but I see pilotless self-driving tractor-trailers coming much sooner than self-driving taxis without a passenger in the driver's seat.
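The "low power radio broadcasting its position" idea can be sketched as a small periodic message plus a headway check on the receiving truck. The field names and the 2-second headway threshold here are assumptions for illustration, not any real V2V standard (real systems like DSRC basic safety messages carry far more fields).

```python
from dataclasses import dataclass

# Toy V2V position ping: each vehicle broadcasts this periodically,
# and a following autonomous truck checks its time headway to the
# vehicle ahead in the same lane.

@dataclass
class PositionPing:
    vehicle_id: str
    lane: int
    position_m: float   # distance along the road
    speed_mps: float

def headway_ok(me: PositionPing, ahead: PositionPing, min_headway_s=2.0):
    """True if the gap to the vehicle ahead gives at least
    min_headway_s of travel time at my current speed."""
    if ahead.lane != me.lane or me.speed_mps <= 0:
        return True
    gap = ahead.position_m - me.position_m
    return gap / me.speed_mps >= min_headway_s

truck = PositionPing("TRK-1", lane=1, position_m=0.0, speed_mps=25.0)
car = PositionPing("CAR-9", lane=1, position_m=40.0, speed_mps=24.0)
print(headway_ok(truck, car))  # 40 m / 25 m/s = 1.6 s < 2 s -> False
```

This is why the comment calls highway driving "nearly trivial" with pings: the decision reduces to simple one-dimensional gap arithmetic, with none of the intersection and pedestrian complexity.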
There might be some jobs created, perhaps not as much.
Consider, what if several shipping companies delegated fuel administration to a company that had manned staff 24/7 at places like Flying J diesel and gas fuel locations?
If you have autonomous trucks coming in, I don't see tons of places upgrading their facilities to support robotic administration. (At least for a few years)
Why have manned gas stations? That's actually pretty easy to automate, since the trucks are stationary and there is already a standard for fuel tank openings. I am 100% sure that 30-40 years from now we are going to have automated trucks and gas stations, with automated payments done from a system that computes routes based on thousands of variables.
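A route-planning system with "thousands of variables" is beyond a comment, but the core of one such decision can be sketched with a single variable: among the fuel stops reachable on the current tank, refuel at the cheapest. Station distances and prices below are made up for illustration.

```python
# Pick a fuel stop for an automated truck: among stations reachable
# within the remaining range, choose the cheapest. Each station is a
# (distance_from_here_km, price_per_litre) pair.

def pick_fuel_stop(range_km, stations):
    reachable = [s for s in stations if s[0] <= range_km]
    if not reachable:
        return None  # no reachable stop: needs human intervention
    return min(reachable, key=lambda s: s[1])

stations = [(80, 1.52), (150, 1.38), (260, 1.29)]
print(pick_fuel_stop(200, stations))  # (150, 1.38): cheapest in range
```

A real dispatcher would add delivery deadlines, queue times at each stop, partial fills, and so on - but they'd all slot in as extra terms in the same reachable-then-minimise structure.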
I seriously doubt that truck drivers are going out of business any time soon. First of all, there's a massive fleet of trucks requiring drivers, and replacement trucks are expensive. Second of all, truck drivers also serve as a security guard. Even if we have self-driving trucks, I imagine they'll have a chaperone for a long time yet.
I expect that trucks driven by a human-computer team will be safer than trucks driven by either one alone.
There would certainly be some upfront cost to making tractor-trailers autonomous but they would save an expensive employee per truck, the trucks would arrive faster and more reliably, and they would save in fuel costs (optimal driving).
They would need to set up a way for the trucks to refuel - just an attendant at the truck stop paid a small amount for refueling them, or maybe a new kind of pump that automatically fuels the trucks when they pull up.
True, it's expensive to replace new trucks. But if it's a lot cheaper in the long term, it would be a serious incentive to push for the removal of drivers. And yeah, I too think that computers will get gradually implemented. Though I am sure the process will be 100% automated in the end.
The theft part is massive. Say you had a truckload of computer parts travelling down the motorway at 3am when it's dead: what's to stop some gang with tire traps? Bam! The vehicle is over and they have 30 minutes to loot before an error is realised.
This is possible in the current configuration, too. In fact, it may be easier to compel a human driver to keep quiet than a networked onboard computer relentlessly feeding speed, position, tire pressure and other telemetry to some sort of remote monitoring facility.
I wonder if self-driving will enable electric trucks? If you had an electric truck with (say) 150 mile range, it wouldn't be cost effective because the driver would have to sit around for hours waiting for the truck to recharge. Much less of a problem if the truck is self-driving.
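The economics behind that comment are simple to sketch: with a paid driver, every charging stop is wage hours; driverless, it costs only time. The wage, charge time, and range figures below are illustrative assumptions, not real fleet numbers.

```python
# Wage cost of charging stops per 1000 km driven, assuming one full
# recharge per full range. With no driver, this cost drops to zero.

def charging_wage_cost_per_1000_km(range_km, charge_hours, wage_per_hour):
    stops = 1000 / range_km          # recharges needed per 1000 km
    return stops * charge_hours * wage_per_hour

# 150-mile (~240 km) range, 2 h per recharge, £20/h driver:
with_driver = charging_wage_cost_per_1000_km(240, 2.0, 20.0)
driverless = charging_wage_cost_per_1000_km(240, 2.0, 0.0)
print(round(with_driver, 2), round(driverless, 2))
```

Even in this crude model the driver's idle time adds a per-distance cost that scales inversely with range, which is exactly why short-range electric trucks look much better without a driver waiting out the recharge.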
I really hope one of them licenses the Johnny Cab name from Total Recall, and the likeness of Don Knotts. It'd be really cool to see something from the movies come to life like that, and it'd probably end up a great way for other companies to sponsor taxis and get more exposure to people.
Did anyone else notice in the video that neither of them was using a seat belt? I love this tech, but I am not sure I have this blind a faith in it.