Cruise is running an autonomous ride hailing service for employees in SF (techcrunch.com)
224 points by edward on Aug 8, 2017 | 175 comments



> Still, Cruise says those drivers have had to take over manual control of vehicles engaged in Cruise Anywhere service only on a few occasions

Not trying to pour cold water on this, but... it's all about those "few occasions". The difference between level 2 and level 4/5 autonomy is vast. Those "few occasions" would presumably have been accidents without the intervention of the safety driver. A small corporate shuttle service that "only had a few accidents" would not be considered a very safe service.

So for a service that really only makes sense at level 4, the reporting really ought to focus on those "few occasions", the stringent safety standard that needs to be met to remove drivers, and the actual progress towards it. That's the real meat of the story.


At the end of 2016 all companies testing autonomous vehicles in California filed their disengagement reports. Waymo reported 1 unplanned disengagement every 5000 miles, and GM reported 1 every 180. Nobody else was even in the running. These metrics are crude, but it's currently the only benchmarking system we've got, and we'll see in December where Cruise is really at when they report their 2017 numbers.
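For a rough sense of the gap, the two reported rates can be put on a common per-1,000-miles basis (a sketch using only the figures cited above):

```python
# Normalizing the 2016 disengagement reports to a common basis
# (miles-per-disengagement figures as cited in the comment above).
reports = {"Waymo": 5000, "GM/Cruise": 180}  # miles per unplanned disengagement

for company, miles in reports.items():
    per_1000 = 1000 / miles
    print(f"{company}: {per_1000:.1f} unplanned disengagements per 1,000 miles")
# Waymo: 0.2, GM/Cruise: 5.6 -- roughly a 28x gap on this crude metric
```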

Cruise does distinguish itself by driving mostly on busy downtown SF streets, a challenging environment, and they've released impressive videos showing off their capabilities.

A few weeks ago Kyle Vogt claimed Cruise would surpass Waymo "in a few months". Whether they do remains to be seen, but they are making sustained progress and they have the full force of General Motors behind them.

Unlike Waymo, Tesla's Autopilot program, and Uber's autonomous driving program, Cruise's operation has been drama free, without any lawsuits, leadership blowouts, or departures of key people. This implies they've got a solid team under good leadership, and I think it matters a lot with complex software projects like this.


Waymo (Google) is not that far from level IV. The average U.S. driver has one accident roughly every 165,000 miles, and a disengagement is not 1:1 with a crash. I suspect drunk drivers would probably be safer handing over control, which could make a huge difference.

Most interesting is the breakdown of disengagements by location (page 9): https://www.dmv.ca.gov/portal/wcm/connect/946b3502-c959-4e3b...

  Interstate 0
  Freeway 0
  Highway 12
  Street 112
Which might be good or bad depending on the relative amount of driving in each situation.

PS: Disengagements per 1,000 miles also dropped from 0.8 (2015) to 0.2 (2016) which suggests very good things.

PPS: Been getting a lot of upvotes and downvotes; I wonder what people agree / disagree with?


Sounds like you're assuming level 4 merely means parity or better with human error. Lots of problems with that. For instance if it's Google's negligent bug that kills the family of 4 instead of a driver, Google gets sued. Google has much deeper pockets than the driver who would typically declare bankruptcy, effectively capping the damages. No real cap for Google, every time. So that explodes the cost of autonomous vehicles. And then there's the matter of potential criminal liability in the case of gross negligence. So there's going to be a much higher bar for transitioning to level 4.

Also, while there may be fewer of some types of accidents, there will be lawsuits for a potentially very large set of other types that never used to get litigated. For example today when a driver causes their own accident, nobody gets sued.


Google, however, does have the ability to hand over the cost of insuring such a system to the consumer, so even at current accident rates the cost is around 100 dollars a month per car using such a system (driving 12,000 miles in a year). This equates to something like a cent per mile. So even if they want incredibly high quality insurance, charging an extra 5 cents per mile won't make a difference in the scheme of things, provided they can get the overall cost of commuting 15 miles (the average American one-way commute) to something under 5 dollars.
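Taking the comment's per-mile figures at face value (the ~1 cent baseline and the hypothetical 5-cent surcharge are the parent's assumptions, not actuarial data), the insurance cost of the average one-way commute stays small:

```python
# Insurance cost of an average one-way US commute under the
# parent comment's assumed per-mile rates (not real actuarial data).
COMMUTE_MILES = 15  # average one-way US commute, per the comment above

for label, rate_per_mile in [("baseline (~1c/mile)", 0.01),
                             ("with extra 5c/mile", 0.06)]:
    cost = rate_per_mile * COMMUTE_MILES
    print(f"{label}: ${cost:.2f} per commute")
# $0.15 vs. $0.90: either way, small against a ~$5 trip budget
```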


This aspect of insurance is widely misunderstood. First of all an extra 4 cents/mile just for insurance is really expensive. But wrongful death and personal injury lawsuits can result in very large judgements, especially when we get to punitive damages for a large corporation. Individual offenders regularly go bankrupt. It may be wishful thinking that this can be easily solved with "insurance". There is also the criminal negligence aspect to consider.


4 cents/mile is roughly in line with rates for traditional auto insurance today. And as these rapid improvements are likely to continue, these costs will quickly come down.

Sure you can imagine some class action / criminal legal disaster but the auto industry has survived repeated scandals. Corporate criminal negligence is extremely rare.

In five or ten years this technology could be saving 100,000 lives every year. I hope we are not so risk averse that these benefits are needlessly delayed.

http://www.insurance.com/auto-insurance/auto-insurance-basic...


Medical errors kill on the order of 700 people per day in the US (high error bars on that number though). And yes, insurance is a major cost in the medical field, but even large scale negligent deaths are regularly dealt with by the US legal system.

Yes, anyone putting out such a system is going to get sued, but so does every car company that exists today. It's just the cost of doing business and ultimately just gets passed on to consumers.


You're correct. The question is what cost gets passed on to consumers. If it is something on the order of the cost of medical malpractice insurance and average settlements, well, wow. That would make AVs prohibitively expensive.

More likely the bar for level 4 will just be significantly higher than parity with human error, if only for liability reasons.


"In 2010, there were 1.1 fatal crashes per 100 million truck miles" https://www.truckdrivingjobs.com/faq/truck-driving-accidents... Truckers make ~27c/mile.

So, at a similar rate to human drivers, that's ~$24 million* in saved wages per fatal crash. Plus presumably higher truck utilization, lower management costs, etc. Don't forget the driver is often not at fault, and automated trucks should have plenty of footage of accidents to demonstrate who was at fault, so it's a fraction of both fatal and non-fatal accidents, with non-fatal accidents generally having lower associated costs. Also, and most importantly, in many accidents associated with bad weather only the truck is involved, and more generally the driver's medical bills etc. do not need to be covered if there is no driver.

Further, technology is unlikely to degrade over time so they just need a profitable starting point. Also, if the self drive button cost ~25c / mile there are plenty of times I would hit it.

*Granted, there will be labor costs linked with automated trucks, but a 95% savings seems likely.
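The ~$24 million figure above follows directly from the cited crash rate and wage (both figures are the comment's assumptions, not industry data):

```python
# Wages saved per fatal crash, from the figures cited above.
FATAL_CRASHES_PER_100M_MILES = 1.1  # 2010 truck fatality rate (cited)
WAGE_PER_MILE = 0.27                # typical trucker pay, dollars/mile

miles_per_fatal_crash = 100e6 / FATAL_CRASHES_PER_100M_MILES
wages_saved = miles_per_fatal_crash * WAGE_PER_MILE
print(f"~${wages_saved / 1e6:.1f} million in wages per fatal crash")  # ~$24.5 million
```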


> if the self drive button cost ~25c / mile there are plenty of times I would hit it

25c/mile just for the insurance surcharge, you mean? Sounds like we're in agreement that mere parity with human error would be expensive from a liability perspective.


I don't think the average fatal accident is going to cost even 5 million; I am just saying that even if it cost 20 million, that's still not a deal killer. Put in other terms: valet parking is $5+, but "go park" is ~5 cents. At ~15 mph in city / stop-and-go traffic, that's $3.75/hour to not pay attention.
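The $3.75/hour works out if the 15 is read as an average speed in stop-and-go traffic (an assumption) at the ~25c/mile rate from upthread:

```python
# Hourly cost of the hypothetical "self drive" button in city traffic.
COST_PER_MILE = 0.25  # ~25c/mile, from the comment upthread
SPEED_MPH = 15        # assumed average stop-and-go speed

print(f"${COST_PER_MILE * SPEED_MPH:.2f}/hour to not pay attention")  # $3.75/hour
```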

Hell, going to sleep and waking up in another city 500-1000 miles away without going to the airport or needing a rental is a major benefit even if you're not saving money.


Based on their own reporting, hasn't Waymo been safer than the average driver for a few years now?


The real question is, what routes are being followed and in what conditions? Presumably Waymo et al stick to exquisitely-mapped routes with excellent road markings between well-known autonomous-friendly endpoints during good daytime weather for the vast majority of their testing. Performance in less-than-ideal conditions is a more important question. I'm curious if there's any publicly available data on the routes followed, times, conditions, etc.


Anecdotally I see them driving like snails on Valencia Street and elsewhere in the Mission day and night. It's very well mapped, but one of the more challenging areas to navigate in America.

I've never seen them pulled over in the bike lane on Valencia like every other Uber/Lyft, which is a good thing, but picking up and dropping off is far from autonomous-friendly in that neighborhood.

Of course, sometimes it's obvious a person is in control, and presumably they're generating training data.


I don't know why you're getting downvotes.

Does anyone know off-hand, how much testing have autonomous vehicles had at night (as in, full dark/need to use headlights).


Cruise has been testing cars 24 hours a day, 7 days a week for quite some time. Here's a few hours of fully autonomous night driving: https://www.youtube.com/watch?v=6tA_VvHP0-s


There was actually a dispute about a potential Cruise cofounder getting compensation for equity at the acquisition. They recently settled the lawsuit out of court [1]. I agree they have good leadership, but they're not without any lawsuits.

[1] http://www.businessinsider.com/car-startup-cruise-settles-le...


>> Cruise does distinguish itself by driving mostly on busy downtown SF streets, a challenging environment, and they've released impressive videos showing off their capabilities.

I don't doubt that it's a challenging environment, but there are many, many different kinds of challenges that are not even present in that city.


True. But city driving is all they need to offer service in the most potentially profitable, geofenced areas.


Driving in SF is actually quite challenging. It is quite congested with cars as well as pedestrians and cyclists. Due to lax enforcement, many traffic laws are routinely ignored, particularly by cyclists, pedestrians, and transit vehicles. On top of this there is lots of construction, poorly maintained roads, and frequent traffic flow changes.

Not sure what you are imagining but SF is free of winter driving conditions. Other than that it is as challenging as any city in the US, Europe or Japan.


Is disengagement per X miles really a good metric? It doesn't reflect the difference between driving in different environments, i.e., dense urban areas, suburban streets, and open highways.


Yes, it's a crude metric. The other thing is disengagements are self reported, and an unplanned disengagement doesn't necessarily mean a crash was narrowly avoided, it may be an overly cautious test driver, or the result of some non-safety critical intervention.

After the core software problems have been solved, the key challenge shifts to discovering edge cases, which become progressively more marginal as the software advances, and the only way to find them is through thousands of miles of driving, waiting for unexpected situations to show up.


I'm not sure what those "few occasions" were, but I think there's another category of problem that these could fall into other than would-be accidents: If the car over-cautiously decided it didn't know what to do, came to a safe stop, and threw on the 4-ways while it waited for human input, that's not as terrible.


I was driving home a few months ago in the rain on Division in SF. Cars were stopped, and I immediately couldn't figure out why. When I passed the stopped vehicles I got my answer: there was seemingly a wall of water coming down from the freeway overhead and an autonomous car was stopped as it apparently didn't know how to handle the situation.


Couldn't they have something like a real human remote driver that can intervene, but also oversee multiple cars?


Would need incredibly low latency for that to be safe, I think. FPS games use a pile of tricks to send minimal data over the network which wouldn't be available to an autonomous car: we don't have access to the game engine, after all...


You wouldn't automatically need low latency. In a situation like the above one, the car could pull over and wait for the control center to tell it whether to drive on through the water or not.

Not quite the same problem but the Starship delivery vehicles work a bit like that - remote link to a control center. http://uk.businessinsider.com/doordash-delivery-robots-stars...


That would require the car to have the ability to override its tendency to not drive thru physical obstacles. Remote control to say "go into that lane", "turn left" is one thing, but remote control to drive thru what it perceives as a physical object is different.


It would need to be very low latency if the cars can not take sufficient action fast enough to be safe. But if the cars get to a point where they are safe enough but where that safety comes at the expense of being overly cautious in some instances (refusing to drive through a sheet of water and deciding to stop) that may be acceptable. E.g. in the given example, the car only needs input to say pretty much "safe to proceed at low speed until past obstacle, then reassess".


OnLive allowed gaming over a video stream with minimal delay. ffmpeg[1] works too.

[1] http://fomori.org/blog/?p=1213


At least one company is doing exactly that with trucks, where the last mile is remotely controlled. This was on HN a number of months ago:

http://fortune.com/2017/02/28/starsky-self-driving-truck-sta...


Does anyone here know if California driving law allows for remotely-operated vehicles like that?


Hopefully not, at least not yet. That doesn't sound nearly as safe, or as accountable— you just wouldn't have the same situational awareness looking at sensor feeds vs. looking out the actual windows.


Except when that isn't a safe thing to do.


That's a fair point, although those would still constitute breakdowns in a level 4. Further reason for the reporting to focus on these incidents.


I've seen ~10 Cruise cars around Mission Dolores and Soma in SF. 100% of the time the driver has his hands on the wheel and appears to be making the turns himself. I'm skeptical of how automated this service actually is.

Maybe those are training runs or the drivers are letting the wheel guide them.


Our drivers always keep their hands on the steering wheel, even while in autonomous mode. There isn't really any way you could tell externally whether the car is engaged or not, other than some of the nuances of our driving behavior.

Source: I work at Cruise


I believe current regulations require the test driver to have her or his hands on the wheel at all times so that's not a good indication of whether or not the car is driving autonomously.


I have made the same observation in the SOMA area. Not even once did I see the car being driven autonomously.


[Deleted]


> Those "few occasions" would presumably be accidents without the intervention of the safety driver.

You don't know that. Doesn't the car simply slowly stop if it doesn't know what to do?

> A small corporate shuttle service that "only had a few accidents" would not be considered a very safe service.

As soon as it's safer than a human-operated one, it's already good enough - there's literally no safer option.


It's not about not knowing what to do, it's about not recognizing things that threaten safe travel.


> Those "few occasions" would presumably be accidents without the intervention of the safety driver.

I'm not sure that's a safe presumption. The car could have done the right thing but not been confident enough in the planned course of action. Still not great, but not a presumed accident either.


San Francisco would be one of the harder cities to make self-driving work: bad light, heavy traffic, bicycles, poor lane markings, spotty GPS downtown, etc.

I haven't been to Phoenix, so I can't comment on the difficulty of running a self-driving car there, but I would guess it's easier than SF.


All the problems you listed are going to be issues with most major cities. I would hope that self driving companies are testing their tech outside the SV bubble because the rest of us have to deal with rain, snow, sleet, high winds, and other weather that causes low visibility and low traction.


I'm not involved in autonomous vehicles at all but as an engineer, I'm pretty confident we are a long way away from level 4/5.

Without significantly reworking current infrastructure, the challenges presented by city driving are too fuzzy to guarantee safety. For example, traffic lights can be obscured, out of order, or spoofed; unless they are modified to support AVs, I don't think this is solvable.

Although autonomous vehicles will require a safety driver for a while, interim benefits still exist and services like this will potentially provide the momentum needed to make those infrastructure changes.


> Without significantly reworking current infrastructure the challenges presented by city driving are too fuzzy to guarantee safety.

But, there's no guarantee of safety now. Human drivers are fallible and fail all the time.

A guarantee of safety is a crazy metric to require before we can allow autonomous vehicles. A better metric: the vehicle is safer than the median human driver. Once it hits that level of safety, it should be on the streets.


> For example, traffic lights can be obscured, out of order, or spoofed

Wouldn't that also potentially impact human drivers?


Hi HN, engineer at Cruise here. I just wanted to point out that most people are surprised when I tell them we're hiring a ton of backend/full stack/android engineering positions (mostly go/node/react), and you don't need prior robotics or machine learning experience to apply (although we like those too).

So, if you're interested in working on AVs or the systems that support them, please take a look at our careers page: https://getcruise.com/careers (or feel free to reach out to me via HN profile).


Hi,

I applied on the website. I have a solid background in embedded systems and C++ but I immediately received a rejection email. I wish I could have chatted with an engineer first :-(


Sounds like there was some disqualifying factor you're not aware of. (Citizenship, willingness to relocate, whatever.) No one sends out immediate rejection emails to experienced engineers unless they really have to.


I was open to all those things.


Just giving examples; I have no idea what the actual reason was. My overall point is that you shouldn't take it as an indication for your engineering ability; an instant rejection means that wasn't the thing.


Hopefully that scrolling bug on mobile will get fixed soon, as I can't scroll the job descriptions; it scrolls the header bar instead. iOS 10, Safari, iPhone 6S, content blockers unloaded.

Pity, too, as I would think there'd be some interest in a previously ASE-certified mechanic with a lot of software experience.


Do you know if Cruise will be looking for interns next Summer? It's a hugely interesting field to me and I'd love to get my feet wet prior to graduating.


Yes! I think we're kicking off 2018 intern hiring in the next few weeks. Keep an eye on the website :)


That they are dogfooding a private beta makes me think that this has leapfrogged what Google had been shooting for previously, and where Uber is now. I don't know what other options there are on the ground as I'm not based there.

It does seem though that there's a real opportunity for GM to sneak past Uber and Google into a first-mover advantage with this.

I'd no idea they were anywhere near this close to a working AV fleet.


Waymo had no business plan, like up until they hired Krafcik they had put no serious thought into how they would commercialize their technology. Through what appears to be arrogance they bungled a promising partnership with Ford that seriously compromised both Ford's and Waymo's chances in this race and ultimately led to the firing of Ford CEO Mark Fields.

http://www.autonews.com/article/20170529/OEM/170529795/googl...

General Motors on the other hand has played their cards brilliantly, and I'm baffled as to how Cruise has managed to make such steady and thorough progress in such a short amount of time. GM is the most futuristic car company out there, and at some point in the not too distant future they won't be so easy to dismiss. It doesn't fit the narrative. Shared, electric, autonomous: there's only one company that has all 3 of these on lockdown and it isn't the company everyone talks about.


Waymo is doing this with real, public customers (albeit hand-selected ones) in Phoenix[0]. Uber has been doing this for months with customers in Pittsburgh[1]. This is certainly a sign that Cruise is making good progress, but I wouldn't say it means they've leapfrogged the competition by any means.

[0] https://waymo.com/apply/ [1] https://www.uber.com/blog/pittsburgh/pittsburgh-self-driving...


The Uber cars are absolutely everywhere in Pittsburgh. I probably see an average of two on a one-way daily commute.


And Pittsburgh is a far more challenging driving environment than either San Francisco or Phoenix, with far more diverse weather and a lot messier infrastructure due to terrain, freeze-thaw cycles, and reduced maintenance after population decline: think 20% grade on cobbles after a snowstorm.

e.g. https://www.google.com/maps/@40.4763879,-80.0106331,3a,75y,2...


Does Uber's autonomous service go to such locations?


Good question— their office is at the end of a crumbling backstreet, at least: https://www.google.com/maps/@40.4627591,-79.9701442,3a,75y,3...


Google had a private dogfood beta in 2014; I remember being intrigued by it but I wasn't eligible because I'd just moved farther away and my commute now included highway. (It was for surface streets within 5 miles of the Googleplex only.)

Not sure why it's taken so long since... my guess is that they're being super cautious on the regulatory front and also might not have a go-to-market strategy.


Volvo has a hundred self-driving cars in test in Gothenburg, Sweden. They're driven by selected Volvo customers. They're only self driving on a few freeways, but they don't have a "safety driver", and the customers are not required or expected to pay attention while in automatic mode.


For those nay-sayers pointing out the incidents where the cars had to have manual control taken, I think you're missing the forest for the trees.

When there's an incident of a car needing a human to take control, what happens next? Cruise will record the incident, and add it to the training data set. Now, the system will learn what it should have done instead and that same kind of incident becomes less likely to occur.

I think this isn't about beta testing the product. I think this is primarily about gathering those failure data points. More cars driving means more miles driven, more data collected, leading to a consumer ready product faster.

And if it happens to give the employees a neat benefit, all the better.


Paraphrasing Tolstoy: every accident is different in its own way. I see that approach as being as unscalable as current antivirus companies' blacklists. Making a blacklist of every possible accident condition is as foolish as making a blacklist of every possible harmful Turing-complete program.


On the contrary, there are more common types of accidents and learning from those will prevent future incidents with some probability.


Do you have any evidence to support your claim that "every accident is different in its own way?"


When you learned to drive, was there a big book of every accident to avoid? Or did you slowly learn the overall patterns, from experience and near-misses?

I'm being facetious, but you get the idea.


One big difference between Cruise and the rest is that the CEO has built a self driving car from scratch.

Because of the inherent path dependency in hybrid systems (i.e., software written for one car will most certainly not work on other hardware configurations), I believe having technical knowledge in the leadership makes a huge difference in the long run.

The presence of "old world" tech managers is going to hurt any AV project, because building ML systems is very different due to the implicit empiricism and randomness inherent in the process and the systems.


Which self-driving car companies does this put Cruise ahead of? (Which companies have CEOs that don't have experience building self-driving cars?)


The Waymo CEO's experience is from Hyundai and a company called True Car.

Uber doesn't lack a CEO without self-driving car experience. It lacks a CEO, period.

The Volvo CEO's experience is in sales.


my unpopular personal opinion (strictly as a casual observer): for the next ~20 years, self-driving will only be commercially viable as an augmented safety technology in single-owner vehicles (collision avoidance, lane keeping, etc.) before we come close to reaching full autonomy.

What we have currently (maybe four or five 9s of reliability in perfect conditions) is not nearly good enough for autonomous car sharing. If the car breaks down or is in a collision, what is the passenger going to do? Drive it to the shoulder themselves and get out? How far are we from autonomous systems harmoniously and safely coexisting with the most erratic human drivers/bikers/pedestrians? How far from handling adverse weather conditions? How will we finance 100 million $80,000 autonomous vehicles?

Automated collision mitigation is already in many new cars, and the data on accident rates is just materializing. Toyota preemptively put its radar/camera-based system in ALL of their cars for 2017. It's rumored that the NHTSA will mandate it for all manufacturers in the near future. The gradual progression is that these systems will add features incrementally and gain economies of scale until, one day in the far, far future, driver input will be minimal. Only then will autonomous ride sharing become viable.


Human drivers are pretty terrible. I'd imagine if it breaks down, then the same breakdown cover everyone has at the moment will kick in.


> Human drivers are pretty terrible.

I beg to differ. The average sober human driver has 1 fatal accident in 120-300 million miles traveled, depending on where they live in North America.

Given that we are bags of meat and water, have reaction times measured in hundreds of milliseconds, were never designed to operate heavy machinery, and manage to achieve all that on a mere 100 watts, that's pretty amazing.


Where are you getting that data? [1] claims that the rate was 1.27 deaths per billion miles in 2008, and 1.25 in 2016.

What do you consider the threshold for human drivers being good or bad? Even if we could get to 1 death per billion miles (with human drivers), I'd consider that pretty bad.

Regardless, if an autonomous driving system can do better, that's all that really matters. We're not there yet, but it's not inconceivable that we could be before too long.

[1] https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...


The death rate varies widely from region to region. The article you linked cites Ontario, with its 3 deaths/1 billion miles. Denmark has 1 death/1 billion miles. South Carolina is a graveyard, with 12 deaths/1 billion miles.

Also, ~40% of driving deaths (Again, with significant regional variation) are due to alcohol use.

I'd consider 1 death per billion miles pretty good. That's one fatal mistake in 25 million hours of driving. That's a lot of hours.
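The "25 million hours" conversion assumes an average travel speed of about 40 mph:

```python
# One death per billion miles, converted to hours of driving.
MILES_PER_DEATH = 1e9
AVG_SPEED_MPH = 40  # assumed average travel speed

hours_per_death = MILES_PER_DEATH / AVG_SPEED_MPH
print(f"{hours_per_death / 1e6:.0f} million hours per fatal mistake")  # 25
```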

> Regardless, if an autonomous driving system can do better, that's all that really matters. We're not there yet, but it's not inconceivable that we could be before too long.

Agreed.


Agreed. If human drivers were actually terrible, nobody would ever drive; the risk would simply be too high.


- An autonomous vehicle could be transporting someone not qualified to drive.

- It's hard to draft paying, passive customers into an accident mitigation or resolution role.


Can't believe I'm saying this... but what an acquisition for GM. Masterful potential future pivot


GM is killing it right now. Bolt is the first truly great EV ever built at an "affordable" price. Model 3 is getting all the hype now, but when it's all said and done GM will have beaten them to market by a year with an equally comparable or better product at the same price point.


>"Bolt is the first truly great EV ever built at an "affordable" price."

I would be curious to hear your opinion on what things specifically make the Bolt a great EV - features, comparative specs, etc.


>I would be curious to hear your opinion on what things specifically make the Bolt a great EV - features, comparative specs, etc.

It's the first mass produced, purpose designed, non-luxury EV available in the US with:

- Greater than 200 miles range

- Active thermal management for the battery (essential for battery longevity and extended use with quick charging).

- Full seating for 5

- Top IIHS safety rating

- Standard level 2 autonomy

- Cargo capacity comparable to a regular hatchback

At a price that just barely starts to make sense for the average driver when you factor in fuel savings and tax subsidies.


Thanks, that certainly is impressive. I didn't understand your last comment:

>"At a price that just barely starts to make sense for the average driver when you factor in fuel savings and tax subsidies."

At $36K and just using $5K as an arbitrary tax subsidy number isn't it the same price as most US mini SUV style cars?


Hatchbacks are not mini-SUVs, and many people can't afford to own a mini-SUV. $36,000 with no trims is pricy for a new car.

I've been adamant that owning an EV would be a non-starter for me, but having seen the Bolt, if I were shopping for a new car, I'd buy one in a heartbeat.


I didn't say that hatchbacks were SUVs, it was simply a comp as the sedan market is giving way to the compact SUV in the US.

$36K is pretty close to average car price in the US now:

https://www.edmunds.com/about/press/average-vehicle-transact...

I think you are right that there are many people who can't afford a $35K car which is why leasing is seeing such an uptick:

https://www.edmunds.com/about/press/automotive-lease-volume-...

The compact SUV is one of the best-selling cars this year in the US, and at ~$20K it's not actually that pricey compared to the average car price in the USA of $34K.

And the compact SUV is looking like it will dislodge the sedan's long dominance. See:

https://seekingalpha.com/article/4064632-new-u-s-car-sales-k...

and

https://www.usatoday.com/story/money/cars/2017/05/01/why-nis...


>$36K is pretty close to average car price in the US now:

You have to keep in mind that $36k 'average' number is highly inflated by luxury purchases. Most people end up spending closer to $20k. I think when they hit below that magical $19,995 number we will see massive adoption.


Yeah, that's a mean, not a median, and is therefore the wrong average to use for this purpose, which calls for the median (or even the mode) in preference to the mean.


I wonder if it's a good idea to accelerate the progress towards a future that they may not be successful in. If all this is as disruptive as many people think then they may well be bringing forward their own demise.


Automakers are likely in a precarious position that they are just beginning to realize: If car-sharing really takes off (due to automation making it cheaper to hail on demand rather than owning a vehicle), they could see sales drop off by orders of magnitude. Glad to see that some of them realize this; though as a resident of western Michigan (home to many automotive suppliers), I am concerned about the potential impact up the supply chain.


I am skeptical. It sounds way too perfect, like a story manufactured for PR. If it really is that way, then great.

What is the reputation of Cruise in the self-driving community?


The only reason I paid any attention was because of an impressive demo they put out a few months back. https://youtu.be/6tA_VvHP0-s Still, who knows at this point.


Neat, but I've yet to see a demo of a self-driving vehicle in rain so hard that drivers slow down from 65 mph to 45 mph, the wipers at their fastest setting are struggling to keep up, and when you drive through a puddle, the splash from the water obscures the windshield for a couple of seconds. Conditions with such low visibility from the rain coming down so hard that you wish you had a clear view screen (also called a spin window) [1], usually found on ships and machine tools that use lots of liquid. Humans regularly drive through this kind of rain, though not well. If a self-driving platform relies exclusively upon LIDAR, my understanding is that seeing through this much rain requires a lot more processing.

The rain demos I've seen [2] [3] [4] are all in wet weather, but not this kind of heavy rain, where a promise of self-driving technology saving lives and property can be more assuredly realized. If anyone in the field knows of one of the contenders already working on training in such poor conditions, then please post a link to a demo if one is available.

[1] https://en.wikipedia.org/wiki/Clear_view_screen

[2] https://www.youtube.com/watch?v=xs9Gr9V2mOE

[3] https://www.youtube.com/watch?v=nvlazzsroCA

[4] https://www.youtube.com/watch?v=GMvgtPN2IBU


If conditions are so bad that it's unsafe for humans or robots to drive, then refusing to drive in such conditions is the correct behaviour.


IMO, the real answer is that no one should be driving in that kind of weather, and autonomous systems which refuse to do so are simply doing what humans should be doing.

I'd be very content with an autonomous vehicle that would detect these conditions and pull to the side of the road to wait for things to improve before proceeding.


These conditions can come on very quickly, so even if your solution is to pull over and stop, you'll still need to find a way to do it safely.

And if you're somewhere this is common (e.g. Florida), and there are still human drivers, you'll have to account for them not slowing down at all.

If you're in an area where flash flooding can happen, waiting may be a more dangerous option. Or you could be in a place where there is no place to stop.


People should be driving in that kind of weather. They just shouldn't be driving at or above the speed limit in it.


Self-driving cars will slowly (or quickly) make car ownership redundant.

Why own a car (pay taxes, insurance, parking), when you can have one at your door in a couple of minutes ? When you arrive at the destination, just exit the car - no need to look for parking or anything like that, let the car figure it out.

If technology can do that, then owning a car becomes more of a hobby than necessity. I hope GM's strategy is aligned with this, as the profits will not come from selling cars, but from providing the service.


> Why own a car (pay taxes, insurance, parking), when you can have one at your door in a couple of minutes ?

Because when I want to use one to go to the mountains or the beach on a holiday weekend there will be none available or surge pricing? This is already the case to some extent with rentals and car sharing services.

I own a car in NYC that is only used on the weekends. Financially this is a somewhat dubious proposition compared to Zipcar, but it's a win for me in two ways: it's always available without any reservation process, and my financial incentive is to use it more often since I've already paid all of the fixed costs (tax, insurance, parking) of ownership. I like this incentive since I enjoy getting out of the city. With Zipcar the annualized costs are fairly similar to owning a car and using it a couple of weekends per month, but a simple weekend getaway comes with a $300+ pricetag up front for the car rental.


> Because when I want to use one to go to the mountains or the beach on a holiday weekend there will be none available or surge pricing?

Some people will still own cars. But owning a car will shift from being a necessity (as it is for people today in areas without adequate transit systems) to a luxury.


This is a pretty solvable problem from a car service's POV: overprovision your fleet to account for people who want to take a car for a full day or even a week, and then charge significantly less than ZipCar but more than a local ride-hail (you could even change pricing plans so you pay by the hour/day instead of by the mile). Because your fixed vehicle costs are amortized over a much larger number of customers than ZipCar and your car can get itself from where there's supply to where there's demand, you can eliminate the reservation aspect and charge significantly less.

On an economic level, the occasional person who wants to reserve a car for a weekend uses up a much smaller fraction of a car than one who owns one outright.
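A back-of-envelope sketch of that amortization argument (the 48-hour weekend trip and 30-day month are my own placeholder assumptions):

```python
HOURS_PER_MONTH = 30 * 24  # 720

def capacity_share(hours_reserved: float) -> float:
    """Fraction of one vehicle-month this user ties up."""
    return hours_reserved / HOURS_PER_MONTH

owner_share = capacity_share(HOURS_PER_MONTH)  # owns it outright: 1.0
weekender_share = capacity_share(48)           # one 2-day getaway a month

print(owner_share)                   # 1.0
print(round(1 / weekender_share))    # ~15 weekenders can share one car
```

Even ignoring that real demand clusters on the same holiday weekends, the fixed costs of one car get spread over an order of magnitude more weekend-only customers than owner-drivers.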


But the economics will certainly change in the future. Self-driving cars can be used around the clock and probably be used to deliver packages and food overnight. Maybe even empty your garbage or deliver batteries as a cheaper(?) alternative to your power company. Also, scale and competition could get much bigger.


I know you were asking rhetorically, but there are still reasons to own a vehicle, even in a world with L5 autonomy.

A pretty big thing you can do is keep stuff in there. This is perhaps a nice-to-have for many people, but for certain use-cases it's mandatory (ie a general contractor who keeps their tools in their truck.)

There's also the idea of doing clandestine things[1] in your car which you don't want some data vacuum to pick up.

Some people like racing, and/or doing work on their engines as well.

And of course, status symbols remain "in", this year and likely for the foreseeable future.

L5 autonomy does away with the "typical" use-cases of personally owned vehicles (ie, move a person and maybe a few light goods a short to medium distance), which is great. Not everyone owns their vehicle purely as a point-to-point human transporter, though.

======

[1]: (Like planning surprise parties) | https://www.youtube.com/watch?v=bu5b_jYWVcQ&feature=youtu.be


I used to think this, and I still think it'll happen in urban centers, but there are pretty big swaths of rural or just low density areas that wouldn't be well served by this, so it'd take a while (a couple decades extra maybe) to get local rideshare hubs going, and in some more remote places it maybe would never happen.


This ^ Exactly this. Now if every person in a rural area sold their car back to the manufacturer and then those cars were used to service those rural areas, then there is a possibility. But cars will still be purchased because of rural areas. I live 120 miles from my job. My neighbor is a mile down from me. If I want to drive to the nearest town, where a car service would most likely be located, I'm not waiting the 30 minutes for it to pick me up. And it doesn't make financial sense for them to have one closer to my house.


If you don't drive, waiting 30 minutes for a robotaxi to show up is vastly better than having to rely on someone else to ferry you around or being stuck at home.


While density obviously improves the efficiency of almost any transportation system, I don't understand why driverless vehicles would ever be less efficient than traditional cars. Sure, they will initially cost more due to additional sensors and computers, but there will also be savings from removing unneeded human controls. Lastly, accident/fatality rates are several times higher in rural areas, so the potential safety gains are larger.


What's that economic phenomenon where a decrease in the cost of something results in an increase in the overall amount spent on it?


Once they are safe and reliable, autonomous vehicles will quickly replace the subsidized demand response bus services that run in rural areas.

(drivers are a big expense for such services)


You can still do that with Uber. Why own a car today?


You have to pay a human to take you. Humans are expensive.


and smelly, and talk too much, and the biggest one of all: suck at driving.


The human part is pretty cheap compared to the other parts at least here in Toronto:

Car insurance $220 (20h at minimum wage)

Car payment $780 (70h at minimum wage)

Gas $200 (18h at minimum wage)

Parking $250 (22h at minimum wage)

If I didn't enjoy driving and having a nice car, it would already be cheaper to Uber everywhere.
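Putting the parent's Toronto numbers (CAD/month) together; the average Uber fare is my own placeholder assumption, not from the thread:

```python
# Monthly fixed costs of ownership, per the parent comment (CAD)
ownership = {
    "insurance": 220,
    "car_payment": 780,
    "gas": 200,
    "parking": 250,
}
monthly_ownership = sum(ownership.values())  # 1450

avg_uber_fare = 15  # assumed average fare per trip
breakeven_trips = monthly_ownership / avg_uber_fare

print(monthly_ownership)        # 1450
print(round(breakeven_trips))   # ~97 trips/month before owning wins out
```

So at an assumed $15 a ride, you'd need roughly three trips a day, every day, before ownership breaks even, which is the parent's point.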


A $780/mo payment over 6 years pays for a $60K car. A more realistic lease payment is $200-$250/mo.

Average insurance in the US is ~$100/mo.


Maybe I'm getting confused due to currency changes, but I think you could certainly divide all of those numbers by two in many American markets.


This is the right answer


Because humans quickly change your rider review from 5 stars to one when you don't tip.


Drivers have to rate you before they see your tip unless you give a cash tip.


There have been reports of drivers changing ratings retroactively after seeing their own rating drop, or if a customer doesn't tip.


Autonomous cars should be cheaper than Uber eventually. Additionally, I imagine they will be more reliable, which is something you can't necessarily guarantee with a random Uber driver.


> It’s available across all of the mapped area of San Francisco where the test fleet operates, and the app works like any ride-sharing experience, mapping ride requesters with available cars.

What area is that? Isn't all of San Francisco "mapped" 100 times over by this point?

If the "mapped area" isn't all of San Francisco, does this speak to a limitation of their tech or a regulatory/deployment issue?


They may be creating their own highly detailed maps. In that case it would be a limitation of how much effort they have invested in generating the maps.


I'm not an expert, but the more detailed your maps, the higher chance that they are inaccurate. Details of SF streets change regularly.

If they don't have good coverage of SF now, it's not just a fixed effort to make it more detailed. It's a continuous effort.

Also there is no map of detours, etc. I had thought that many self-driving algorithms were not relying as much on maps for this reason, but I could be wrong.

At least Elon Musk said that humans don't need a global map to drive. They just need two cameras pointing out the front windshield. Global maps help to navigate, but they don't necessarily help you drive.


Yeah, they are all pretty tight lipped about exactly how map dependent they are. But anytime they start talking about limited operating areas it is an obvious question to ask.

And it is even possible that operating in areas with detailed maps is simply more valuable (if the map improves their ability to evaluate the performance of other systems).


Yeah, they have their own in house maps team.


100 times over, but by 100 different siloed datastores... so it's hard to draw equivalent conclusions needed for AVs in a city.

Not to mention the biggest complexity is still understanding deviations from said detailed environments.


Probably they need more precise maps than are available from Google Maps or the equivalent. And other autonomous car companies aren't willing to provide their maps.


This makes me so excited. I can't wait until the interiors of cars can change completely due to the safety brought by self driving cars. If all cars were self driving and communicated with each other, the risk of crash would be almost nonexistent (obviously there would still be wildlife and other issues). Given a near 0% chance of crash however, we could remove many of the annoying 'features' of a current car, like seat belts and having all the seats in the car face forward. Maybe you can even order specific types of cars for what you want. Tired? Order a bed car. Just a door and inside is a bunch of mattresses and pillows with windows that can be blacked out. Late for work? Order a work car. Has desks with swiveling chairs etc. Just getting groceries or going shopping? Order a hauler car, etc.


Or why not order a groceries shop car instead?

https://techxplore.com/news/2017-06-convenience-mart-wheels-...


Rear-facing seats are not illegal I think (obviously not allowed for the pilot, currently) - and they have been amply proved to actually be safer. They've been the standard on military planes (again, not for the pilot :-) ) for quite a few years. Once people get around the initial shock (something not to be discounted - it may kill the sales of a car) they are quite happy with them on planes.


I've never heard about rear-facing seats being safer before.

What makes it safer? Do you have any references? It sounds interesting.

Is it because if you crash, your back absorbs the impact better?


I think that's basically right. The worst crashes are usually frontal, and your entire body is evenly supported, rather than hanging on a seat belt or something.

Rear-facing seats are definitely not illegal in cars. Tesla has them as an option for a third row in the Model S. I think they would be legal in airplanes too, and the main reason they don't see more use is just that people are often uncomfortable not facing the direction they're going.


A public 'bed car' sounds gross.


I mean, yea. I guess I left out the details of how the car service you were renting from would clean things between uses. Didn't think that was necessary to spell out. There are public 'nap rental' places now that have that particular problem pretty well sorted out.


Think of hotels or airline/train sleeper seats not public transit.


Sleeper trains usually have an attendant in every carriage. Doing that would seem to defeat the point of a self-driving car.


Not necessarily any grosser than a hotel.


>each also has a safety driver in place behind the wheel for testing and as required by law. Still, Cruise says those drivers have had to take over manual control of vehicles engaged in Cruise Anywhere service only on a few occasions, with the vast majority of the driving done autonomously.


I've seen these driving around SF; they're still using the standard engineer-in-the-driver's-seat model. One almost hit me when I did the standard SF 'drive around the person waiting to turn left in your lane' thing on 16th.


Did you have the right of way in that situation?


I think so? I don't know the law, but I do know if I'm turning left and someone else is turning left in the other lane - in sf, I need to make sure the cars behind them aren't going around them in the same lane.


I guess I'm not understanding the scenario. A car was stopped in your lane waiting to turn. Presumably you should wait until they have turned, or make a safe lane change to another lane to move around them?


Yeah, I don't know if you're in SF - but on the one lane roads, traffic doesn't stop for people turning left at lights. Again, I don't know the legality - but I've seen police and government workers do it as well. Because the streets are made w/ enough space for vehicles to park next to the lanes of traffic - there is enough room to pass on the right in an intersection.


I understand that part. But not the one about the cruise car almost hitting you.

So: one lane street, person in front of you turning left, you pass them to the right in the space where there might be street parking which extends into the intersection. During that time, the car trying to turn left is finally able to do so and the car immediately behind it is the cruise car. Cruise-car is now able to go straight and almost hits you as you're both competing for that one lane?


Nope, cruise car was in the other lane turning left. It turned into my lane just after I passed the car turning left and the cruise car made a sudden stop in the wrong (my) lane. Had it continued, it would've collided with my rear wheel - so I'm like 75% sure I wouldn't have been at fault.


So, by the sounds of things, everything worked as designed. In many similar situations with a human behind the wheel, that would have been a collision.

Based on personal experience (as a firefighter/paramedic), I wouldn't be surprised if a not-insignificant percentage of collisions (both car/car and car/bike) in SF are caused by that very scenario.

Personally, that sounds like an anecdote very much in favor of automated cars.


You're completely off base here. I'd say I do this 10-15 times a day. So ~100 times a week, and there is never an issue. I am infrequently the first car to do it. I put it in quotes because it is a very common practice in SF. You would have to wait through every light twice to get anywhere in the Mission if you didn't do this. It's mostly single lane with no turn light. I noted that it was odd because I very rarely have problems doing this.


Passing on the shoulder (over a white line), or in a designated parking lane is indeed a violation (obviously I'm not sure of the specifics of the intersection, so I don't know if either of those apply).


Neither of these apply.


AFAICT, no; the requirement that passing on the right, where allowed, only be done when the maneuver can be executed safely would seem to require yielding to other traffic (CVC Sec 21755(a).)


So GM bought this, but I thought they also had their partnership with Lyft? Are GM just betting on different horses and seeing which properties turn out to be the most fruitful?


GM purchased Cruise for access to its self-driving technology and software engineering talent. GM invested $500 million in Lyft to leverage the ride sharing platform with its future fleet of autonomous vehicles.

These two horses aren't competing against each other.


but then why build a ride sharing service for Cruise if Lyft has already done it?


I think if Cruise or Waymo want to take the next step in promotion, they will take the gags out of the testers' mouths and let them tell us how well it actually works.


Just an observation: there's this tricky corner in the Marina at Fillmore/North Point. A Cruise car took the corner pretty fast and I was surprised (it could have been under manual control). People cross there indiscriminately without looking, and I've been caught in a few close situations in my own car. A creeping system around corners blocked by cars could be helpful.


Their cars have trouble with left turns as well. I was walking to Rainbow a few weeks ago and heard a bunch of honking. Turns out the car wouldn't make a left on a green light (Folsom at 13th).

Fast forward a few days and the same thing happened, this time at Mission and 18th. Only at this intersection there's a no left turn restriction and the driver overrode it and made the left.

If one of Cruise's autonomous cars is doing something aggressive I'd assume it's under human control. Hopefully the cars won't pick up too many bad habits.


Left hand turns are one of the tricky things (as experts like John Leonard have also pointed out). The appropriate behavior is VERY context-dependent. Ordinary prudent and conservative behavior in some circumstances is you're-never-going-to-make-a-turn and you'll cause road rage in other drivers under other circumstances.


Sure, the first time the driver intervened there was no oncoming traffic. The second time there was traffic and a "no left turn" sign.


Talking about creeping: I just saw a self-driving Chevy Bolt hold up traffic turning left from Franklin onto Lombard. It crawled through the turn like a panicked tourist. Granted that this is not a smooth turn, requiring a sharp adjustment at the end. I think this was a GM test or perhaps Lyft. The car was named "Wombat".


Sounds like the tech itself is in pretty good shape, but I was surprised at the haphazard-looking way it's grafted onto the car. Pretty sure I'm seeing zip ties and generic aluminum t-slot, with the front sensors sticking quite a ways out from the overall profile of the car. Pretty sure that's hitting someone's shin/knee as they exit the car at some point.

Not trying to make too big a deal of that, but now that they are offering something public facing, seems like a good time to address that. It doesn't convey a strong message of ready-for-prime-time. It's certainly less polished than the competition.


They stick out about the same amount as the side mirrors, which I assume is not a coincidence. I suspect that molded body parts to hide the extra hardware won't make much of a difference to the majority of consumers. Do you want a ride in the cheap, safe, ugly car or the expensive, glitchy, sexy car?


>Do you want a ride in the cheap, safe, ugly car or the expensive, glitchy, sexy car?

Consumers tend to perceive things a certain way that's not always grounded in common sense.

Also, obstacles close to the ground are easier to miss visually.


Me too. They're part of GM now. They should have access to GM's design facilities. Both the 2013 GM/CMU self-driving car and the Cruise Automation prototype had better sensor integration.


Is it currently free for employees?

I wonder what the premium would be for the psychological anxiety of taking a self-driving car. If I pay $5 for an Uber ride, I'd need a significant discount for the equivalent autonomous ride at this stage.


FYI I see cruise cars all the time in SF. Natural next step.


I thought this was about Tom Cruise.

Who the hell is "Cruise"? That title is not the way to present the company.


Now that's a perk. Way better than free kale chips.


Why does the photo show a lady getting in the back seat? There is no driver(?), and the front seat has an airbag...


The air bag is there because bashing your head on the steering wheel or dashboard is unhealthy. Not bashing your head on something hard in the first place is even better. (I'm guessing, but don't know for sure, that the front seats are sufficiently weak that they're OK to bang into.)


Back seat behind the driver is statistically the safest. In case of a collision, a driver will instinctively try to save himself and therefore take the impact on the passenger side. Being behind the driver therefore puts you at less risk.


In an autonomous ride would this still hold true?


I can't tell if thats a joke or if you're serious :)


I think they're legally required to have drivers at the moment.



