
What do you mean by "most economical sense to automate"? It seems like everything about automating a truck would be more expensive: the cost of the truck itself (1 truck = 5-6 cars), the insurance and maintenance on the truck and hardware, the potential for catastrophic failure (1 runaway car doing 70 = bad, 1 runaway truck doing 70 = REALLY bad), the extra complexity of pulling a trailer and monitoring the trailer as well as the truck, monitoring the size and type of load and modifying the driving characteristics to match it, and the extra regulations that trucks are subject to (what roads they can be on, what loads they can carry on certain roads, what time of day they can drive on said roads, when they can or can't use engine braking).

Edit: even on 'simple' point A to point B routes that involve 99% highway, what happens when a small part of the highway shuts down for whatever reason (flooding, multi-lane accident, fire, etc.) and all traffic is routed onto smaller adjacent streets, or forced to share a lane with oncoming traffic? Automating a truck is as hard as or harder than automating a car, since you're dealing with the same external variables but have more internal/attached moving parts.




Trucks often drive mostly or only on major thoroughfares and often on extremely long and tedious routes. A self driving truck that could drive 50 hour stretches without getting tired or making mistakes would be a huge improvement in efficiency.


I could see there being a nice "hybrid" avenue, too: Have a human driver in the truck for managing all street level driving, overseeing loading/unloading, refueling, etc. And then let them take a break, or work on their side hustle or master's degree or whatever, while the truck is on the highway. And either pull off the highway or have the human take over if conditions aren't favorable.

Self-driving taxis, OTOH, feel like they've got a much longer way to go before they can generate any real profit.


This is generally seen as the path to full automation - you can couple autonomous "easy" highway driving with a remote driver taking over for the final mile. How long until we have aircraft doing the same thing? ;)


Well once airborne, aircraft already fly themselves and do everything except the last 100 feet of landing already.


Aircraft have six degrees of freedom. Apart from the ground, birds, weather, and other aircraft, there is nothing a plane can run into, and at 20,000 feet that mostly reduces to other aircraft. And still, humans on the ground orchestrate the flights and provide specific, direct oversight of each and every flight in real time. Watching for weather. Watching for birds.

Don't get me wrong, I find aircraft automation impressive. But there is a massive human workforce that makes it possible for the cabin crew to run planes on autopilot. There's a mountain of rigid regulations, licensing and certifications that control every part of that workforce. Every part of each aircraft. Every piece of communication.

That's not how the roads work.


And for avoidance of aircraft: except for some last-second emergency reaction, this is the opposite of automated. We have specially trained, highly competent people on very short shifts under pretty much ideal working conditions ensuring that.

We just don't place them inside the aircraft.


We place them _both_ inside the aircraft and on the ground. While ATC does provide separation, conflicts still can and do happen. Aircraft have onboard systems to warn of conflicts and even to suggest corrective action, but the pilots must be the ones to make the correction.


Indeed, I believe TCAS is one of the only places where planes and pilots override ATC, in that if ATC tells you to go down, and TCAS tells you to go up, you go up


Unfortunately, the pilots at Lake Constance didn't really act this way.


In fairness, cars can stop or divert in a couple of seconds. Planes, not so much.


> Aircraft have six degrees of freedom. But for the ground, birds, weather, and other aircraft there is nothing a plane can run into.

I would also add that aircraft are monitored by ground control stations, while cars are controlled by the driver alone.


> Well once airborne, aircraft already fly themselves and do everything except the last 100 feet of landing already.

That's only if nothing goes wrong. On the other hand, if the airplane I'm in loses both its engines and has to land on the Hudson River[1], I'd much prefer to have an experienced pilot and copilot in the cockpit.

[1] https://en.wikipedia.org/wiki/US_Airways_Flight_1549


Black swan events are not very useful points of comparison - this event is not dubbed Miracle on the Hudson for nothing.


With constant, vigilant monitoring


For some definition of constant, planes aren't seconds away from disaster like cars/trucks are pretty much constantly.


Since the context here is "autoland", yes, that means "do not stop monitoring the bird, be always ready to take control." Landings are one of the trickier parts: you are moving at hundreds of mph, on an almost-collision course with the runway, by definition: the plane is supposed to stop flying very few feet above the concrete, AND not drop too hard.


Actually, most modern passenger aircraft can land autonomously at properly equipped airports with CAT III autolandings.

(Note that it's not fully autonomous; the pilots still need to do quite a few things like lower flaps, extend the landing gear, etc.)


Yeah he knows.


AFAIK this is the accepted path forward (minus the side hustle aspect). Automate that which is easier to automate: the long distance highway driving. For the first/last mile, let a human do the work.


I think what is going to happen is people are going to realize robot cars are death traps. Not because their rate of accidents will be any higher, but that when they do have accidents they will seem bizarre and inhuman mistakes that even the most incompetent human driver would never make.

Driving on the open road requires real intelligence. Not the pretend intelligence that modern AI gives, but real understanding of situations and terrain. Before that happens (which is basically skynet, and a very very long time away) all you have is a bag of tricks cobbled together. Those tricks will miss things and get confused and make mistakes. Maybe not very often but definitely in strange ways that are frightening.

The unknown is scary. Drunk drivers, tired drivers, old drivers, et al. are plenty dangerous, but they still behave in ways that can be understood. AI mistakes will be / are / have been strange unsettling things that can't be reasoned about if you're a person in the area of the misbehaving vehicle.


> Driving on the open road requires real intelligence. Not the pretend intelligence that modern AI gives, but real understanding of situations and terrain.

People used to say this about every single thing that computers can do better than people.

In my college town, some pedestrians got run over by a driver who later pled insanity due to "caffeine-induced psychosis". I think you're seriously overrating the predictability of human failure modes.


> People used to say this about every single thing that computers can do better than people.

But computers can't do it better than people! That's what drives me nuts about this debate -- it's just accepted as a premise that either the self-driving cars are much safer than human drivers, or the path to getting them there is very close and no serious obstacles remain. Neither is true and it's not clear they will be. https://blog.piekniewski.info/2017/05/11/a-car-safety-myths-...


I think what is clear is that virtually no one is okay with large-scale public deployment of self-driving cars that aren’t clearly statistically safer than human drivers, so when we’re talking about public deployment of self-driving cars, it’s implied that we’re talking about when (if) they reach that level of safety.


Frankly, that's not clear to me. A lot of people seem quite eager to put the first vaguely plausible thing all over the road because of an exaggerated idea of the incompetence of human drivers (which is understandable, but harmful in this context)


The question is, which human drivers?

Let's say we have a self-driving car that is as safe as the 20th percentile human driver. Do we allow that self-driving car on the roads? Do we selectively revoke licenses from 1 out of every 5 drivers and replace them with a car that's at least as safe as they are if not probably safer? Do we replace breathalyzer interlocks for drivers with DUI convictions with an AI driver and just revoke their licenses permanently?

There isn't a trivial solution to this problem. At some point, some AI driver is going to cause an accident that would not have been caused by a 95th percentile human driver. At the same time, human drivers do shit like this all the time: https://www.youtube.com/watch?v=oidHSzukSss


OK, nice anecdote, but if you follow my link you can see that if we just blindly take the average of all drivers and accidents per mile driven and then compare it to AI performance, AI is nowhere near average, let alone the 95th percentile. Here is another example of what I'm complaining about: an argument that simply takes as its premise something not yet proven to be true.


Like what, “self-driving AI will never improve past the state of the art in mid-2018 so it’s useless to speculate about it ever doing so”?


It's not entirely clear it will in the foreseeable future improve to the point where it is a serious prospect to be safer than human drivers.


To me, this seems like an a priori assumption that is accepted as the truth. Then, when it's questioned, the person questioning it is made out to be some kind of luddite.

Which is an odd phenomenon to me. I can't even get Google Assistant to understand me 3/4 of the time, yet I'm supposed to take it on faith that autonomous cars are inhumanly safe?


Google Assistant is a funny example, because my wife has a foreign name and I can do absolutely nothing to get it to understand when I ask to call her, even attempting to imitate its weird pronunciation of the name. The only thing that works is hand-typing "Xxxx is my wife" into the Assistant and then referring to her exclusively as "my wife," and that gets reset with updates periodically


Questioning the capacity of Friend Computer is treason here on HN, Citizen! Report yourself for immediate termination.

In other words, much of the autonomous robot debate here is based on handwaving, wishful thinking and No True Autonomous Scotsman (...would run over a human).


No, require data to be public, require a billion km simulated driving test for every software version that's released to a car without a safety driver.


> People used to say this about every single thing that computers can do better than people.

Did they though? Think about things that computers can do better than people: they're mainly things that we completely predicted computers would be better at (arithmetic, precision manufacturing, drafting, telecommunications routing). Beyond that, you're left with things computers are only better at depending on priorities, the canonical example being service jobs where economics trumps QoS; computers are much worse than a cashier, but comparatively cheaper by a margin that makes the quality compromise worth it.

The only possible exception I could think of that's come up in recent discourse is diagnosing patients, but even that, while encroaching on a role that has traditionally been revered as a career, is still something that seems at least on the surface to be quite predictable given the nature of what's required to make diagnoses (simultaneous access to a trove of data and knowledge).

Beyond the above, I think it's pretty reasonable that there's a broad range of things computers will not be better than humans at for a very very long time, if ever.


But this is a cliche in the AI field: that AI is defined as the things that computers can’t do as well as humans right now. As soon as computers match human ability, well, clearly that doesn’t count as “intelligence.”

And I don’t think the problems computers have proven themselves useful in solving have been what most people expected. Chess, Go, facial recognition, Jeopardy, image classification (hot dog or not), captchas (clearly, since they’re designed specifically to resist computer solutions), etc. seem to me to be things that, before computers proved to be decent at, would have been widely considered to require intelligence on the level of humans.


People only thought that chess required intelligence because they had no idea how much raw computational power computers would obtain. To anyone who understands the rules of chess, it's obvious that a machine which can perform a near-exhaustive search of the state space for a few moves ahead is going to be able to play chess better than a typical human.


People only thought that driving cars required intelligence because they had no idea how much raw computational power computers would obtain. To anyone who understands the rules of driving cars, it's obvious that a machine which can perform a near-exhaustive search of the state space for a few seconds ahead is going to be able to drive better than a typical human.


>To anyone who understands the rules of driving cars, it's obvious that a machine which can perform a near-exhaustive search of the state space for a few seconds ahead is going to be able to drive better than a typical human.

But there aren't any formally-specified rules for driving cars, and this isn't obvious.


Driving has a universal, formal, self-contained, non-contradictory, simple set of rules that all the road users unconditionally follow? Can I see it?

Nope, despite a myriad of road codes, the actual traffic doesn't follow a set of formalized rules: a chess rook can't just decide that it will start disintegrating all of a sudden, as opposed to a vehicle. You could probably approximate the ruleset if you made it self-modifying...which will then demolish your second point about near-exhaustively searching the state space - good luck doing that before the heat death of the universe, as you're essentially simulating the whole environment. Oh look, there's also weather. How's that exhaustively searchable? Asking for the Nobel Prize committee.

For the sake of discussion, let's say that a miracle happens and you managed to do all that - but sorry, it's useless again, the few seconds have already elapsed and you need to do it again. And again. And again, ad infinitum.

Now, I could envision "by our current technology, we can't yet, but we're hoping for a miracle in this specific spot" - but "assuming a massive miracle happens every few seconds, for each vehicle" is completely removed from reality: why not have teleports, if we're in magical wish-granting land already?


it's ok, you only need a near exhaustive search, and to be better than humans.

For highway driving Waymo had 6 disengagements in 2017, street: 57.

Total driven: 352000 miles

1 disengagement for "a recklessly behaving road user" 5 for "incorrect behavior prediction of other traffic participants"

Seems like predicting other people is almost perfect, the others were more internal problems.

Driving a car when a weird thing happens isn't that complicated: You stop, braking at the minimum amount required to do so safely, to avoid cars behind you hitting you
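
For a rough sense of scale, the rate implied by those numbers (assuming the 352,000 miles is the combined highway + street total, which the report doesn't break out here):

  # back-of-the-envelope disengagement rate from the figures above
  total_miles = 352_000          # combined total (assumption)
  disengagements = 6 + 57        # highway + street
  print(total_miles / disengagements)  # ~5,587 miles per disengagement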


Near-exhaustive search of what? You're handwaving away that there isn't a stateless formal graph to search - rather a stochastic, everchanging environment. Again: how do you near-exhaustively search that?

(In other words, yes, it might be eventually possible to have self-driving vehicles, but pretending that the search space is bounded, or even near-exhaustively searchable a la chess - that's just pure technobabble)


Well yes, my original comment was mostly a joke after all.

Anyway, self driving cars will only get better, and the more of them there are the better they will be, because there will be no humans around doing weird things who don't talk over the SDC network to explain where they're going.


Now that is a future that I can at least imagine, starting with SDV-only enclaves: "no humans driving on the West Coast" etc. Still doesn't solve other road users (cyclists, pedestrians), but it would definitely do away with entire classes of problems.


That just isn't true. It was widely believed through the seventies and eighties that chess inherently required creativity, and a machine could never beat a grandmaster.

Rather, I suspect, tasks which computers start outperforming humans in we reanalyse as "completely procedural". Nobody called chess procedural in the middle of last century.


Well, a computer playing chess is simply enumerating all possible boards. Which is the same way that a GAN produced "art". Humans do it creatively because that's how you do it if you lack exhaustive computing power and memory. In neither case is the computer mimicking the human process; it simply arrives at the same outcome by brute-force means.


No. Computers don't play chess by "simply enumerating all possible boards". That would require ludicrously more compute power than we have, and, of course, it would also _solve_ chess rather than just allow the computers to play: if it could ever be done, it would show that the game itself has a solution, a best way to play, like Tic-Tac-Toe.

Historically, AI chess (e.g. "Deep Blue" or Stockfish) is played by machines using one heuristic to estimate how "good" positions are without truly knowing, not so dissimilar from how humans evaluate a chess position, and then another heuristic to try out moves to get to further positions. The machine considers possible plays and how they affect the heuristic "value" of the board, preferring those with more value. Human chess AI authors design the two heuristics used, though they often aren't very good at actually playing chess themselves, because it's a different skill.

Google's AlphaZero AI plays chess differently again: it had no preconceptions of how to play chess; instead it learned through self-play. It knows the rules of the game but began with no idea what's a good or bad move, and it adapted its own heuristics based on how well it won or lost. It actually recapitulated most of human chess theory history over its incubation period of thousands of games, discovering ideas like the Sicilian Defence for itself; new attacks would at first see overwhelming success, and then, playing against versions of itself that had seen those attacks, they'd be defended against more effectively.

Alpha Zero plays a radically "more human" style of chess than most modern human Chess grandmasters, huge multi-move strategies in which pieces are sacrificed to take positional advantage. It looks like something humans were doing last century - except Alpha Zero does it much better than they ever did.


A typical chess position has fewer than 100 possible moves, so a modern computer can do quite a deep exhaustive search of the state space. You won't beat Kasparov just by doing that, but I'd bet it's enough to beat me.


"Just by doing that" you can't even begin.

The problem is that you lack an evaluation function. Let's consider two of those 100 possible moves. Your rook could take this opposing pawn, or, your own pawn could move forward one space. Which is better? Why? Neither of them immediately wins the game, but we must pick something. In a smaller, tighter game, like Tic-Tac-Toe we could crank our exhaustive search until we discover that this opening move leads to a possible win... but the search space in Chess is categorically too enormous for that.

Both Google's Alpha Zero and simple human play encourage the belief that a good evaluation heuristic is essential. The evaluation heuristic looks at a board position; it doesn't recommend a move, it says something like "I rate this position 0.418", where 1.0 is "I'll definitely win on my turn" and -1.0 is "my opponent wins on their turn". Google's engine contemplates relatively few possible moves (for a computer), but the results are striking because it's looking at _good_ moves more of the time rather than wasting a lot of time thinking about moves that are a bad idea.

This seems obvious, but, well, learn chess and see for yourself.
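
To make the search-plus-heuristic split concrete, here's a minimal sketch. It leans on the third-party python-chess library for move generation (an assumption: it has to be installed), and uses raw material count as the evaluation heuristic, which is far cruder than anything Deep Blue, Stockfish or Alpha Zero use:

  import chess

  PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                  chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

  def evaluate(board):
      # crude heuristic: material balance from the side-to-move's perspective
      # (a real engine would also score checkmate, mobility, king safety, ...)
      score = 0
      for piece in board.piece_map().values():
          value = PIECE_VALUES[piece.piece_type]
          score += value if piece.color == board.turn else -value
      return score

  def negamax(board, depth):
      # near-exhaustive search a few moves ahead, falling back on the heuristic
      if depth == 0 or board.is_game_over():
          return evaluate(board), None
      best_score, best_move = float("-inf"), None
      for move in board.legal_moves:
          board.push(move)
          score = -negamax(board, depth - 1)[0]
          board.pop()
          if score > best_score:
              best_score, best_move = score, move
      return best_score, best_move

  board = chess.Board()
  print(negamax(board, depth=3)[1])  # even a shallow search plays legal, plausible chess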


Yes I'm aware that you need an evaluation heuristic. My point is that you don't need a particularly good one to be able to beat an average intelligent human at chess. (After all, most humans don't have very sophisticated chess position evaluation heuristics, and they are able to examine vastly less of the search space than a computer.) Beating Kasparov is another matter, of course.

Here's an example of a simple chess engine that is good enough to beat amateur human players at least some of the time:

https://news.ycombinator.com/item?id=8133125



> People used to say this about every single thing that computers can do better than people

Even if that’s true (and I doubt it is), there is ample precedent (AI winter) for the industry dramatically overestimating what computers can do.

I bet if you time traveled and showed Siri/Cortana to an AI researcher from 1960 they’d be incredibly disappointed.


> I bet if you time traveled and showed Siri/Cortana to an AI researcher from 1960 they’d be incredibly disappointed.

It's a common misconception that the 1960s and 1970s were a time of unbridled enthusiasm in AI. In fact, there was a ton of pessimism back then too: for example, ALPAC [1] was so pessimistic about the future of natural language processing that it got the US government to pull most of its funding.

I think if you were to show Siri, Alexa, etc. to some of those folks they'd be pleased that we've gotten as far as we have, while acknowledging the obvious fact that there's plenty more to do.

[1]: https://en.wikipedia.org/wiki/ALPAC


Not sure why people are disagreeing. That seems like a blindingly obvious comment. There's so much today that would seem almost like magic to pretty much everyone living in 1960. But voice assistants (and even just voice recognition)? Those almost certainly seemed like relatively "easy" problems. Perhaps less so to AI researchers than to the general public, but still.


I don't know about "magic." A modern smart phone might be "magic" to someone in 1860--it operates based on technologies that didn't exist back then. But by 1960, the building blocks of modern computing were already in place: digital von Neumann computers built out of transistors, radio communications, signal processing, etc. AT&T used frequency-division multiplexing of multiple voice channels in phone transmissions in 1918. The mathematical framework for modern technologies like LTE was in place by the 1950s and 1960s. Would it really have surprised anyone that transistors would continue to get smaller and faster, allowing higher complexity, higher-throughput signal processing, which would allow Facebook?


I remember reading about a dreamed-up device, written about in 1960, that would hopefully show up some day.

Weight ~1 ton, cost ~1 million dollars inflation adjusted, non toxic, delivery date ~1990. What did they want? A 1 GB random access HDD.

A 32 gigabyte micro SD card for 10$ would have blown their mind let alone a smartphone.


They were off on size/weight, but they guessed the capacity just about right (IBM 0681). I don't think extrapolating out pre-existing trends would, for sophisticated people, be "mind blowing." Would your mind be "blown" if you learned that by 2048 you'd have a 100 petabyte drive using, say, magneto-resistive memory (or something else based on anticipated, if not fully developed, physics)? Seems like hyperbole (and setting a low bar for people's imaginations).

Reading stuff written in the 1960s about what today would be like, what strikes me is that technology is so incredibly not mind blowing compared to what we had back then. Even in the area of computers. Hell, we haven't even come up with an input device that beats keyboards, which were invented in the 19th century (electro-mechanical keyboards, not typewriters).


I think the difference is that we today have a lot more reference points for technological advances than people in the 1960s did.

Just continuing with the storage example, for decades now we've all been witness to data storage sizes growing massively, while the housing of said data storage has shrunk in size tremendously - as has the cost.

So when a couple MB of incredibly slow storage weighs thousands of pounds and costs millions of dollars, I do think the concept of tens/hundreds of GB of super fast flash memory contained within an object the size of a thumbnail would be mindblowing, whereas your example of

>a 100 petabyte drive using, say, magneto-resistive memory (or something else based on anticipated, if not fully developed physics)

wouldn't, just because we already all know how far technology has come since the 60s.


It's not so much the hardware as the combination of things. GPS+the Web+smartphones+... But, yeah, physical infrastructure has a lot of friction. So we have amazing pocket devices with access to much of the world's knowledge. But traffic jams.


They were vastly off in terms of size, weight, transfer speed, latency, and cost. In 1990 you could get a cheap RAID array, so take your pick: 100x that capacity, for far less than that price and weight.


No way that you could get a cheap raid array with 1GB capacity in 1990. My pc had probably a 40MB HDD at the time.


Cheap in terms of multi million dollar hardware budgets.

1980: IBM introduces the first gigabyte hard drive. It is the size of a refrigerator, weighs about 550 pounds, and costs $40,000.

That’s ~1/10th the cost and 1/4 the weight they were looking for. You really could do vastly better in 1990. For ~$2,300 you could get a 700 MB HDD; buy 3 and you're talking 1.4 GB with redundancy for ~1% of his budget.

PS: Suppose I extrapolate current trends and say we might get a self-driving 400 HP Honda Civic in 2050, and then someone from 2050 says "sort of, a Tito costs $3,000 and has 50,000 HP, but nobody drives that underpowered piece of crap." That would be a shift in how you think about things.


That must have changed rather quickly. In 1972 Alan Kay wrote "A personal computer for children of all ages". Here is the abstract:

> This note speculates about the emergence of personal, portable information manipulators and their effects when used by both children and adults. Although it should be read as science fiction, current trends in miniaturization and price reduction almost guarantee that many of the notions discussed will actually happen in the near future.

The paper is a great read. He basically imagined that in the future we'd develop the iPad and some high quality educational software for children. Forty years later, we can proudly say we've successfully developed half those things.


>> if you time traveled and showed Siri/Cortana to an AI researcher from 1960 they’d be incredibly disappointed.

If you time traveled and showed 2013 me what self driving cars would be doing in 2018, 2013 me would have been astounded.


>> People used to say this about every single thing that computers can do better than people.

Who were those people who said those things (i.e. were they AI researchers, or computer scientists)? And what exactly did they say?

There have always been strong criticisms of AI (e.g. [1]) and opinions dismissing computers voiced by people who did not have an adequate understanding of computers.

The interesting thing is to see what the people in the know actually thought over the years and what they think right now.

Edit: to clarify, what AI researchers usually do is overhype the capabilities of their systems and claim they can achieve things that they never manage to show they can, which is completely the opposite of saying that "computers can't do that".

__________________

[1] "What computers can't do" by Hubert Dreyfus

https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_ar...


> People used to say this about every single thing that computers can do better than people.

And they were right, until they were eventually wrong. There will be this phase for automated vehicles too.


True. However, "eventually" is a long time - the assumption "level 5 hardware exists today" is IMNSHO premature by many decades.


> when they do have accidents they will seem bizarre and inhuman mistakes that even the most incompetent human driver would never make.

I agree -- but it's even worse. Even if robots do make human-like mistakes, that doesn't mean humans will be forgiving of the same mistakes. For one thing, I might forgive a human being unable to react due to a 1/2-second reaction time, but I sure as heck wouldn't be that forgiving for a robot. I would expect and demand an order of magnitude better. For another thing, people have more tolerance for mistakes made by "closer kin", if you will. (e.g. if my own child steals from me, even 10x as much as a random thief does, that doesn't mean the thief can expect more lenience than I had for my kid.) Self-driving cars pretty much have to be strictly _and_ significantly better than the large majority of human drivers for people to trust having them around. Merely being better than average, even if it's in all respects, isn't necessarily enough to cut it.


> I think what is going to happen is people are going to realize robot cars are death traps.

Drop the "robot" -- it's cleaner.


While that's true, the debate rages around "the devil you know vs. the one you don't." Death traps on wheels are a known quantity (and their risks ignored); their autonomous modes... not yet.


The funny thing here is we might already have the best solution to the "easy" automation scenario: Trains. It's a shame they're so expensive and require special tracks. If we could invent a hybrid train / car, that behaved like a train, but didn't require special track...that might actually be a better direction.


You're pretty much describing a truck or a bus in that breaking up a train is one of the things you need to do to allow it to travel on non-dedicated/special right-of-way.

And, with respect to freight, a lot of the easy automation is handled by trains. A huge amount of truly long distance freight in the US (including but not limited to bulk cargo) goes by train for much of its overland transport.


The special/segregated track is one of the major reasons they're safer.



People seem to have much greater fear of spectacular and bizarre deaths than of considerably more likely but relatively mundane deaths.


Why are you so sure? Waymo demonstrated that they can handle correctly a duck-chasing wheelchair rider that used a broom. I’m not sure that all the human drivers would have handled it so well.


And Uber demonstrated that incorrectly classifying a detected obstacle is solvable by running it over. I'm pretty sure that no sane human driver would have handled "could be a bike or a human or an animal, but it sure is something in my path" by ignoring it for six seconds before the collision and even afterward.

"But we should let it out on the road, it drives on par with an insane, legally blind and completely drunk driver" is not a very convincing proposition.


> Automate that which is easier to automate: the long distance highway driving

Long distance highway driving except that one highway exit on 101 that kills you. I'm just saying that when human lives are at stake, bottoms up approach may not be as feasible as with web software.


> except that one highway exit on 101 that kills you

The conversations on the topic that I've heard involve building special, dedicated exits, sort of like truck weigh stations. And having them live not particularly close to urban centers.

So it's a bit more of a holistic approach than bottom-up, and something that will take a while to implement since you're not able to roll it out everywhere at once.

Not to say that there still won't be issues. I'm sure there will be. And I'm sure there's a long way to go still. But it's also not a black-and-white issue.


At this point I'm starting to think engineers for self-driving vehicles need to find a way to explicitly map out every possible scenario on a given stretch of road... and from that - in real time - derive a custom template for each individual vehicle, tailored to the range of possible interactions between the vehicle in its current state (i.e. tire pressure is significant) and the current scenario playing out on that road.

Much of the process of getting close to this point can be done with the latest A.I. tools, but I have this nagging feeling we're going to need humans to fine tune a whole lot of 'last mile' stuff.


Unless you ban all human drivers from that road, and find a way to keep out pedestrians and animals, there will always be an infinite number of possible scenarios. Mapping everything out in advance can't possibly produce the level of safety that the general public would demand.


But with driving human lives are at stake now.

It just has to be better than the average driver.


That's one more instance of "irrational exuberance" regarding the future of autonomous driving. It would never even come close to being legal if it's only better than the average driver. It will need to be better than the 95th percentile of human drivers at the very least. 48% of the human population are not going to accept a more dangerous, potentially deadlier, car ride because it's "safer on average".


In which case any driver of above-average ability would be insane to put their lives in the hands of one.

How many people do _you_ know that consider themselves below-average drivers?


I'm just spitballin' here, but perhaps they could make something that was better than the average driver, and wouldn't swerve or accelerate into stationary objects like highway barriers? That sounds like something a lot of people would want.


Looks easy, right? Too bad it's been looking easy for 50+ years now, with no solution in sight.


You could have 'Pilots' like you do for the last part of sea voyages


The model of truck to the railhead, rail for long distance, then back onto trucks for the final leg already exists. Trains have large economic advantages over trucks for well known routes. So what are the advantages of self-driving trucks in this scenario?


> master's degree or whatever

Are you trying to make people make fun of us?


Dont be so cynical. I took the comment as serious and agree.


Side hustle or masters degree? Pilots don’t work another job while autopilot is in use, and neither should truck drivers monitoring its autonomy system.


A self-driving vehicle would be a step beyond current autopilot, and asking any human to constantly monitor an autonomous system is going to result in pretty catastrophic failure. An autonomous system needs to be fail-safe to be used.

Plane autopilot only works because air is so empty and flying in the same direction at constant elevation is unlikely to result in any problem. There are repeated examples of both pilots falling asleep and planes over-shooting destination airports, for example.


> A self-driving vehicle would be a step beyond current autopilot, and asking any human to constantly monitor an autonomous system is going to result in pretty catastrophic failure. An autonomous system needs to be fail-safe to be used.

I think the idea we’ll attain this in the next decade or two is borderline delusional, but to each their own.

Commercial aviation is one of the safest transportation mechanisms in the world; aircraft can go runway to runway in mostly automated fashion (auto throttle/TOGA [take off go around] for takeoff, autopilot for cruise, autoland for landing). We (customers and regulators) still require human attention the entire time.


> aircraft can go taxiway to taxiway fully automated.

Source?


My comment was unintentionally overgenerous. I’ve corrected my statement after reviewing the aviation stackoverflow site, and fleshed it out with more details (specific aviation terms).


If it requires human oversight, what's the point?

At that point you're getting no more bang for your buck, since the operator is going to be subject to the same limits on time behind the wheel as a driver of a non-autonomous truck. And you're not getting any more safety, because, as Uber and Tesla have been illustrating for us so vividly, a self-driving system that needs a human overseer can't drive safely, and a human who isn't physically in control of the car at all times can't oversee safely.

(Edit: This is, naturally, not accounting for the need for a transitional period while getting the technology bootstrapped. But that's time invested in developing the tech, not time where the tech generates any profit.)


> At that point you're getting no more bang for your buck, since the operator is going to be subject to the same limits on time behind the wheel as a driver of a non-autonomous truck.

Human lives and property loss are expensive. The average cost of a fatal crash is well over $3 million. The average cost of a large truck crash that does not involve a death is approximately $62,000. Settlement payouts due to big rig accidents are roughly $20 billion per year.

You don’t need to replace the driver to see significant upside.


Even if this was only available on Interstate 40, I think you would make a lot of money. Trucking is something like $800B industry, and a lot of that flows on I-40.


If only there was some mechanism for moving freight along a pre-defined route without the need for constant steering. Perhaps some sort of rails....


And if only there was some mechanism to get freight from rail yards to its final destination, without the inflexibility of rails. Perhaps some sort of steerable vehicle ...


Context is key here. The comment you’re replying to was made in the specific context of the volume of long-haul truck traffic on I-40.


Not sure how the context of what I said changes this? Yes, trains already exist and have existed much longer, and yet have obviously lost out to the truck.


> Yes, trains already exist and have existed much longer, and yet have obviously lost out to the truck.

That may be in large part due to the substantial government subsidization of car/truck traffic through public roads, not the merits of one against the other.


Which was worth more, the government funding of the interstate project, or the almost unchecked power and monopoly of the robber barons of the Gilded Age?


It would be foolish for me to attempt to quantify which was worth "more"; however, it is worth keeping in mind that the railroads were very heavily subsidized in their time as well.

http://archive.boston.com/ae/books/articles/2011/06/05/colla...

https://mises.org/library/crony-capitalism-and-transcontinen...


Trains have in no way "lost out" to trucks. Rail transport is a huge and growing industry. But long haul trucks are sometimes preferred because they're faster. Our cargo transportation system needs both.


Trucks aren't just "sometimes preferred", they move far more tonnage than trains.

https://www.bts.gov/content/weight-shipments-transportation-...


Trains didn't "lose out" to the truck, they're both widely used, along with ships, pipelines, airplanes, bicycle messengers, and every other conceivable means of transporting goods.


Yes, because everyone in the commercial chain forgot about freight trains when they decided to start using trucks to move goods around.


I'm not saying I think trains will disappear; if anything, it seems they would likely benefit from increased capacity in the intermodal system, allowing them to move more of the type of freight they are well suited for.


The trucking industry is many many times larger than the rail freight industry in America.


It'd be nice if we had some more rail expansion. Rails are so efficient, and are just more impressive mechanically; the entire system is like a giant computer with actual switches. There was an article posted here a while ago about how rare train accidents are and how incredibly fail-safe those systems are.


That's mostly because of:

1. (As others have mentioned) a lot of freight is intermodal so containers in particular shipped for long distances by train also travel by truck for the first and last X miles.

2. A lot of freight is relatively local in nature, so it makes more sense for it not to be intermodal.


The trucking industry and rail freight industry overlap extensively, in the form of intermodal freight


Right, I never said they didn't. But the trucking industry is still by far the main mode of moving freight within the US.

https://amp.businessinsider.com/the-staggering-statistics-be...


Remember: not all roads are the same everywhere.

Self-driving cars and trucks are a very hard topic.

With today's technology, it might work only on hypothetically straight and perfect roads, and only during the day. Roadworks, floods, snow, potholes, traffic and /r/IdiotsInCars are factors not taken into account by today's algorithms, because the data and the tests are missing.

I can see automated trucks being the vehicles of the future, but a driver (or "operator") should always be present onboard, just as on planes and ships.


I agree that there's a case for improved efficiency, but I'm not sure the economics of developing all the tech from scratch add up. One accident can endanger the entire company, a problem they're still reeling from after the bicyclist fatality. The car they were trying to automate has much better stopping (and overall safety) performance than any truck, yet they're still experiencing all the woes of developing new tech. Now imagine an automated truck accidentally punting a Greyhound bus full of band kids off the road.


This venture kind of scared me, to be honest. Having known a few truck drivers, and having a good friend who was permanently disabled by a tractor trailer, I know this industry needs to be made safer. I'm not convinced total automation is the solution either, but maybe something in between is good enough to improve things. Just doing this for pure profit is irresponsible. There is also the issue that if they are going to put any more trucks on the highways and interstates, more roads need to be built first. The ratio of big rigs to commuter cars is presently out of control!


> Having known a few truck drivers and having a good friend who was permanently disabled by a tractor trailer I know this industry needs to be made safer.

Having known a few truck drivers myself—including my father—and having a very good friend who was permanently paralyzed by another friend driving a standard car, these things don’t influence each other. The trucking industry is far safer than your average driver[0].

> The ratio of big rigs to commuter cars is presently out of control!

The numbers disagree with you. That ratio is currently about 1:135[1].

[0]: http://www.iihs.org/iihs/topics/t/large-trucks/fatalityfacts...

[1]: a couple quick searches place big rigs at about 2M in 2017 compared to nearly 270M registered passenger vehicles in 2016.


Yes, but I never implied a year by year analysis either: https://www.forbes.com/sites/trucksdotcom/2016/05/23/a-glanc...

I never implied that these were hard facts; they were my opinion. What is your opinion? That we need more trucks on the road? To me, 1:135 is out of control; that's my opinion.


You stated your opinions as if they were facts. You said you know the trucking industry needs to be made safer—that sure sounds like an implied statement of fact, not sharing of opinion. I shared a ton of data that disagrees with what you know—ahem, believe. That should encourage updating your opinions.

Given the facts available on safety records of licensed, commercial truck drivers vs normal drivers—the latter of whom outnumber safer drivers 135:1—it seems sensible to me to form an opinion that the 135 less-safe drivers need to be dealt with before we get too worried about the 1 safer driver. The overwhelming majority of multi-vehicle accidents with big rigs find the passenger car driver to be at fault—we’re talking from 70-90%, based on types of crashes. Non-truck crashes outnumber truck crashes by roughly 3:1 per 100M vehicle miles traveled.[0] This seems to indicate non-truck drivers pose the greatest threat on the roads to public safety.

So sure, form any opinion you like. But maybe be more careful to share them as obvious opinions that aren’t implying they are actually fact-based—by stating you know something is true or declaring the ratio of something is out of control—or someone is likely going to call out those statements as being questionable when compared to the facts of reality. There’s no clear evidentiary basis for arriving at an opinion on a correct ratio of trucks:non-trucks, other than the data we have seems to indicate that fewer passenger cars on the roads is the surest way to increase public safety.

[0]: For a new link summarizing various studies—http://www.trucking.org/ATA%20Docs/News%20and%20Information/...


There are already rules in place for human truck drivers that push the limits of what a human should be called upon to do for many hours at a time. Human truck drivers already have regular accidents due to fatigue. Replacing or augmenting long haul trucking can only be a positive.


What rules are you referring to? Current regulations prohibit drivers from having > 14 hours on the clock, iirc. Ignoring any drivers—or the companies who employ them—who are overtly ignoring these rules and faking their log books to be active longer than 14 hours, there are mandatory 10-hour windows drivers must not be working. That 14 hours encompasses all activity—loading, unloading, weighing, weight redistribution, driving, etc. Driving time itself is capped at 11 hours of an allowed 14-hour workday. Newer trucks even have cameras in them to monitor drivers, as well as other systems that report violations and actively prevent the truck from being used in a way that violates regulations.
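
The daily limits are mechanical enough that a toy compliance check fits in a few lines. A minimal sketch of just the rules summarized above (the real FMCSA regulations have more cases, e.g. weekly on-duty caps and split-sleeper provisions):

  # toy sketch of the daily hours-of-service limits described above
  MAX_DRIVING_HOURS = 11   # driving allowed within the workday
  MAX_ON_DUTY_HOURS = 14   # total on-duty window (loading, weighing, driving, ...)
  MIN_OFF_DUTY_HOURS = 10  # mandatory rest before the next window

  def day_is_compliant(driving, on_duty, prior_off_duty):
      return (driving <= MAX_DRIVING_HOURS
              and on_duty <= MAX_ON_DUTY_HOURS
              and prior_off_duty >= MIN_OFF_DUTY_HOURS)

  print(day_is_compliant(driving=11.5, on_duty=13, prior_off_duty=10))  # False: too much driving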

> Human truck drivers already have regular accidents due to fatigue.

Fatigue accounts for 13% of truck driver-caused accidents according to DOT[0]. Fatigue is coded twice as often for passenger vehicles as it is for commercial truck drivers.

Moreover, the rate of commercial rigs involved in accidents with passenger vehicles is quite low. The rate of single-vehicle accidents is also lower among commercial trucks. Commercial trucking continues to grow increasingly safer every year since we’ve been keeping track in the 70s[1].

[0]: https://www.fmcsa.dot.gov/safety/research-and-analysis/large...

[1]: http://www.iihs.org/iihs/topics/t/large-trucks/fatalityfacts...

Maybe relevant disclaimer: my father is a truck driver and we regularly talk about this stuff. His experiences have led me to do a bit of research and study on the matter. I don’t work for or on anything trucking-related.


My father is a retired truck driver, it'd be great to curse at each other sometime. :)

My understanding is that the industry (and maybe this has changed or was not good anecdata to begin with) is rife with gaming of the regulations, which in my opinion are already grueling. A human being, no matter how accustomed they are to driving, should not be asked to sit and drive down long mundane stretches of road at a high degree of alertness for 11 hours per day, multiple days per week. I understand that the new time tracking systems will reduce the ability to game the system, but I feel the fact that regulators are calling for these devices and driver awareness monitoring devices should be an indication that maybe we can find a solution that doesn't involve a human.


Your evidence seems to contradict what you are saying. From your second link:

"A total of 3,986 people died in large truck crashes in 2016. Seventeen percent of these deaths were truck occupants, 66 percent were occupants of cars and other passenger vehicles, and 16 percent were pedestrians, bicyclists or motorcyclists. The number of people who died in large truck crashes was 27 percent higher in 2016 than in 2009, when it was the lowest it has been since the collection of fatal crash data began in 1975. The number of truck occupants who died was 47 percent higher than in 2009."


You’re misreading the data presented. Yes, 2016 was worse than 2009, but it was far safer than ‘75 and the decades that followed. That other particular stat on fatalities is only looking at the fatality rates of truck drivers vs passenger car drivers in accidents that are between a passenger vehicle and a commercial rig—and is not a surprising rate considering one ought to expect a truck driver to have a higher likelihood of surviving such a crash compared to a passenger car occupant. When you look at the comparative rates among non-truck accidents and fatalities, truck drivers are far safer. If I correctly recall the data, the rate at which truck drivers are at fault for accidents with passenger cars is also lower than the reverse. When you look at the comparative rates of truck accidents vs passenger car accidents, non-commercial drivers continue to be the most dangerous drivers on the road, and there are vastly more of them putting others at risk.


> The number of truck occupants who died was 47 percent higher (in 2016) than in 2009.

Do you have numbers for dead truck occupants per mile driven? It could be that this is due to truck traffic being lower overall during the 2008/2009 crisis.


That data is in the link I shared. 2009 had more miles driven than 2016.


Can the robot perform a brake check? Can the robot ensure a load is properly secured? The act of driving the truck is only one of the many jobs a trucker does.


They're addressed by established process and routine - 2 things machines do better than us. Can a machine monitor braking or perform the current brake check? Absolutely. Could we shift the responsibility of checking the load tie-down to the freight facility and monitor ongoing status with sensors? I'd hope so.

You should be asking "can a robot drive for 12+ hrs without fatigue?" or "can we eliminate the restrictive, expensive and often gamed system of keeping drivers within their hours?" This is where long-haul transport could be "disrupted".


>> Can a machine monitor braking or perform the current brake check - absolutely.

I'm not sure you understand what a "brake check" means. It isn't checking the brakes for current functionality. It means a visual inspection of all the brake parts to ensure they aren't going to stop working somewhere literally down the road. It is checking for pins, debris, excessive or unusual wear, or leaks. It would require 3d vision backed up by some serious AI to understand what is going on. And you would probably need some sort of robotic actuator to remove any debris blocking inspection areas.

When you see a truck stopped by the road with the driver walking around the trailer, he is probably doing a legally mandated "brake check". It isn't just pumping the brakes to see they are still there.


It is also checking brakes for functionality before you set out. There is an in-cab procedure too, not just a visual inspection. But I believe that both can be automated. The in-cab brake check is testing for certain pressures in the system under certain circumstances, and could be automated via software-hardware combo. The visual inspection could be done via a series of cameras under a bay the truck rolls over, with software to detect issues with the braking system components visually. Or either of these functions could be performed manually by a human at a waystation before the truck heads out on the (next leg of the) trip.
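
For the in-cab part, a minimal sketch of an automated leak-down check (the one-minute window and allowed drop here are illustrative assumptions, not the regulatory values, and read_gauge_psi stands in for whatever pressure sensor the truck exposes):

  import time

  ALLOWED_PSI_DROP = 3.0  # illustrative threshold; the real limit comes from the regulations

  def leak_down_ok(read_gauge_psi):
      # with the system charged and brakes applied, watch for pressure loss over a minute
      start_psi = read_gauge_psi()
      time.sleep(60)
      return start_psi - read_gauge_psi() <= ALLOWED_PSI_DROP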


I don't think people are claiming computers can load or unload a truck.

Computers can check whether a brake is working or whether the cargo is well balanced better than a human can. One could do both today with cheap (on the order of hundreds of dollars) electronics and a few lines of code. Nobody does this because it's hundreds of dollars more than letting the driver do the same.
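
For instance, the weight-balance half could look something like this (the per-axle load cells and the 20% imbalance threshold are illustrative assumptions):

  MAX_IMBALANCE = 0.20  # illustrative: flag if any axle carries 20% more than an even share

  def load_is_balanced(axle_weights_kg):
      even_share = sum(axle_weights_kg) / len(axle_weights_kg)
      return all(w <= even_share * (1 + MAX_IMBALANCE) for w in axle_weights_kg)

  print(load_is_balanced([4800, 5100, 4950]))  # True
  print(load_is_balanced([3000, 7000, 5000]))  # False: one axle is badly overloaded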


I'm saying if you limit the scope to tractor trailers hauling only 20/40 ft ISO shipping containers, then "loading/unloading" can also be done in an automated fashion. I'm saying that if you limit the problem space, and try to approach even just a single use case such as this on a major artery like I-40, I don't see any way you end up in a worse spot than human drivers.


> I don't think people are claiming computers can load or unload a truck.

I think truckers would absolutely love it if this was the case. It would save them so much time waiting on loading/unloading, which decreases their pay.


I don't think they would be the ones to profit from this invention though.


I understand that the job encompasses other duties, my father is a retired truck driver. That said, my answer is "yes, I think they could, just as well, or better than any human".

You could have multiple cameras on the load using computer vision to detect movement. You could have weight sensors sending feedback to the system. I'm sure there are many ways to do this, and any of them would beat a human behind the wheel.
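
The camera part could be as crude as frame differencing. A sketch assuming OpenCV and frames from a camera pointed at the load (both thresholds are illustrative assumptions):

  import cv2

  PIXEL_DIFF_THRESHOLD = 25   # per-pixel change that counts as "different" (assumed)
  CHANGED_PIXEL_LIMIT = 5000  # how many changed pixels count as load movement (assumed)

  def load_moved(prev_frame, curr_frame):
      # grayscale frame differencing: cheap, crude movement detection
      prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
      curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
      diff = cv2.absdiff(prev, curr)
      _, mask = cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
      return cv2.countNonZero(mask) > CHANGED_PIXEL_LIMIT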


Then why not deploy such systems now? Prove they can handle all the odd jobs better than the human.


Huh? I'm not saying they exist, just that it's not impossible for me to imagine these problems being solved or being minimized to a level that is no worse than current human driver would perform.


dsnuh - I haven't seen anyone argue against "[software+hardware] would beat a human behind the wheel"; it's just that developing the tech to achieve this feat with the same level of accuracy as the top 10-20% of current CDL drivers is extremely expensive and carries a lot of risk. Nobody seems to have cracked that nut yet.

You're right in that there are many ways to do this, but none have come even close to beating a decent human behind the wheel.


I don't see any indication in the article that they are stopping due to technical challenges. They already demonstrated it on the road, so it seems they were well on their way. It appears this project is a victim of politics and legal action.

Why do you think we need to get to the level of the top 10-20% of commercial drivers? How do you arrive at that cutoff point? If we had automated trucks that could move freight 24/7 with even 5% better-than-average accident rates (for example), that would be a huge win.


I agree for the most part, but it's not quite as simple as just having better accident rates. That is one thing, but there are other considerations as well. When there are accidents, are they predictable ones, or is it completely random? Did it make a decision that we cannot explain which led to the accident? People aren't going to like completely random accidents, even if there are slightly fewer of them. Can we assign fault in the case of an accident, sometimes, always?


I think if you have a predictable accident, that's called negligence, and we have courts to deal with that.


That wouldn't work everywhere, in my country trucks aren't allowed to drive after about 2100 until 0600 in the morning due to the noise they generate (trucks are pretty bad noise-wise).


Just yesterday, I saw a (German) news report about Volvo's line of electric trucks which is coming out next year. They said that Volvo hopes that those trucks will be allowed to drive in cities at night because they don't make engine noises, thus helping to partly alleviate the traffic issues that cities are dealing with at daytime. (Not sure if that adds up; a large portion of street noise comes from the tires.)


Tire noise will be the major issue there, I did take a course on environmental protection (fire and noise, Brand- und Lärmschutz) and they did mention the engine is only 50% of the total noise for a large truck. Mostly because it has lots of tires.

Whisper asphalt is also still somewhat rare and doesn't solve the problem entirely.


50 hours? Can this truck fuel itself? Inspect its load? React to a load that becomes unsecure mid route? How about when the truck is pulled over for an inspection? How will it deal with a stowaway? Brake inspection before a big hill? All the rare events add up to significant barriers to actually removing the human from the truck.


> Can this truck fuel itself?

Who cares? That's how trucks are different. Somebody at the company just calls the fuel station and says "Hey, I want to refuel my trucks there; when they get there, you refuel them, and I pay you at the end of the week. Deal?"

> React to a load that becomes unsecure mid route?

How common is that? You basically have the truck phone home and send somebody out to solve the issue. Depending on how likely it is, that's either a major cost (stationing people every so many miles along the route), a delay you just accept, or something you solve by dispatching people by plane.

> How will it deal with a stowaway?

Most likely, it won't.

> Brake inspection before a big hill?

By braking and checking the acceleration, just like a human does, except in two axes and paying attention to frequency responses, which a human can't do.
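
Something like this toy sketch is what I have in mind; the expected deceleration and tolerance are placeholder values I made up, not anything from a real spec, and a fuller check would also watch lateral pull (a dragging brake on one side) and vibration spectra.

    # Toy brake check: apply a fixed test brake, sample longitudinal deceleration from
    # the IMU, and compare against the expected value for the current load.
    def brake_check(imu_decel_samples_mps2, expected_decel_mps2, tolerance=0.15):
        """imu_decel_samples_mps2: deceleration magnitudes (m/s^2) recorded during the test."""
        measured = sum(imu_decel_samples_mps2) / len(imu_decel_samples_mps2)
        return abs(measured - expected_decel_mps2) <= tolerance * expected_decel_mps2

    # e.g. brake_check([2.9, 3.1, 3.0], expected_decel_mps2=3.0) -> True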


> > React to a load that becomes unsecure mid route?

> How common is that? You basically have the truck phone home and send somebody there to solve the issue. Depending on the likelihood it can be a major cost by hiring people every so distance, or a delay you just deal with to something that is solved by dispatching people by plane.

First off, how do you detect a load that has become unsecured? I saw bees on a flatbed being hauled. How do you detect that the net is no longer tied down securely, besides looking at it occasionally?

Hauling livestock is one of those difficult hauling items. If you accelerate or decelerate too fast, they die.

Or how about the tale of the worst load ever - Oregon Potato Chips to Texas ( https://www.dat.com/blog/post/my-worst-load-ever-hauling-ore... ). While that one has an "OK, this needs to be another factor in the routing" answer, local barometric pressure could still cause a problem.

---

Brake inspections are likely less of an issue than chains for the storm that just hit, the one that dumps a foot of snow on I-80 over the Sierras. That's not too much, but everything needs to chain up unless it's a four-wheel-drive pickup truck. That includes the semis.

Or a wind advisory over the Mackinac bridge ( https://www.channel3000.com/news/mackinac-bridge-partially-c... )

> The bridge that typically enables travelers to pass over the Straits of Mackinac has been closed to all vehicles except passenger cars, passenger vans and empty pickup trucks, authorities said.

> Motorists permitted to travel across are advised to reduce speeds to 20 miles per hour and to be prepared to stop.


>> By braking and checking the acceleration, just like a human.

And your robot just lost its license for not performing a proper pre-hill brake check. Large trucks are not cars.

https://www.drivesmartbc.ca/sites/default/files/images/brake...

Then look in this pdf for "En route inspections"

https://www.icbc.com/driver-licensing/Documents/drive_commer...

From Alberta:

A vehicle inspection at a rest and check stop should include the following:

• All lights are clean and in working order.
• There are no air leaks.
• All the wheels are secure, and tires are properly inflated and are not hot.
• There are no broken or loose items on the vehicle.
• The load is secure.
• The dangerous goods placards are clean and secure (if applicable).
• The trailer locking mechanisms are secure and in good condition.
• The brakes are properly adjusted.

These procedures assume human eyes. A robot might be able to tick all the boxes via sensors, but that isn't going to be enough to constitute due care when something goes wrong.
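
To be fair, encoding the checklist itself is the easy part. A toy sketch (every telemetry function here is a hypothetical stub standing in for real vehicle data) might look like the following; the hard part is convincing a regulator that these ticks constitute due care.

    # Illustrative only: the rest-stop checklist as sensor-backed checks.
    from dataclasses import dataclass
    from typing import Callable

    # Placeholder telemetry reads; a real system would pull these from the truck's sensors.
    def air_pressure_stable() -> bool: return True
    def tires_within_spec() -> bool: return True
    def load_sensors_nominal() -> bool: return True
    def brake_stroke_ok() -> bool: return True

    @dataclass
    class InspectionItem:
        description: str
        check: Callable[[], bool]   # returns True if the item passes

    checklist = [
        InspectionItem("No air leaks", air_pressure_stable),
        InspectionItem("Tires inflated and not hot", tires_within_spec),
        InspectionItem("Load is secure", load_sensors_nominal),
        InspectionItem("Brakes properly adjusted", brake_stroke_ok),
    ]

    def run_inspection(items):
        results = {item.description: item.check() for item in items}
        return all(results.values()), results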


Ok then, same as with the refueling example: Contract that out to the personnel at the truck stop.


Checks are needed every 3 hours. So if you are stuck in traffic, you might have to do them every few miles.


In winter, does the truck put on chains before the pass and take them off on the other side?


50-hour stretches are also pretty rare. If you're shipping that far, I'd imagine the freight goes by train rather than by truck.


I think 50 hours would get you across Interstate 40 nonstop. But you wouldn't need to have the trucks go all the way. Just have transfer stations every 500 miles or so where the truck swaps trailers and recharges/refuels. Even if transferring the trailer and refueling were a manual process, you would more than make up for that stoppage time compared to using human drivers.


How would it be economical to carry 50 hours' worth of fuel plus the load?


Let's assume that the fuel economy of an autonomous semi truck is comparable to that of a regular semi truck. Supposedly the average fuel economy is 5.9 mpg, which is honestly better than I thought it would be. A trip from Seattle to Miami is a 49-hour drive according to Google Maps, which adds up to 3,305 miles. That only works out to 560 gallons of diesel, which sounds like a lot, but look at the existing fuel tanks on a semi truck. Those things are massive; 150 gallons per side isn't uncommon for long-haul trucking. Adding enough fuel to go nonstop from Seattle to Miami only adds 260 gallons, which at 7 lbs per gallon for diesel means 1,820 lbs. The maximum weight for a typical semi is 80,000 lbs, so that's still only 2.3% of the weight that could otherwise go to cargo. But one thing to keep in mind is that without having to cater to a driver, you can remove most of the weight of the cab: there's no need for a dashboard, heater core, air conditioning, steering wheel, seats, windows, doors, or the giant empty space on top for the driver.

It would not surprise me if, even with all of the extra fuel (which wouldn't really be realistic, because nobody runs a route from Seattle to Miami), the weight difference was a wash between the two. At the very least, as long as it can be loaded with fuel at the origin and destination, it's not really a concern.
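
For what it's worth, here's the arithmetic above as a quick script; all inputs are the same assumed figures from this comment (5.9 mpg, Google Maps mileage, 7 lb/gal diesel), not measured data.

    trip_miles = 3305            # Seattle to Miami
    mpg = 5.9                    # assumed average semi fuel economy
    existing_tanks_gal = 2 * 150 # two 150-gallon saddle tanks
    diesel_lbs_per_gal = 7
    gross_limit_lbs = 80_000

    fuel_needed_gal = trip_miles / mpg                    # ~560 gallons
    extra_gal = fuel_needed_gal - existing_tanks_gal      # ~260 gallons
    extra_weight_lbs = extra_gal * diesel_lbs_per_gal     # ~1,820 lbs
    print(round(fuel_needed_gal), round(extra_gal), round(extra_weight_lbs),
          f"{extra_weight_lbs / gross_limit_lbs:.1%}")    # ~2.3% of max gross weight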


Why would you need 50 hours of fuel?


Most of those sound like benefits...

Trucks are more expensive: the cost of sensing and compute is a smaller cost relative to the vehicle.

Lots of rules about roadways and times: computers are great at adhering to them, enforcing them, and providing logged accountability (a rough sketch of what that could look like follows below).

Monitoring size and load: sounds much more tractable than general city-street driving.
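
On the roadway/time rules point, here's a rough, purely hypothetical sketch of "rules as data"; the field names and the curfew window are invented, but a planner checking a table like this before routing a segment is all it would take.

    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class SegmentRule:
        road_id: str
        max_gross_lbs: int
        allowed_start: time   # trucks permitted from this local time...
        allowed_end: time     # ...until this local time

    def segment_permitted(rule, gross_lbs, local):
        in_window = rule.allowed_start <= local <= rule.allowed_end
        return in_window and gross_lbs <= rule.max_gross_lbs

    rule = SegmentRule("urban bypass", 80_000, time(6, 0), time(21, 0))  # invented night curfew
    print(segment_permitted(rule, 72_000, time(14, 30)))   # True
    print(segment_permitted(rule, 72_000, time(23, 0)))    # False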


> It seems like everything about automating a truck would be more expensive (cost of 1 truck = 5-6 cars),

Yes, the initial cost is about that. But the truck is directly making money whenever it's rolling, and an automated truck would presumably roll more hours than a driven one, since that's the whole selling point. A car is usually only indirectly making money, at best, during the few hours of the day that it rolls (getting you to work, or to the park-and-ride).

BTW, there has been a partial solution to the "trucks don't make money while the driver sleeps" problem for years: team driving. You have two drivers, and one sleeps in the back while the other drives. That gives a truck up to 22 possible hours per day, at 11 federally regulated hours per driver.

> Edit: even on 'simple' point A to point B route that involve 99% highway, what happens when a small part of the highway shuts down for whatever reason (flooding, multi-lan accident, fire, etc) and all traffic is routed on smaller adjacent streets? Automating a truck is as hard or harder than automating a car, there's no way around it.

As I mentioned elsewhere, if automated trucks are on the road, what you describe would already be solved by necessity, because you already have to go through surface streets to get to your shipper or receiver. Or, in an often-imagined scenario, to get to the "freight yard" where humans would drive the first and last miles. Yes, automating a truck is at least as hard as automating a car.

The actual problem in your emergency scenario is getting the truck to follow the diversion. Right now it's cop-eyeball-to-trucker-eyeball communication, or even just an orange sign on a sawhorse. That will have to be worked out, plus fallbacks, but it will be.


From my uninformed point of view, most of those things apply to human drivers as well.

Automated trucks seem appealing because they solve (what appears to me to be) a large problem with human drivers - fatigue and allowable working hours.

Building a fixed-route automated truck seems more doable than a self-driving car that can handle arbitrary roads. I don't know if it's actually any easier, but the problems that need to be solved seem smaller in scope.


I think you trade one complexity (fixed vs. variable routes) for many other complexities (a larger vehicle with more demands). Think of the complexity of getting a driver's license for a passenger car versus a commercial driver's license; there are a lot more rules (both physics and human regulations) around a CDL. So yes, you have a fixed route, but is the route really the hard problem to solve? Isn't the hard part monitoring road conditions, identifying objects, stopping for emergencies, etc.?


Let's say we limit the application to just ISO-compliant steel shipping-container transport. It seems like we could apply much of the self-balancing technology to trailer design to account for various load weights and distributions. Even without a redesign of the trailer, it's not too difficult for me to imagine software being better at controlling the truck than a human. The things that are difficult for humans to learn about driving a truck don't seem all that challenging, especially given how limited this problem space is compared to what self-driving cars are attempting: replacing all human transportation in cities.


Fixed-route transport is already solved by trains, and trains can be made self-driving or remotely operated fairly easily. The real problem is variable-route, variable-size transport.


If they platoon or draft (or whatever the word is for when five or so travel nose-to-tail and save fuel because only the front one takes the full air resistance), there can be substantial savings if that is automated.


Peloton (it comes from the French word for platoon, so, close enough).


No, the word relative to automated trucks is actually platoon. https://duckduckgo.com/?q=automated+truck+platoon&t=lm&ia=ab...


You’ve made an argument that it’s easier to automate cars rather than trucks, not that there’s an economic advantage.

Among other things, I’m guessing most americans who own a car don’t do it because they’ve crunched the numbers compared to other transit options and figured out that a car is cheaper. It’s a convenience thing where it’s an option at all.

Meanwhile, it’d be fairly easy to demonstrate marginal savings in a business where transit costs are already under high scrutiny.


> It seems like everything about automating a truck would be more expensive (cost of 1 truck = 5-6 cars)

Isn't that a good thing for adoption?

Here in the agricultural industry, self-driving technology can cost tens of thousands of dollars to add to your equipment. But when a tractor costs hundreds of thousands of dollars, it seems like a drop in the bucket, so buyers are willing to put down the money. You don't see many tractors around these days without it.

On the other hand, tens of thousands of dollars on a $30,000 car, potentially doubling the purchase price or more, makes for a much more difficult pill to swallow.

Humans aren't exactly well known for their rationality. Even if the technology provides tens of thousands of dollars' worth of utility to the car owner, as it does to the farmer, many will struggle to accept paying 100% more for the same car plus one more feature. Farmers see it as a 5% increase in price for one more feature, so, hey, why not tack it on?


The best way would be a hybrid model: on easy, predictable highways (say 90% of the route) use automation, and for the rest (say 10%) have a human driver ready to take over. In the future even the driver part could be removed if you can engage a driver remotely, say with a VR headset giving a 360-degree view; that could also handle emergencies like highway closures. But like any kind of self-driving it's scary: a single mistake can get the entire company sued, versus just the single driver getting sued before.


If anything, it would be much easier for a computer to follow the extra regulations than for humans, who routinely violate them in the first place, in the same way that autopilots only fly the approved routes for planes.

Add in the fatigue and long, repetitive drives that are a feature of long-distance trucking, and it seems like a situation more ripe for automation.


I don't mean to be sarcastic here, but I hear developers (myself included) throw out "it would be much easier to..." or "this should only take an hour to automate..." all day long.

If it were an easy and profitable problem to solve, it would have been solved by now. I agree that it would be very beneficial for everyone if trucks/truck routes were automated (except maybe for the truck drivers getting laid off), but it's obviously a very hard and very risky problem to automate.


Otto is not the only player in the automated trucking space.

https://www.commercialappeal.com/story/money/industries/logi...

In fact, Otto probably suffered from its association with the Waymo scandal, coupled with the recent bad press for Uber's autonomous vehicles. It sounds like Uber is retrenching in favor of just automating cabs for now.


> If it was an easy and profitable problem to solve it would have been solved by now

No one said it was easy. The claim was that it's easier than another problem that also hasn't been solved yet, not that it's easy in some absolute sense.


All of those things would cost more for a truck, indeed, but a truck is also worth WAY WAY WAY more than a car per mile.


Yes but you also get to automate way way less trucks than cars if we look at the current ratio.


Yes, but my [total] guess is that you get to automate way way more truck miles than car miles.



