This is the same kind of "automation" that is in place at most mass transit systems in the US (sans DC, which still uses human operators).

The requirement of having that human operator is typically borne of labor requirements first, and not necessarily of the need to have someone's hand over the "Oh shit" button.



Sans New York also -- with one exception, all of the MTA's trains have human operators, and the majority have two-man crews. It has been possible for a single person to operate the MTA's subway trains for decades, but attempts to do so on "full length" trains (most lines; the shuttles and the G are the main exceptions) have generally resulted in failures to meet safety requirements, e.g. evacuating passengers quickly enough during a fire. It is overly simplistic to claim that this is strictly about labor requirements; while the TWU certainly works to keep its members employed, there are many other factors, at least in NYC. Also notable are the various attempts to increase automation, which have failed not because of the union, nor because the technology wasn't up to the task, but because the MTA does not know how to hire competent engineers for its projects (despite having billions of dollars available to pay them).


The 'Oh shit' button is not something we can reasonably expect the driver of a 'partially' automated train to be able to press: humans just aren't good at maintaining attention for tasks that demand half-second reaction times but come up maybe once in fifty years of service. They can occasionally make useful decisions about edge cases that represent small chances of failure (like wet-rail operation), which automated operators may fail to account for.

The human driver instead serves as a sort of last-ditch assurance that the transit company values human life enough that a jury wouldn't destroy them in the event of an accident. He performs a sacrificial function by being the first one to die in the event of a crash, at which point it becomes a tragedy for the operating company, milliseconds before the people behind him die, at which point it becomes a tragedy for the passengers. There is no point, therefore, at which a disaster can be seen as a tragedy for the passengers but not the operating company, a position fraught with political-legal consequences in a US corporate and municipal environment ever obsessed with liability. The failure to respond adequately is implied to be at least partially the fault of the late driver, sufficient to draw fire until the panic dies down.

Absent liability issues, we would all be riding labor-less cars and trains that were perfectly safe, because we would have insisted on using them once they were mature enough to be safer than individual automobile drivers, learned from each crash that happened afterwards, and improved our algorithms iteratively. Instead, every time we have a crash we blame the algorithm's existence rather than tweak it, switch to a more human-intensive mode, let the automated infrastructure rot, add weight to our trains, and decry the tragic no-fault coincidence of driver inattention and algorithm failure that doomed the driver and the passengers.

The DC Metro was designed for full automation. Rather than being implemented and improved over time, the automation was scaled back after the predictable initial failures, first partially and later fully; now it's not even a realistic capability, because the infrastructure has degraded.


I don't know of many other heavy-rail systems where the control system has full authority over the control of the train. Most systems can stop the train if the operator exceeds the safety limits, but it's up to the operator to do routine things like stop at a station, open the doors, and accelerate away from the station.
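For concreteness, here's a rough sketch of the distinction being drawn: a protection-style supervisor that only enforces limits (the operator still stops at stations, works the doors, and pulls away), versus a full-operation controller that also issues the routine driving commands. This is purely illustrative -- the class names, command strings, and numbers are made up and don't correspond to any real signalling system's interface.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrainState:
        speed_kph: float
        at_station: bool
        doors_open: bool

    class AtpSupervisor:
        # Protection only: enforce limits, never drive (hypothetical model).
        def __init__(self, speed_limit_kph: float):
            self.speed_limit_kph = speed_limit_kph

        def check(self, state: TrainState) -> Optional[str]:
            # Intervene only on a violation; routine driving stays with the operator.
            if state.speed_kph > self.speed_limit_kph:
                return "emergency_brake"
            if state.doors_open and state.speed_kph > 0:
                return "emergency_brake"
            return None

    class AtoController(AtpSupervisor):
        # Full operation: the control system also handles the routine tasks.
        def command(self, state: TrainState) -> str:
            violation = self.check(state)
            if violation:
                return violation
            if state.at_station:
                return "open_doors" if not state.doors_open else "hold_for_dwell"
            return "accelerate_to_target_speed"

    atp = AtpSupervisor(speed_limit_kph=80)
    print(atp.check(TrainState(speed_kph=95, at_station=False, doors_open=False)))  # emergency_brake
    ato = AtoController(speed_limit_kph=80)
    print(ato.command(TrainState(speed_kph=0, at_station=True, doors_open=False)))  # open_doors

In the first mode the control system only has veto power; in the second it has full authority over the train, which is the rarer arrangement on heavy rail.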



