Rather than demeaning someone's effort, learn why specific methods are considered appropriate/state-of-the-art when solving certain problems.
This is an unbelievably wrong comment. All the Lyapunov and traditional process control theory in the world won't help you solve autonomous driving.
Also, regarding "Guarantees and Safety": they don't magically appear out of thin air when you use traditional process control, especially in noisy domains like autonomous driving.
This comment is the equivalent of "I can write code to solve Atari Pong deterministically in any programming language, so any post showing Deep Reinforcement Learning is stupid"...
So control theory is ok for unmanned aerial drones but autonomous driving is just too far? Control theory can't handle noisy domains?
Guarantees of safety (more accurately, stability) are the entire point of Lyapunov analysis, and it's used on noisy systems all the time (https://www.mathematik.hu-berlin.de/~imkeller/research/paper...). Can you point to a specific noisy system that control theory is ill-suited for?
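To make that concrete, here is a toy numerical check (my own made-up example, not the system from the linked paper): a noisy scalar system with V(x) = x^2 as the candidate Lyapunov function. Despite the noise, V shrinks in expectation until it reaches the noise floor.

    import numpy as np

    # Toy example (made up, not from the linked paper): noisy scalar system
    #   dx = -k*x dt + sigma dW, candidate Lyapunov function V(x) = x^2.
    # Along trajectories V should shrink in expectation down to a noise floor.
    k, sigma, dt, steps, runs = 2.0, 0.3, 0.01, 2000, 500
    rng = np.random.default_rng(0)
    x = np.full(runs, 5.0)                        # start far from the origin, V = 25
    for _ in range(steps):
        x += -k * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(runs)
    print("mean V(x) after simulation:", np.mean(x**2))   # ~sigma**2/(2*k), not 25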
Once you have a path to follow, classical control theory can be used to control the steering angle to follow it.
But classical control theory hasn't been able to extract, from camera pixels, the open path in a road with cars, bicycles, and pedestrians. Camera inputs are million-dimensional, and there aren't accurate theoretical models for them.
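To be concrete about the first point: once the path and your position relative to it are known, the steering part really is textbook control. A minimal sketch (the kinematic bicycle model, gains, and speed are all made up for illustration):

    import numpy as np

    # Follow the line y = 0 with a kinematic bicycle model and a simple
    # proportional steering law on cross-track and heading error.
    L, v, dt = 2.5, 5.0, 0.05          # wheelbase [m], speed [m/s], time step [s]
    k_y, k_psi = 0.5, 1.5              # cross-track and heading gains (hand-picked)
    x, y, psi = 0.0, 2.0, 0.3          # start 2 m off the path, heading away from it
    for _ in range(400):
        delta = np.clip(-k_y * y - k_psi * psi, -0.5, 0.5)   # steering angle [rad]
        x += v * np.cos(psi) * dt
        y += v * np.sin(psi) * dt
        psi += v / L * np.tan(delta) * dt
    print("final cross-track error [m]:", round(y, 3))        # converges near 0

The hard part is getting that path, and the car's position relative to it, out of camera pixels in the first place.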
Unmanned drones are orders of magnitude easier, since there is nothing you can just fly into once you are above a few hundred feet. They also don't have to rely on any vision-based sensing. E.g. a drone has altitude, current speed, and heading, all of which, while noisy, can easily be represented as a small set of values.
The whole of Lyapunov and control theory assumes perfect knowledge of the sensors: even though the signal itself might be error-prone, you have a signal. In the case of autonomous driving, even in simple cases like those described in the blog posts, knowing the exact position of the markers and then using them to tune the controller is not as easy as you might think.
The end-to-end system shown here solves three problems: it processes the images to derive the signal, it then represents that signal optimally to the controller, and it then tunes the controller using the provided training labels.
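For reference, the rough shape of such an end-to-end steering regressor (the architecture, image size, and training loop below are illustrative, not the actual network from the post): pixels in, one steering value out, trained by plain supervised regression against recorded steering labels.

    import torch
    import torch.nn as nn

    # Illustrative end-to-end steering regressor: camera frame -> steering value.
    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            )
            # 64 * 6 * 23 is the flattened conv output for a 66x200 input frame
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 6 * 23, 64),
                                      nn.ReLU(), nn.Linear(64, 1))

        def forward(self, img):                  # img: (batch, 3, H, W)
            return self.head(self.features(img))

    model = SteeringNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    imgs, labels = torch.randn(8, 3, 66, 200), torch.randn(8, 1)   # stand-in batch
    loss = nn.functional.mse_loss(model(imgs), labels)             # one training step
    opt.zero_grad(); loss.backward(); opt.step()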
I cited Lyapunov more as the ABC of nonlinear controls; much more can be done in an analytical fashion. The "end-to-end" system here does not "solve" anything. It is a trained steering-command regressor, nothing fancy; it's likely to work in this guy's living room, under certain lighting conditions, and there is no way of predicting its accuracy, sensitivity or anything else. Engineers have been breaking systems down into subsystems for a reason -> tractability of testing and improvement. End-to-end systems like that have close to zero value if you need something reliable.
Obviously, my message was slightly provocative. Deep learning methods and classical controls (which, by the way, are able to quantify robustness to plant uncertainties and noisy signals) are all very useful, but they should be used in combination. End-to-end techniques that bundle perception, planning and control in an opaque net are fun to play with (like in this article); it's just very sad to see people believing this produces robust, safety-critical systems, and we see too many such articles on HN.
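A minimal sketch of the kind of combination I mean (every function name and number here is illustrative): a learned perception stage whose only job is to estimate lane offset and heading error, feeding a plain, separately testable steering law. Each stage can then be validated and improved on its own.

    import numpy as np

    def perceive(frame):
        # Stand-in for a trained net; in a real pipeline this is the only learned part.
        return 0.4, 0.05                       # (lateral offset [m], heading error [rad])

    def steer(offset, heading, k_y=0.5, k_psi=1.5, max_angle=0.5):
        # Classical part: a proportional law you can analyze and test without any learning.
        return float(np.clip(-k_y * offset - k_psi * heading, -max_angle, max_angle))

    frame = np.zeros((66, 200, 3))             # placeholder camera frame
    offset, heading = perceive(frame)
    print("steering command [rad]:", steer(offset, heading))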
I agree with you in the sense that if there is a known and reliable way to map knowledge and information from one domain to another (e.g. from desired trajectory + perceived current position to steering inputs), I'd much prefer that over black-box-ish neural nets. Neural nets aren't meant to be a silver bullet.
But in this case, any kind of state-space control also requires rather precise knowledge of the physical laws that govern the dynamics of the vehicle. When such information is not available, can neural nets do a decent job of mimicking an analytical control algorithm? I think that's an interesting problem worth exploring.
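A toy version of that question (everything below is illustrative): take a known analytical state-feedback law u = -K x as the "expert", train a small net purely from its input/output samples, and see how closely the net reproduces it.

    import torch
    import torch.nn as nn

    K = torch.tensor([[2.0, 1.2]])                     # made-up feedback gains

    def expert(x):                                     # x: (batch, 2) -> u: (batch, 1)
        return -x @ K.T

    net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        x = 4 * torch.rand(256, 2) - 2                 # states sampled in [-2, 2]^2
        loss = nn.functional.mse_loss(net(x), expert(x))
        opt.zero_grad(); loss.backward(); opt.step()
    x_test = torch.tensor([[1.0, -0.5]])
    print("expert:", expert(x_test).item(), "net:", net(x_test).item())   # should be close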