Perspective: nearly 50 years ago we already had Category IIIa instrument landing systems on commercial jet aircraft, capable of landing without pilot input. We had reliable gyroscopic autopilots capable of keeping an aircraft on a set course long before that.
Vast sums of money have been sunk into developing exceptionally reliable avionics systems since then.
Passenger carrying aircraft still all have a pilot or two.
And to complete the thought: the main reason human pilots are still on all those aircraft is that the automated/computerized systems still can't make decisions that match what the humans can do. The computer is really great at maintaining heading, altitude and speed for a set period of time, but really bad at deciding which heading, which altitude and which speed, and for how long. The best you can do, as far as I'm aware, is have a human lay out the desired route and punch it into the FMC; then all bets are off if conditions change along the way, and the humans have to decide what to tell the plane to do again.
The exact opposite has happened to cars. We've delegated route building to our GPS devices, and we maintain heading and speed until the computer tells us to do something different.
And those decisions themselves could easily be automated, but people choose not to. There's a checklist for making those decisions. If you can write a checklist, you can write code. And when there's no checklist, you can pick randomly, because that's what the human is essentially doing.
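The "checklist as code" claim above can be sketched directly: each checklist item becomes a condition/action pair evaluated in order, with a random fallback when nothing applies (the "pick randomly" case). The conditions and actions here are hypothetical toy examples, not real avionics logic.

```python
import random

def run_checklist(checklist, state, fallback_actions):
    """checklist: list of (condition, action) pairs. Each condition is a
    predicate over `state`; the first one that matches wins."""
    for condition, action in checklist:
        if condition(state):
            return action
    # No checklist item applies: pick randomly, as the comment suggests.
    return random.choice(fallback_actions)

# Hypothetical toy checklist for illustration only.
checklist = [
    (lambda s: s["fuel_kg"] < 500, "divert to nearest airport"),
    (lambda s: s["turbulence"] > 7, "request lower altitude"),
]

decision = run_checklist(
    checklist,
    {"fuel_kg": 2000, "turbulence": 2},
    ["hold course", "request vector"],
)
```

The ordering of the pairs encodes the checklist's priority, just as a printed checklist does.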
A human can notice when new evidence becomes relevant. If something isn't already on a computer's checklist when it becomes relevant, the human has a better chance of deciding what to do.
EDIT: I understand that there is research going into making computers recognize "novel" situations. My comment only applies to algorithms that contain "checklists".
No, then you need to get meta. There's a checklist for identifying "novel" information and reacting. By checklist I mean algorithm.
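The meta-checklist idea can be sketched as an algorithm whose only job is to flag inputs as "novel" (outside everything the regular checklist was written for) and route them to a separate handler. The sensor names and ranges below are hypothetical, chosen only to show the structure.

```python
def is_novel(reading, expected_ranges):
    """A reading counts as 'novel' if any monitored value falls outside
    the range the regular checklist was designed for."""
    return any(
        not (lo <= reading.get(name, lo) <= hi)
        for name, (lo, hi) in expected_ranges.items()
    )

# Hypothetical operating envelope for illustration only.
expected = {"airspeed_kt": (120, 350), "oil_temp_c": (40, 120)}

def react(reading):
    if is_novel(reading, expected):
        return "escalate"          # hand off to the novelty-handling path
    return "run normal checklist"  # stay on the ordinary checklist
```

This is still an algorithm, which is the point: "recognize novelty" is itself a checklist item, one level up.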
Sure, it can be tough to tease out the algorithm from the minds of current pilots. Luckily in aviation we already have training manuals. In other domains it'd be tougher to know when we're done extracting knowledge from humans.
It's not the landing that's difficult, it's the "write an algorithm which correctly identifies which specific parameters should cause the aircraft to choose a river near the airport as its landing site, and get approval for it" that's the difficult part.
How often does it happen, though? You don't need a human on every flight in order to deal with that. Build a few ground-based emergency remote-control centers across the continent, and train the autopilots (or the air traffic controllers) to call them when things get bad.
I'm not convinced anyone in the commercial aviation industry would agree with you.
Some on-ground duty pilot sitting in a simulator, suddenly handed control of an aircraft that reported an anomaly seconds before landing, has absolutely zero chance of making the same judgement call as a human (co)pilot who's been sitting at the controls for the last few hours. And the likelihood of human input being required is highly correlated with the likelihood that the aircraft has lost connectivity and/or that some of the equipment that's supposed to be sending back data is malfunctioning.
(and yeah, you could build in more redundancy, but that still doesn't eliminate the problem and probably costs more than the pilot)
Pilots on modern large aircraft are more like managers who tell the aircraft what to do, and fly them manually only enough to keep within minimum legal requirements (minimum numbers of landings, etc.).