FSD is a substantial subset of AGI: driving is full of edge cases where you need to be able to reason about unusual conditions or unexpected behavior from other drivers, understand what someone like a flagger or police officer is telling you, etc.
It is not obvious that even level 5 FSD will require either self-awareness or a theory of mind: adequate modeling of the possible behaviors of nearby actors in the immediate future may be "all" it takes, and current systems struggle even with that.
Of course, if one were to define FSD as being so capable that the system could also likely pass the Turing test (or whatever better measure of AGI we eventually replace it with), then by definition it would be nearly as hard a problem as AGI itself.
I said “substantial subset” precisely to avoid this kind of tangent. My point was simply that there are a lot of edge cases with no clear path to a solution, which are masked at the current level of autonomy by punting the problem to the human driver. We are a very long way from being able to build cars without manual controls, even if we hit the point where a majority of driving miles are automated long before then.
Cars without manual controls are closer than you think. Waymo's cars do just fine without a human driver in the seat, and before its operations were suspended, Cruise was just starting a pilot program with a vehicle without manual controls, the Cruise Origin. Sale of such a vehicle to the general public is a ways off, but for a taxi service we're pretty close.
Cruise reportedly had human interventions every 4-5 miles. I haven’t seen a similar figure for Waymo, who are generally believed to be considerably better.
"Substantial subset" is precisely the claim here that I have my doubts about. I think it is entirely possible (for the reason I gave previously) that AGI is at least as far beyond level 5 autonomy as level 5 autonomy is from the current state of road vehicle automation.