If autopilot created a situation in which a crash was inevitable, with no warning to the user. For example, if it suddenly yanked the steering wheel and swerved right for no good reason, spinning the car out within a few hundred milliseconds.
Even if autopilot caused a crash like the one above, you still have to run a cost-benefit analysis of whether it prevents more crashes than it causes. Considering the failure rate of human drivers, it's OK for autopilot to have a non-negligible failure rate, and it doesn't really matter whether those failures happen in the same places as the human failures.
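To make that cost-benefit framing concrete, here's a minimal sketch with purely hypothetical per-mile crash rates (none of these numbers are real statistics): as long as autopilot's rate is below the human baseline over the same miles, the net effect is positive, even though some of its individual failures are crashes a human would never have had.

```python
# Minimal cost-benefit sketch. All numbers are made up for illustration,
# not real crash statistics.

MILES_DRIVEN = 1_000_000_000        # hypothetical fleet miles per year
HUMAN_CRASH_RATE = 2.0e-6           # hypothetical crashes per mile, human driving
AUTOPILOT_CRASH_RATE = 1.5e-6       # hypothetical crashes per mile, autopilot engaged

human_crashes = MILES_DRIVEN * HUMAN_CRASH_RATE
autopilot_crashes = MILES_DRIVEN * AUTOPILOT_CRASH_RATE
net_prevented = human_crashes - autopilot_crashes

print(f"Human baseline:  {human_crashes:,.0f} crashes")
print(f"With autopilot:  {autopilot_crashes:,.0f} crashes")
print(f"Net prevented:   {net_prevented:,.0f} crashes")
# A positive net means autopilot prevents more crashes than it causes,
# even if the crashes it does cause are ones a human would have avoided.
```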
Having your hands on the steering wheel doesn't mean you have affirmative control. That's why the example involves putting the car into an unrecoverable scenario faster than an attentive person can react.