I don't know about the disengage rates, but the training data alone scares the bejeebus out of me. Some training data was released and posted here on HN a while back, and I'm pretty sure the post was about how bad it was. There were all sorts of crazy things, like signage labeled as being in the trees, and other data that was obviously wrong to a human. Yet this was supposedly the training data for some ML system. At that point I just shook my head and walked away from any interest in the subject.