The interesting challenges will come when trying to explain
to a jury just why a sufficiently esoteric algorithm
(AI, ML, DL) chose the action it took (surgery vs. “watchful waiting”, braking vs. “lane following”, buy vs. sell, etc.).
All of this is going to put ethics into question. People who provide mental health care to truly sick people (people who want to hurt other people) understand very clearly how many factors have to align to produce such a person. That changes our definition of autonomy; that changes what we believe.
These are things that are core to people who don't understand computation, and they are core to ego - what makes the lives we live better than the lives we compare ourselves to? That's the lion inside us, the one that doesn't give a shit who gets ripped to shreds (or simply can't afford to think about it). I know I'm a good person because I have hurt fewer people than all the others. But that's not true. I tell myself this, but is it something I can prove?
There are profound arguments for why a machine can make a better calculation than a human can: it has access to more information. If people can't accept that, that's their own ego talking.
Create a job called “computer science lawyer”. Make sure the judge understands computer science. Explain the computation to the jury in a way that shows how the algorithm was designed, and align that with our present understanding of psychology. Checks and balances.
That's why I don't see it coming to law anytime soon. It's the same reason Google removed the AI from the search side. Anywhere you need to explain why an answer was chosen, AI is a poor solution, because it is so hard to debug.
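To make that "hard to debug" point concrete, here is a minimal sketch contrasting a model you could walk a jury through with one you couldn't. The loan-approval framing, the toy data, and the scikit-learn models are my own illustration, not anything from the argument above:

```python
# A minimal sketch of the explainability gap (illustrative toy example).
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Toy loan decisions: [income, debt] -> approve (1) / deny (0)
X = [[60, 10], [30, 25], [80, 5], [20, 30], [50, 15], [25, 20]]
y = [1, 0, 1, 0, 1, 0]

# A shallow decision tree yields a human-readable rule trail:
# you can recite, step by step, why it chose what it chose.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "debt"]))

# A neural net answers the same question, but the "why" is buried
# in weight matrices; there is no rule trail to read back out.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(net.predict([[40, 18]]))  # a verdict, with no explanation attached
```

The tree prints something a lay jury can follow (“if income > X and debt < Y, approve”); the network just emits an answer. That asymmetry is the whole problem.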