Maybe I wasn't that clear, but I did say in my original post:
> I used to think AI would replace doctors before nurses, and lawyers before court clerks - now I think it's the other way around. The doctor, the lawyer - like the software engineer - will simply be more powerful than ever and have lower overhead. The lower-down jobs will get eaten, never the knowledge work.
Yet you and a few other people insist I'm saying "AI will replace human judgment" - why? I'm saying the doctor, the lawyer, the software engineer, etc. aren't replaced. It's more like the technician just got a better technical manual, not like they were replaced by it.
I did not. I pointed out that you assumed a similarity between human judgment in courts and technical documentation or medical diagnostics, and asked on what grounds you make this assumption.
It can't be that engineering and biology are so similar to jurisprudence, because they aren't. There has to be another reason for you to lump them together.
Again, human judgment is not replaced in either scenario; I'm talking about a tool the lawyer, the doctor, etc. would use.
Lawyers and doctors are often listed as comparable examples because both professions involve sensitive information you can't afford to get wrong, unlike creative AI use cases such as image or song generation.
As stated, because both involve sensitive and personal information about people - unlike, say, Stable Diffusion, which uses AI for creative image generation.
> why do lawyers and doctors get it wrong all the time
Because they're human. "Medical error" has been among the top five causes of death in the United States for several years. Our legal system is also far from perfect and could use the help - consider systemic biases, or people wrongly convicted who spent years behind bars because of human error, bias, or omissions of information.
How did you come to this conclusion?