Multiple lawyer friends of mine are using ChatGPT (and custom GPTs) for contract reviews. They upload some guidelines as knowledge, then upload any new contract for validation. Allegedly it replaces hours of reading, which in some cases is a large portion of the work. Some of them also use it to debate a contract, to see if there's anything they overlooked or to find loopholes. LLMs are extremely good at that kind of constrained creativity where they _have_ to produce something (they suck at saying "I don't know" or "no"), so I guess it works as a "second brain" of sorts for those tasks too.
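For the curious, the workflow boils down to something like this. A minimal sketch in Python, assuming the OpenAI SDK; the model name and the guidelines/contract file names are placeholders, not anything my friends actually shared:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The firm's review guidelines act as standing instructions
    # ("knowledge"); each new contract is passed in for validation.
    guidelines = open("review_guidelines.txt").read()
    contract = open("new_contract.txt").read()

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a contract reviewer. Flag every clause "
                        "that conflicts with these guidelines, and list "
                        "possible loopholes:\n" + guidelines},
            {"role": "user", "content": contract},
        ],
    )
    print(resp.choices[0].message.content)

A custom GPT is just this pattern with the guidelines baked in as uploaded knowledge instead of stuffed into the system prompt.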
Five years later, when the contract turns out to be defective, I doubt the clients are going to be _thrilled_ with “well, no, I didn’t read it, but I did feed it to a magic robot”.
It only has to be less likely than a paralegal to cause that issue for it to be a net positive.
Some people expect AI to never make mistakes when doing jobs where people routinely make all kinds of mistakes of varying severity.
It’s the same as how people expect self-driving cars to be flawless when they think nothing of a pileup caused by a human watching a reel while behind the wheel.
My understanding is that the firm operating the car is liable in the fully self-driving case of commercial vehicles (Waymo), while the driver is liable in supervised self-driving cases (a privately owned Tesla).
There are even reported cases of entire pieces of legislation being written with LLMs already [1]. I'm sure there are thousands more we haven't heard about, the same way researchers are writing papers with LLMs without disclosing it.
[1] https://olhardigital.com.br/2023/12/05/pro/lei-escrita-pelo-...