
Thanks for the in-depth, passionate follow-up! Right off the bat I want to clarify that I was talking about human cognition in general, not just typical attorney work. I'd stand by the assertion that it's hallucination all the way down, at the very least "hallucinating a symbolic representation of the book passage you read two seconds ago".

Re: LLMs and law, I agree with all your complaints 100% if we constrain the discussion to direct, simplistic, "chatbot"-style systems. But that's simply not where the frontier is. LLMs are a groundbreaking technique for building intuitive components inside a much larger computational system, one that otherwise looks like existing complex software. We're not excited about (only) wild new standalone products; we're excited about enhancing existing products with intuitive features.

To briefly touch on your very strong belief that LLMs are a bad architecture for legal tasks: I couldn't disagree more. LLMs specialize in linguistic structure, somewhat tautologically. What's not linguistic about individual atomic tasks like "review this document for relevant passages" or "synthesize these citations and facts into X format"? Lawyers are smart and do lots of deliberation, sure, but that doesn't mean they're above using intuition.
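To make "intuitive component inside a larger system" concrete, here's a minimal sketch in Python. The llm_complete(prompt) helper is hypothetical, standing in for whatever model API you'd actually wrap; the point is that ordinary deterministic code owns the control flow, and the model is consulted only for the one step that needs linguistic judgment:

    from typing import Callable

    # Hypothetical stand-in for any text-completion API wrapper.
    LLMComplete = Callable[[str], str]

    def find_relevant_passages(document: str, issue: str,
                               llm_complete: LLMComplete) -> list[str]:
        # Ordinary code handles splitting, iteration, and filtering;
        # the LLM answers one narrow yes/no question per passage.
        relevant = []
        for passage in document.split("\n\n"):
            verdict = llm_complete(
                "Does the following passage bear on the issue "
                f"'{issue}'? Answer YES or NO.\n\n{passage}"
            )
            if verdict.strip().upper().startswith("YES"):
                relevant.append(passage)
        return relevant

The toy filter itself isn't the point; it's that the model's failure surface shrinks to one well-scoped question per call, which the surrounding software can validate, retry, or log like any other component.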

Insofar as we're in an argument of some kind, my closing argument is that people as a whole can be pretty smart, and there's suddenly a HUGE wave of money going into the AI race, one that dwarfs the earlier "Silicon Valley era" altogether. What are the chances that you're seeing a super obvious problem that they're all missing? Remember, this isn't just stock-price speculation; it's committed investment of huge sums of capital into this specific industry.


