
> I agree, but "1" must include all tasks where a mistake could lead to liabilities for the company, which is probably most tasks

If you hire a junior programmer and they make a mistake, they aren't held liable either. Sure, you can fire them, but unless there's malice or gross negligence the liability buck stops at the company. The same goes for the wealth of software already involved in producing software and making decisions. The difficulty of suing Microsoft or the LLVM project over compiler bugs hasn't stopped anyone from using their compilers.

I don't see how LLMs are meaningfully different from anything else a company assumes liability for, whether that's the employees it hires or the software it runs. Even if they were AGI it wouldn't meaningfully change anything. You decide whether the benefits outweigh the risks, and adjust that calculation as you get more data on both. Right now companies are hesitant because the risks are both large and uncertain, but as we get better at understanding and mitigating them, LLMs will be used more.




Even with a junior there is generally a logic to the mistake and a fairly direct path to improving in the future. I just don't know whether a model that statistically picks the next token is going to be able to get to that level.


Good thing LLMs aren't a glorified statistical model, then, eh?

Anyway, why wouldn't there be? You reach out to the parent company with an issue and a request for improvement. If you're a big enough client, you get your request prioritized higher. Same as with any other product that's part of your product today.

Applying LLMs today isn't straight-up text in, text out; it has become more complex than that. Enough that the system can be improved without improving the LLM itself.
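
To make that concrete, here's a minimal sketch (Python, with a hypothetical call_llm() stand-in for whatever API the vendor exposes) of how the scaffolding around a model, prompt construction, output validation, retries, can be tightened without retraining or swapping the model:

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real vendor API call.
        return '{"summary": "placeholder"}'

    def summarize(ticket_text: str, max_retries: int = 2) -> dict:
        prompt = (
            "Summarize the support ticket below as JSON with a single "
            '"summary" field.\n\n' + ticket_text
        )
        for attempt in range(max_retries + 1):
            raw = call_llm(prompt)
            try:
                parsed = json.loads(raw)
                if isinstance(parsed, dict) and "summary" in parsed:
                    return parsed  # output passed validation
            except json.JSONDecodeError:
                pass
            # Tighten the instructions and retry; no model change required.
            prompt += "\n\nReturn only valid JSON, nothing else."
        raise ValueError("model output failed validation after retries")

    print(summarize("Customer reports the login page times out on mobile."))

Prompts, validation, retries, routing between models: all of that is the application, and all of it can improve on your own schedule.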

Your argument is moot.





