Hacker News

If a common-sense LLM is listening to grandma's calls (privacy alarms going off but hear me out), it can stop her from wiring her life savings to an untrustworthy destination, without having seen that particular scam before.
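The call-screening idea above can be sketched as a hook that runs before a transfer goes through. Everything here is hypothetical (the function names, the red-flag list); a real system would delegate the judgement to an LLM rather than a fixed phrase list, which is exactly what would let it catch scams it has never seen before.

```python
# Hypothetical stand-in for the LLM's judgement: a few phrases that
# commonly appear in phone scams. A real implementation would send the
# transcript to a model and ask for a risk assessment instead.
RED_FLAGS = [
    "wire transfer",       # urgent requests to move money
    "gift card",           # untraceable payment methods
    "act now",             # artificial urgency
    "do not tell anyone",  # secrecy demands
]

def looks_like_scam(transcript: str) -> bool:
    """Return True if the transcript trips any red-flag phrase."""
    text = transcript.lower()
    return any(flag in text for flag in RED_FLAGS)

def screen_call(transcript: str) -> str:
    """Advise whether to pause a pending transfer based on the call."""
    if looks_like_scam(transcript):
        return "PAUSE: possible scam, confirm with a trusted contact first"
    return "OK"
```

A phrase list is the weakest possible version of this: it fails on any rewording, which is the gap an LLM's generalization is supposed to close.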



Once we can run our own personal, private LLMs, it will definitely open up a world of possibilities.

Actually, applications like this will probably be implemented on cloud-based models, since 98% of the public does not care about privacy as much as people on this forum.


It will also open up a whole new category of vulnerabilities of the 'how to fool the LLM while convincing granny' type. Then there is the liability question: if the LLM is lured by one of those and granny sends her money to Nigeria or the punks around the corner (take your pick), is the LLM vendor liable for part of the loss? It may come to resemble the conundrum in self-driving vehicles, where a nearly-perfect but sometimes easily fooled system lulls drivers into a false sense of security because it has never failed, until the day it does not see the broken-down car standing in the middle of the road and slams right into it. When granny comes to rely on the robot voice telling her what is suspect and what is not, she may end up trusting the thing over her own better judgement, just like the driver who dozed off behind the wheel of the crashed self-driving vehicle did.



