
And with a lot of people uploading reviews or DIY videos to YouTube, it's going to be pretty easy to harvest enough of a voice print. If you've written comments on forums and other social media, such as right here on HN, they also have your writing style to work with.



I think we're on the verge of few-shot believable voice impersonation. Between that, real-time deepfake videos, and AIs being more than good enough to solve CAPTCHAs, it seems like we're at most a few years from having no means of verifying a human on the other end of any given digital communication, unless someone figures out and implements a new solution quickly.


> few-shot believable voice impersonation

We have been there for the past few years. And yes, it has been actually used for scamming.

Submitted only a couple of days ago: Voice scams: a great reason to be fearful of AI - https://news.ycombinator.com/item?id=35459261


There are (mostly) solutions, but a lot of people won't like them. As with things today like notarized signatures or just transacting in person, they basically depend on some sort of in-person attestation by a reliable authority. Of course, that means adding a lot more friction to certain types of transactions.


I can see how that might destroy many business models. But off the top of my head I can't come up with any whose loss would have a dramatic negative effect on my wellbeing. Could someone elaborate why I should be worried?


Why would passwords, personal devices, policed platforms, etc. fail as an authentication method between known counterparties? Between unknown counterparties the issue is much bigger than just whether there's a human on the other end.


It does make it kind of hard to verify someone's identity.

That said, I think trying to verify someone's identity through online means only became viable a few years ago, when everyone had a somewhat working camera and microphone available, and with any luck the risk of deepfakes will cause an early end to the scourge of people trying to film themselves holding a form of ID.


Online verification of identity might just not suffice for some things, or there might be specialized IDs for online purposes, etc.


During COVID, brokerages did start allowing online transactions for certain things they hadn't previously. However, at least my brokerage has since reverted to requiring either direct or indirect in-person presence.


If a common-sense LLM is listening to grandma's calls (privacy alarms going off but hear me out), it can stop her from wiring her life savings to an untrustworthy destination, without having seen that particular scam before.
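For what it's worth, here is a rough sketch of how that kind of screening could be wired up. `call_llm` is just a placeholder for whatever local or cloud model actually does the listening, and the prompt, risk levels, and function names are made up for illustration:

    import json

    def build_prompt(transcript: str) -> str:
        # Ask the model for a structured verdict so the caller can act on it.
        return (
            "You are screening a phone call for signs of financial fraud "
            "(urgency, secrecy, wire transfers, gift cards, impersonation of "
            "family members or officials). Reply with JSON only: "
            '{"risk": "low|medium|high", "reason": "<one sentence>"}\n\n'
            "Transcript:\n" + transcript
        )

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever local or cloud model is doing the listening."""
        raise NotImplementedError

    def should_pause_transfer(transcript: str) -> bool:
        """Return True if a wire transfer should be held for human review."""
        verdict = json.loads(call_llm(build_prompt(transcript)))
        return verdict.get("risk") in ("medium", "high")

The hard part is everything around this (getting a clean transcript, deciding when to actually hold a transfer), not the classification itself.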


Once we can run our own personal, private LLMs, it will definitely open up a world of possibilities.

Actually, applications like this will probably be implemented on cloud-based models, since 98% of the public does not care about privacy as much as people on this forum do.


It will also open up a whole new category of vulnerabilities of the 'how to fool the LLM while still convincing granny' type. Then there is the liability question: if the LLM falls for one of those and granny sends her money to Nigeria or the punks around the corner - take your pick - is the LLM vendor liable for (part of) the loss?

In this it may start to resemble the conundrum in self-driving vehicles, where a nearly-perfect but occasionally easily fooled system lulls drivers into a false sense of security because it has never failed - until it doesn't see the broken-down car standing in the middle of the road and slams right into it. When granny comes to rely on the robot voice telling her what is suspect and what is not, she may end up trusting the thing over her own better judgement, just like the driver who dozed off behind the wheel of the crashed self-driving vehicle did.


We will just get ChatGPT 6 to solve it for us. Done.


It is not just the fact that the poster writes «Done.» when, comically, the proposal is not actually "done" at all,

nor the further point that statistical Large Language Models are not problem solvers; they are peculiar within Machine Learning for optimizing toward goals orthogonal to actual "solutions" (not to mention that even Sam Altman, proud as he is of the results («They are not laughing now, are they»), is the first to raise the alarm when people gush "So it's AGI!?" at him),

but it must be noted that these scams happen because people take their responsibilities too lightly (among them, realizing that they are no longer in the eighteenth century) - and the post has a tint of this dangerously laid-back attitude.


I'm sorry, I was being sarcastic, it was a crappy comment and it was unhelpful.


No problem on this side, C.; I just used the occasion of your input to make a substantive point.

(I would suggest marking sarcasm, e.g. with '/S' or equivalent. Once upon a time we all thought rhetoric was immediately recognizable; then we met people who would believe ideas beyond the boundary of "anything".)



