Cofounder here,
What you see above is a heavily rate-limited demo of our upcoming model.
We realize how dangerous this technology can be and have built a lot of mitigations on our main product (Play.ht) to reduce possible abuse:
- We strictly moderate the generated text of any sexual, offensive, racist, or threatening content. It automatically gets detected and blocked.
- We built, and offer for free, a tool that can identify AI-generated vs. human-generated audio (https://play.ht/voice-classifier-detect-ai-voices/). We will continue to invest in this tool and hope it helps with deploying this technology safely.
- If we get any reports of a cloned voice without consent, we block the user and remove the voice instantly.
- The price of high-fidelity voice cloning is too high for scammers to use at scale; we have been live with it for four months and haven't had any cases of abuse so far.
Like any technology, it has the potential to be abused, and we are working hard to mitigate that and deploy it safely. We will continue to observe the use cases and user feedback and improve the safety of the service accordingly.
Since we launched voice cloning 4 months ago, we have seen enough genuine use cases that we are motivated to keep moving forward and figure out safe ways to make the technology useful for everyone.
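The moderation step described in the first bullet could be sketched as a pre-generation gate that classifies the input text and blocks flagged categories before it ever reaches the TTS model. Everything below is hypothetical: the category names, the keyword heuristic standing in for a real classifier, and the function names are illustrative assumptions, not Play.ht's actual (non-public) pipeline.

```python
# Hypothetical sketch of a pre-generation moderation gate.
# A production system would call a trained content classifier here;
# the keyword lists below are toy stand-ins for its output.

BLOCKED_CATEGORIES = {"sexual", "offensive", "racist", "threatening"}

# Illustrative keyword lists per category (assumption, not a real model).
KEYWORDS = {
    "threatening": {"kill you", "hurt you", "or else"},
    "offensive": {"idiot", "moron"},
}


def classify(text: str) -> set:
    """Return the set of flagged categories for `text` (toy heuristic)."""
    lowered = text.lower()
    return {
        category
        for category, words in KEYWORDS.items()
        if any(word in lowered for word in words)
    }


def moderate(text: str) -> str:
    """Raise if any flagged category is in the blocklist; else pass through."""
    flagged = classify(text) & BLOCKED_CATEGORIES
    if flagged:
        raise ValueError(f"blocked: {sorted(flagged)}")
    return text  # safe to hand to the TTS model
```

The design choice worth noting is that the gate runs on the *input text*, before any audio is synthesized, so nothing flagged is ever generated; the trade-off, as category-based filters go, is that only text falling into the listed categories is caught.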
>We strictly moderate the generated text of any sexual, offensive, racist, or threatening content.
This won't be the problem. My voice calling my parents asking for money to be sent to a random account will be the problem. And none of that will be sexual, offensive, racist, or threatening.
>we are working hard to mitigate that and deploy it safely.
> We strictly moderate the generated text of any sexual, offensive, racist, or threatening content.
This is exactly what makes me so angry about "AI safety" initiatives: they are largely worrying about the wrong thing. People have been so focused on "this may make some obscene joke, or be biased against some skin colors" that they have completely missed the much more serious harms AI will cause with respect to, in this case, impersonation scams.
Congrats, people can't say the N-word with your technology, but they can say "Hi Bob, just calling to verify that we did indeed change the target account where you should wire your invoice payment."