I would consider studios taking voice actors' voices and using them to generate new content beyond their contract to be abuse. I'm sure big corporations are rubbing their hands in anticipation, but killing the VA industry would make the world just a tiny bit worse for everyone else.
Mods are more difficult to attach a moral judgement to. I don't think I'd consider them malicious as long as they're not sold, but there's a very thin line between a high-quality mod and stealing someone's voice.
I think it will probably kill the VA industry's current business model. The ability to generate as much audio content as you like, without the risk of the VA becoming unavailable (dead, booked out, ...), is just too good to pass up.
Instead we will probably see licenses for generated voices. In the case of games, the developer could make the voice model freely available for mods of their game. (Mods already use assets from the game, so why not audio too?)
On the other hand, why shouldn't voice actors benefit from this tech?
I can easily imagine a future where AI-generated impersonations are deemed by courts or new legislation to be protected by personality rights. In that world, voice actors could expand their business by offering deeply discounted rates for AI-generated work.
Alternatively, if/when tech like Play.ht is consistently good enough, maybe it just becomes a standard practice for all voice acting work to include a combination of human- and AI-generated content, like a programmer using Copilot or a writer using GPT.
Sure, why not? If you could earn more money and produce more value to society with the same amount of labor, and the legal/regulatory environment supported it, I wouldn't see a reason not to.
If you had a solo contracting business, and the technology existed to fully outsource a development project to AI based on carefully documented requirements, using it would be a cheaper alternative to subcontracting. Rather than writing every line of code by hand, you would transition to becoming an architect, project manager, code reviewer, and QA tester. Now you're one person with the resources and earning potential of an entire development shop.
I have my fair share of complaints about AI coding tools, but that isn't one of them. Maybe the increase in supply would result in a lower average software engineering income, but it wouldn't have to if demand kept pace with supply.
Furthermore, code is more fungible than a person's voice. If someone wants a particular celebrity's voice, that celebrity has a monopoly on it. Thus, it's not obvious that increasing the supply of one's voice acting work would decrease its value. (I suspect the opposite to be the case, until a point of diminishing returns.)
The voice acting case raises a similar concern, though: will we get an explosion of new and/or higher-quality media, or a consolidation toward a smaller number of well-known voice actors taking an outsized amount of work? Another issue, if we look beyond impersonation specifically, is that human voices may become marginalized over time in favor of entirely synthetic voices. I imagine this would start with synthetic voices playing minor roles alongside human/human-impersonated voices, but over time certain synthetic voices would organically become recognizable in their own right.
Again, I see plenty of concerns with AI in general, but more of a mixed bag than strictly negative, and there isn't anything inherently nefarious about this product in particular.
Personally, I'm optimistic about what society looks like in the long run if humanity proves to be a responsible steward of increasingly advanced AI. By the time 90% of people can be effectively automated out of a job, we'll have had to figure out some alternative way of distributing resources among the population, i.e. a meaningful UBI backed by continued growth of our species' collective wealth and productivity. I can easily imagine a not-too-distant world that is effectively post-scarcity, where it's not frowned upon to spend years (or lifetimes) on non-income-generating pursuits, and where the only jobs performed by humans are entrepreneur, executive, politician, judge, general, teacher, and other things that must be done by humans for one reason or another.
So am I happy that AI is encroaching on skilled labor? In the short term, not necessarily. But it's not necessarily bad either; it's the reality we're in, and long-term I'm more optimistic than not.
Star Trek: Prodigy has already used audio from earlier movies and TV to bring back to life several actors from previous series. It's not exactly the same as this, but their dialogue was taken out of context to create new scenes and story.
I think “talking” with dead relatives or friends will become real pretty soon.
If people can find comfort hearing their mom say words of encouragement in a tough situation, I think a lot of people would do it. It's complicated, though, because for others that would mean never getting closure.
The last thing on earth I'd want is for any aspect of my dead relatives to be reanimated through technology. No. That's absolutely fucking horrific to consider. I don't need a hallucinating AI pretending to be my dead wife. That's literally shambolic.
There is vastly more potential for that to be abused by others than used in any emotionally or socially constructive way.
I would also find that very creepy and it would probably keep you from moving on.
I think there is a big difference between remembering what happened by looking at a photo or hearing an audio recording and having newly generated "content" from a deceased loved one.
There has been some media coverage on this already (e.g. [1]). An emerging concern among mental healthcare professionals is that a sufficiently convincing simulation could interfere with the progression of the stages of grief, prolonging the 'denial' stage and potentially heightening the intensity of the stages that follow.