That seems wrong. The null AI would have been better at minimizing legal liability. The actual character.ai to some extent prioritized user engagement over a fear of lawsuits.
Probably it's more correct to say that the AI was chosen to minimize lawsuit_dollars. The parents and child could have conspired to make the AI more like Barney, and no one would have entertained a lawsuit.
OK, it seems like a nitpick argument, but I'll refine my statement, even if doing so obfuscates it and does not change the conclusion.
The AI was trained to maximize profit, defined as net profit before lawsuits (NPBL) minus lawsuit costs. Obviously the null AI has an NPBL of zero, so it's eliminated from the start. We can expect NPBL to be primarily a function of userbase revenue minus training costs. Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict, so the loss function can target both. It seems to me that the additional training costs needed to minimize lawsuits (that is, holding userbase constant) pay off handsomely in reduced liability. Therefore, the resulting AI is approximately the same as if it were trained primarily to minimize lawsuits.
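A toy formalization of the objective sketched above, with entirely made-up numbers (the function and all values are hypothetical, just to show why the null AI loses despite having zero lawsuit exposure):

```python
# Toy model of the claimed objective: profit = NPBL - lawsuit_costs,
# where NPBL is approximated as userbase revenue minus training costs.
# All numbers below are hypothetical illustrations, not real figures.

def profit(userbase_revenue, training_costs, lawsuit_costs):
    npbl = userbase_revenue - training_costs  # net profit before lawsuits
    return npbl - lawsuit_costs

# The null AI: no users, no training spend, no lawsuits -> zero profit.
null_ai = profit(0, 0, 0)

# A product that pays extra training costs to minimize lawsuits:
# roughly the same userbase, much lower expected liability.
safe_ai = profit(100, 20, 5)

# A product that skips that training: marginally more revenue,
# lower training costs, far higher lawsuit exposure.
risky_ai = profit(102, 10, 40)

assert null_ai == 0          # eliminated from the start
assert safe_ai > risky_ai    # the extra training cost pays off
```

Under these (assumed) numbers, the profit-maximizing AI ends up close to the lawsuit-minimizing one, which is the point being argued.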
So you think it's more than "not much." How much exactly? A 10% increase in userbase at peak-lawsuit?
It's obviously a function of product design. If they made a celebrity fake nudes generator they might get more users. But within the confines of the product they're actually making, I doubt they could budge the userbase by more than a couple percent by risking more lawsuits.
My impression: In these early days of the technology there's huge uncertainty over what gets you users and also over what gets you lawsuits. People are trying to do all kinds of things with these models, most of which don't quite work at the moment. On the lawsuit side, there are potential copyright claims, there are things like the present article, there are celebrities suing you because they think the model is imitating them, there's someone suing you because the model insults or defames them (even if the model speaks the truth!), there's Elon suing you for the LOLs... As you're hoping to go global, there are potential lawsuits in each jurisdiction which you don't even have the resources to fully evaluate.
You say that both factors are clear "within the confines of the product", but I'm not convinced there even are such clear-cut "confines" of the product. To enter this market and survive, I'd think those confines would have to be pretty flexible.