> No actually dev decision. Generative models are complex to release responsibly and team still working on release guidelines as they get much better, 1.5 is only a marginal FID improvement.
Honestly, this whole "responsible AI" thing is a sad last attempt to run away from the inevitable. The reality is that our politicians will end up in fake photos/videos/audio recordings, and people who can't even draw a straight line will be able to make crazy online memes with any imaginable image in seconds, no matter how offensive it is to some people. There is absolutely nothing we can do about it.
When these companies and open-source projects create "responsible" or restricted AIs, they are at the same time creating demand for AIs that have no limitations, and eventually open-source or even commercial AIs will respond to that demand.
I hope that while they still play this "responsible" game, they are at least using the time to figure out how we can live with this kind of advanced AI in a future where everything is fake/false by default.
"Responsible" is just PR talk for "We're gonna try and NOT release it for as long as possible so that we can milk money from you by blackboxing the models and having you pay for tokens".
Sounds like some middle manager is trying to put his foot down and is saying things like "no more releases till we have designed and tested a 37 step release signoff procedure"