StableDiffusion isn't freely available, in the "Free Software" sense. They use the highly uncommon "CreativeML Open RAIL-M License" which is a wall of text composed of weasel words describing how the software is so incredibly advanced and dangerous that despite the authors' earnest wish to do so, they cannot in good conscience make it genuinely Free Software.
These people wrote a bunch of Python code that pipes images and text through a GPU, and they're acting as if they had created a secret weapon that somehow humanity must be protected from. If that's not megalomania, I don't know what is.
If they think it's a secret weapon humanity must be prevented from using, giving it away for free with examples on how to use it with a note on the side saying "pls don't misuse thanks" seems like a very odd thing to do.
Oh please, all the creators of these image AIs (OpenAI, Google, Midjourney, SD, etc) are being very very cautious with this stuff. It’s not going to end humanity, but it could easily lead to some really gross content that would paint the responsible organization in a bad light.
It boils down to who's doing it. If an artist draws it in Photoshop, the artist is responsible. If an AI is asked to do it, it's likely the people who made the AI who are responsible (until AI can think for itself).
Take it out of drawing: if you write a program to control elevators and it breaks the elevators, aren't you, the person who wrote the program, responsible? And on the other side, why would Adobe be responsible for something someone else draws?
Or let's take an easier, closer example: suppose you made a character generator. Here's one
And suppose it had a "randomize" button that, one time in a hundred, produced a very pornographic image. Who would get the blame? The person who pushed the button, or the person who created the generator?
Perhaps it's the "AI" moniker, but this thing is a computer program under the control of a human, and the human is the person responsible. It takes time and effort to use and is very much not like automated elevator control. In order of difficulty:
1) Figuring out how to rejig the prompt to get what you'd like, adjusting seeds and tuning configuration options. The more control you want, the more complex and manual the pipeline of backing software becomes. All it does is amplify everyone's innate artistic talent.
2) Coming up with a good prompt. This relies on the person's imagination, facility with words, and familiarity with the limitations of the image-generating software and its training datasets.
3) Selection. This can be a frustrating experience that tries one's patience.
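To make point (1) concrete: generation is deterministic given a prompt and a seed, which is part of why the human stays in the driver's seat — you iterate by hand until you get what you want. A toy sketch in plain Python (this is not the actual Stable Diffusion code; `generate` and its output are made up purely to illustrate the seed/prompt loop):

```python
import random

def generate(prompt: str, seed: int, pixels: int = 16) -> list[int]:
    """Toy stand-in for a diffusion sampler: deterministically maps a
    (prompt, seed) pair to a fake 'image' (a list of pixel values)."""
    # Seeding random.Random with a string is deterministic in CPython,
    # so the same inputs always reproduce the same "image".
    rng = random.Random(f"{prompt}|{seed}")
    return [rng.randrange(256) for _ in range(pixels)]

# Same prompt + same seed reproduces the result exactly...
assert generate("a ninja, oil painting", seed=42) == generate("a ninja, oil painting", seed=42)
# ...while changing the seed (or the prompt) gives a different result.
# Iterating over seeds and prompt tweaks is the manual, human-driven
# loop described above.
assert generate("a ninja, oil painting", seed=42) != generate("a ninja, oil painting", seed=43)
```

The real software works the same way at this level: fixing the seed pins down the randomness, and the person keeps rerolling and rewording until something usable comes out.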
> made a very pornographic image. Who would get the blame?
You would; it's not like the software automatically distributes everything it generates. The vast majority of images these tools produce aren't good enough to share, and aren't shared. You made the conscious decision to share it.
Even if it were an AGI, you would be responsible. It's entirely possible to commission a ninja from a human artist and get back something very pornographic featuring a famous celebrity; you would still be held responsible for choosing to share it, because you had the choice not to.
True, they are all very cautious not to let bad actors generate bad content.
Where "bad actors" are defined as "people who disagree with us", and "bad content" is defined as "things we don't want to see".
Needless to say, the list of bad actors never includes the authors themselves, and the list of unacceptable applications never includes anything the authors had in mind.
They are not cautious at all about stopping bad actors. It is very, very easy to bypass their filters. They are just doing the basic due diligence to make sure the average casual user doesn't make something that grosses them out.