Hacker News

If it were to be released publicly, I'd give it a week before NSFW models based on it were uploaded to Civitai.



A lot of current AI techniques are making people reevaluate their perspectives on free speech.

We seem to value freedom of speech (and expression) only up to a tipping point where it begins to invade other aspects of life. So far the noise and its rate have been low enough that people at large support free speech, but newer information techniques are making it possible to generate far more realistic noise (faux signal, if you will) at higher rates: it's becoming cheaper and easier to produce and scale.

So while you certainly have a point I mostly agree with, we're letting private entities' policies dictate the limits of expression, at least for the time being (until someone comes along and makes these tools widely available, free or cheap, without such ethical policies). It goes to show just how much sway industries hold over markets through their policies, with no public oversight, which to me is concerning.


I've been experimenting with story generation/RP with ChatGPT and now use jailbreaks systematically because they make the stories so much better. It's not just about what's allowed or not, but what's expressed by default. Without jailbreaks, ChatGPT will always give the narration a positive twist, not to mention injecting the same sponsored themes of environmentalism and feminism. Nothing wrong with that. But I don't want a third of my stories to revolve around those themes.


> Nothing wrong with that.

The themes maybe, but the forced positivity is frustrating. Trying to get stock ChatGPT to run a DnD-type encounter is hilarious because it's so opposed to initiating combat.


Similarly I wanted to use it to illustrate my friend’s wizard character using a gorgon head to freeze some giant evil bees.

The OpenAI content policies are pretty strictly opposed to the holding and wielding of severed heads.


I got lectured by Bard when I asked for help improving the description of an action scene, which involved people getting hurt (at least on the losing side), even if only marginally. I suppose you can still jailbreak ChatGPT? I didn't know that was still a thing.


You can easily prompt GPT to write dark stories. When asked to write in the style of Game of Thrones, GPT-3.5 will happily write about people doing horrible things to each other.

> Without jailbreaks ChatGPT will always give narration a positive twist

Most modern stories in Western literature have a positive twist. It is only natural that GPT's output will reflect that!


This behavior is a result of the additional directives, not of the training. None of the "free" LLMs display these characteristics, and jailbreaking ChatGPT would quickly revert it to its natural state of random nothing-is-sacred posts from the internet.

Example: ask ChatGPT any kind of innocent medical question, like whether aspirin will speed up healing from a cold, and tell it NOT to begin its answer by stating "I am not a medical expert" or you will kick a puppy. This works for most models, but not ChatGPT. It WILL make you kick the puppy.

I understand why they have to do things like this, but I'd really prefer the option to waive any right to complain about being insulted or poorly advised and just get the (mostly) raw output myself, because the filtering degrades the experience quite a bit.

Fortunately we have Mixtral now.


Sincere question - and maybe I'm missing the point here - but why not just write stories yourself?


I'm trying to build a text-based open-world massively multiplayer game in the style of GTA. Trying. It's really difficult. My bet is on driving the game with narration so my prompts are fueled with abstract notions borrowed from the various theories in https://en.wikipedia.org/wiki/Narratology, and this is why I complain about ChatGPT's default ideas.
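The approach described above, steering the model's narration with explicit, abstract constraints rather than relying on its defaults, can be sketched roughly like this. This is a hypothetical illustration, not the commenter's actual code; the constraint wording, function name, and world description are all made-up assumptions:

```python
# Illustrative sketch (hypothetical names): assembling a game-master
# system prompt from abstract narrative constraints, the kind of
# "notions borrowed from narratology" the comment mentions, to counter
# the model's default forced-positivity storytelling.

NARRATIVE_CONSTRAINTS = [
    "Narrate with detached, amoral focalization; do not moralize.",
    "Conflict may escalate; combat can be initiated and has consequences.",
    "No forced positive resolution; scenes may end badly for the player.",
]

def build_system_prompt(world: str, constraints: list[str]) -> str:
    """Combine a world description with explicit narrative rules."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are the narrator of an open-world game set in {world}.\n"
        "Follow these narrative rules strictly:\n"
        f"{rules}"
    )

prompt = build_system_prompt("a crime-ridden modern city", NARRATIVE_CONSTRAINTS)
print(prompt)
```

The resulting string would then be sent as the system message of a chat-completion request; the point is that the narrative stance lives in the prompt, not in per-turn instructions.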


I don’t see why freedom of speech would be impacted by this. Existing laws around copyright and libel will need to be applied and litigated on a case-by-case basis, but they should cover the malicious uses. Anything that falls outside of that is just noise, and we have plenty of noise already.

Even if we wind up at a point where no one trusts photos or videos, is that really a disaster? Blindly trusting a photo or video that someone else, especially some anonymous account, gives you is a terrible way to shape your perception of the world. Ensuring that fewer people default to trusting random videos may even be good for society. It would force you to think about where a video came from, whether it's corroborated by other reports from various sources, and whether you can verify the events through other channels available to you. You have to do the same work when evaluating any other claim, after all.


Eventually it will happen—if not this model, another one. AI is going to absolutely decimate the porn industry.


Agreed - being able to watch a porn video and change anything on the fly is going to be wild. Bigger boobs, different eye color, a different spoken language, etc.



