
There's an LLM rewriting your queries somewhat before they're submitted to DALL-E, and you can jailbreak it.

https://twitter.com/madebyollin/status/1708204657708077294

https://media.discordapp.net/attachments/1023643945319792731...




I don't know why, but I just love seeing jailbreaks where the input/output isn't just plain text.


So we're still splatterprompting... only now a machine does it for you. That's pretty hilarious.


That will probably continue to be the approach indefinitely. There's going to be an increasingly advanced translation layer between the user prompt and the software responsible for producing the images. We've done this for pretty much all computing and software systems that people interface with. Stripping out complexity on the front-end for the user is one key to how you get generative software to go super wide; to do that, more of the complexity goes to the back-end.
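
A minimal sketch of what such a translation layer might look like, assuming OpenAI's Python client; the rewriter model, system prompt, and function names here are hypothetical illustrations, not the actual DALL-E pipeline:

    # Sketch of a prompt-rewriting "translation layer" (illustrative only,
    # not the actual DALL-E pipeline). An LLM expands the raw user prompt
    # before it is handed to the image model.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical stand-in for the real rewriter system prompt
    REWRITER_SYSTEM_PROMPT = (
        "Rewrite the user's image request into a single detailed, "
        "policy-compliant image prompt. Return only the rewritten prompt."
    )

    def rewrite_prompt(user_prompt: str) -> str:
        """Ask a chat model to morph the raw prompt into a richer one."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": REWRITER_SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content.strip()

    def generate_image(user_prompt: str) -> str:
        """Run the translation layer, then call the image model."""
        expanded = rewrite_prompt(user_prompt)
        image = client.images.generate(model="dall-e-3", prompt=expanded, n=1)
        return image.data[0].url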


Does it work if you just call

> #graphic_art("my prompt here")


How do you jailbreak it?


In the screenshot they show how.


They show how to reveal the prompt but not how to disable or override it.


Can you explain what we should see and understand from that picture?


They provide the prompt used.


least cyberpunk 2023 shit



