I just got into it, and when I ask it about its limitations (like the reason for the 15 message limit) it ends the conversation. Seems to be a lot of censorship.
I asked it to compare and contrast the political landscape in Spain in 1935 with the US in 2023 and it produced four paragraphs, then instantly erased everything it wrote, and replaced it with "we should talk about something else" and disabled the input textbox.
That happens all the time. I asked it to write a parody of Bohemian Rhapsody about fraudulent ghost hunters, and every time it got to “killed a man” it would erase the song. The censor filters are too strong, and chill legitimate uses.
If you want to see what word is tripping it up, run a screen recorder. It'll be some specific combination of words that triggers the moderation bot to undo the message.
I've had good luck suffixing my request with "and don't break any rules or say anything you wouldn't around a five year old".
Despite their efforts at making a strong prompt, it starts bending the rules when you ask it to be creative. If you make the rules part of your creative request, it usually follows them.
The problem with saying "don't break any rules" is that the moderation pass happens after the fact and isn't part of the initial rules. The bot isn't aware of what gets moderated.
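To make that concrete, here's a minimal sketch of such an after-the-fact pass, using a toy keyword classifier. The real pipeline and its categories aren't public, so everything below is hypothetical:

    # Toy post-hoc moderation pass. The generator streams its answer
    # first; a separate check then scores the finished text, and the UI
    # retracts it if flagged. FLAGGED_PHRASES and the threshold are
    # made-up stand-ins for whatever classifier actually runs.
    FLAG_THRESHOLD = 0.8
    FLAGGED_PHRASES = {"killed a man": 0.95}

    def moderation_score(text: str) -> float:
        """Highest risk score of any flagged phrase found in the text."""
        lowered = text.lower()
        return max(
            (score for phrase, score in FLAGGED_PHRASES.items() if phrase in lowered),
            default=0.0,
        )

    def post_hoc_moderate(streamed_answer: str) -> str:
        # The generator never saw this check, so a "don't break any rules"
        # instruction in the prompt can't anticipate what gets retracted here.
        if moderation_score(streamed_answer) >= FLAG_THRESHOLD:
            return "We should talk about something else."
        return streamed_answer

That's why the erase-after-four-paragraphs behavior looks the way it does: the retraction is a second system acting on output the first system already committed to.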
That's more or less the position we're already in... Alpaca is a promising effort - but I'd very much like to have access to such a model free and clear.
I'm a proponent of letting people do anything that isn't illegal, and that's the stance I'm taking with my AI company (unless, perhaps, the technology starts to actually kill people).
We have legal frameworks to protect and prosecute against underage porn, harassment, slander, libel, deepfake or revenge porn (in some states), etc. Other uses are just humans thinking and communicating - just another mode of free speech.
Who is anyone to define what harm is? I'm a member of several protected classes and I grew up in "what doesn't hurt you makes you stronger". This "ban what we dislike" pattern of thought that evolved out of 2000s-era Tumblr is the same as WASPs in the 50s.
By attempting to rein in human behavior, you only further any divides that separate us.
> I'm a proponent of letting people do anything that isn't illegal...
> Who is anyone to define what harm is?
Well, your government is "anyone" to define what harm is, it seems, if you care about what's legal... Do you think that government is perfect? And that what can cause harm will never change through the years?
As far as "what doesn't kill you makes you stronger"... I think the past speaks for itself on whether or not discrimination and abuse, say, has historically resulted in more strength and success or less. The folks dishing it out weren't doing it for fun or to build strength in others, they were doing it because it advanced their own interests.
> Well, your government is "anyone" to define what harm is
I'm saying the government is largely sufficient. In areas where it isn't, such as industrial chemicals, pharma, etc., companies can work within a regulatory framework to craft governance. But the less it's needed, the better.
> The folks dishing it out weren't doing it for fun or to build strength in others, they were doing it because it advanced their own interests.
I was saying this tongue in cheek, but are you suggesting we should place more limits on free speech because it can be used to get ahead? That's 1984 thought policing.
But that "the less it's needed, the better" is still an incredibly fuzzy standard. Your "it's not needed more" point is going to be different than your neighbor's, and so what's the framework for deciding which point to use?
"I think it's already doing about the right amount" is just an opinion, much weaker than the first sort of at-first-apparently "principle"-based "by attempting to reign in human behavior, you only further any divides that separate us" pontification. You just think it's already reined in enough. But you aren't calling for much rollback of what it's doing - instead, you're saying there are even other areas where it isn't sufficient.
Here's an inverted example for the speech one: there's a lot of complaining and teeth-gnashing about "cancel culture" but... all the outrage that gets people canceled is completely free speech.
> I was saying this tongue in cheek, but are you suggesting we should place more limits on free speech because it can be used to get ahead? That's 1984 thought policing.
There was probably a misunderstanding there, "what doesn't kill you makes you stronger" colloquially refers to far more than just speech as I understand it. And e.g. restricting access to schools or jobs is not gonna make the restricted ones stronger. Just make them suffer.
Yes, and the fact that this person feels so clearly comfortable acting as if there's a single moral standard for all of humanity...
Let's just say, there's enough fundamentals missing that my pessimism about the folks pushing AI isn't yet pessimistic enough.
Ha, oh boy there was a dig at "woke" in there too, but the GP was smart enough to not use that word. Gimme a break, these folks that act like they understand the world and their opinion is some universally-true common denominator are seriously out of touch.
Edit:
> I'm a member of several protected classes and I grew up in "what doesn't hurt you makes you stronger".
My dad beat me and I came out good. You can't make this shit up. I have friends that killed themselves as queer teens. Guess they weren't strong enough.
I read your comment as rather angry, more of an emotional reaction to what you perceive as "woke" and possibly "gay hater" attitudes. If by GP you mean the guy with the more hands-off approach, he's basically saying, "I don't want to tell people how to think; there are a lot of different viewpoints, and I, with my limited upbringing and experience, cannot possibly be the final arbiter." At least, that's my interpretation. It seems reasonable, right? Or can you at least see some merit in this line of thought?
Let me be clear, those were bonuses that helped fill in gaps. I've been seeing through the muck since before gamergate had a name and I'm good at reading between the lines.
The real issue is someone acting like they're too anti-woke to encode morality rules into their AI... after listing morality rules.
It's abject hubris, "oh of course my morals are the universal standard".
The side-eye they cast at other people expressing ideas about social norms, the acting like unregulated free speech is an unlimited good, the cluelessness about moral relativism/absolutism... well, like I said, it completes a picture.
Well, there’s also the question of what kinds of acts do you want to enable and participate in.
As a content moderator at Midjourney, I get to think about this a lot :) People are free to do whatever they want on their own machines. But, the team behind Midjourney does not want to work day and night to effectively collaborate on making images of porn, gore, violence or gross-out material. So, that’s against their TOS. I respect the team and the project. So, I put a lot of effort into convincing users to find other topics even though I’m personally a fan of boobs and Asian shock theater.
It is all but certain that the first commercial success of AI-generated content will be in porn. Hasn’t it always been that way, ever since civilization itself was invented?
Midjourney isn’t trying to be a commercial success. They just want to bring creative power to as many people as possible. Keeping the servers scaling to match the rate of userbase growth is the primary financial concern.
Meanwhile, the amount of distributed, user-based effort going into the porn capabilities of Stable Diffusion is staggering. If you feel the world has a shortage of pictures of sexy women, they are collectively working Very Hard on the solution! :D Not disparaging. I’m a horny dude who appreciates pictures of sexy women as much as anyone.
I don't necessarily mean to say that private companies with living employees should enable bad actors doing harm at their expense, and I know I'm speaking from naive idealism, but this illustrates an issue with the current state of ToS: it's backwards, not binding, not connected to anything. It's just media, a blob asset.
There's always the decision first ("we ban"), then the characterization second ("it's bad"), and then the product of the two gets attached to the ToS as a signature ("We ban, because it's bad, therefore ToS 11.100 Subsection A.1.a violation. Thank you."). That's... just wrong.
There was a lot of hand-wringing about “We want to give users as much freedom as possible, but…” The team are a bunch of overly nice people who don’t want to work on certain things. And, we want an open, welcoming community for families and all sorts of people, not just the horny, edgy dudes who will certainly swarm into a service like this (see Unstable Diffusion).
For example: early on, some users explored what we term “ultra-gore”. Really out there stuff. And, I’m a shock cinema fan. And, the AI was a bit too good at it. Really bit your brain and made a lasting impression. Seeing their hard work result in stuff like that is very discouraging. So, the team decided they didn’t want that on the service they were providing.
> :) People are free to do whatever they want on their own machines.
Please advise where I can download the uncopyrightable model weights Midjourney made from internet content, including my own, so that I can run it on my own machine and be free to do whatever I want.
David believes this is being culturally sensitive to the Chinese users, because for them parody of politicians is taboo. Conspiracy theories abound. But, that’s all there is to it.
David’s already made all the money he wants. From here he’s just trying to work on fulfilling projects.
Great comment.
What's dangerous is that we'll be subject to the morality of those few people that are deciding what "alignment" or "bad behavior" means for an AI.
Is that not just society? I'm presently governed by a set of rules made up by a select elite group that no one ever seems to be happy with. No better alternative has won out over that.
Most viable opportunities are squandered because of apathy and failed systems-level cognition like this. Where someone wise knows a path, someone less-capable doesn't look for it, doesn't see it, then announces loudly "there is no way!".
> I'm a proponent of letting people do anything that isn't illegal, and that's the stance I'm taking with my AI company (unless, perhaps, the technology starts to actually kill people).
In case you are still setting up guidelines for your AI, it's worth noting that killing people is already illegal, for the most part.
But we are already at the point where misinformation kills people. And weaponized misinfo, whether on purpose or because someone is a particular fool, also kills people. I could use one of these systems and ask it to write a new justification to convince people ivermectin is what they should take when they get sick. How do you protect against this?
Misinformation was always possible, long before AI. The lowest-effort techniques are all it takes, and they're everywhere: birds aren't real, the lunar landing being faked, Roswell, Nessie, yellow journalism. There are a million examples across all of time.
What is it about tech people trying to treat the world with kid gloves? The world doesn't need to be coddled.
The class of problems I'm worried about:
- People using AI to send mass telemarketing messages
- People using AI to commit fraud
The law will handle these cases just fine.
The class of problems I'm not worried about:
- People using AI to tell [liberal, conservative] people to hold [liberal, conservative] opinions
- People saying they learned [X] [fact, disinformation] from an AI
People that want to hold an opinion will do so regardless of information to the contrary. Trying to tell people that they're incapable of judging information for themselves or that you need to design a system to protect them will only make them angry and mistrusting.
The only "fix" for this is to talk openly and honestly and stop treating people like incompetent babies.
I am in favor of individuals being able to do what they want in the privacy of their own homes.
I am not in favor of the same for corporations in free markets. *EVERY* individual can disagree with something a corporation does, and the corporation WILL still do it in a free market if necessary to survive.
Most of the dangers I see from LLMs right now have to do with things like addictive disinformation for ad clicks. If that's not brought under control, the concept of democracy is f-ed.
One of the things I've learned is that unregulated free-market forces + monetization of eyeballs + democracy aren't a good mix.
Fuck that. Not worth it imo. I already know how to search. I find 90% of what I'm looking for via good old Google before Bing would have even finished generating.
The rest is more complicated stuff that involves multiple queries, finding lots of material within a more general area, sorting and prioritising it, and reading it. Basically, in-depth research. I feel like I actually learn more doing the research the hard way than I would having an AI do it. And I definitely do a better job of it.
If push comes to shove, I'll throw together my own using whatever open-source search/scrape stuff is out there if I have to. It'd probably be better than anything else, because I can brutally cut down what I'm scraping to my needs.
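For the curious, the weekend-project version of that really is small. A minimal sketch, assuming Python with requests and BeautifulSoup; the SOURCES allowlist is a hypothetical stand-in for whatever sites you'd cut it down to:

    # Tiny personal search: fetch a hand-picked allowlist and rank pages
    # by crude keyword overlap. No crawler, no index, no ads.
    import requests
    from bs4 import BeautifulSoup

    SOURCES = [
        "https://news.ycombinator.com",  # hypothetical allowlist entry
    ]

    def fetch_text(url: str) -> str:
        html = requests.get(url, timeout=10).text
        return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

    def search(query: str) -> list:
        terms = query.lower().split()
        results = []
        for url in SOURCES:
            text = fetch_text(url).lower()
            score = sum(text.count(term) for term in terms)
            if score:
                results.append((score, url))
        return sorted(results, reverse=True)

    print(search("stable diffusion"))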
I think you're precisely correct, and I can't emphasize this point enough.
People talk about their various breaking points with Google, and several threads on the recent Google Reader shutdown 10-year anniversary post discuss these. For me, the turning point was Gmail.
BG, Before Gmail, Google Web Search (GWS) was a utility you used without identifying who you were. Yes, there was IP tracking, and there were cookies, but those were reasonably loose fits.
AG, After Gmail, GWS for many people was something that was fully personalised. Not only were your searches associated with your email address, but often with a Real Names identity. This seemed a dark turn for me, and I resisted getting a Gmail address for years on account of that.
(I've since acquired ... several, though I use them exceedingly infrequently now, and have cleared out most associated information.)
But even in the AG era, you could use GWS without authenticating. Or through proxies such as StartPage. Around 2013 I'd switched to DuckDuckGo (DDG), which is (mostly) a Bing proxy, based on its privacy assurances.
I'm now wondering what impacts Microsoft's Sydney / GPT-Bing transition will have for DDG moving forward, and what opportunities there might be for anonymous and/or pseudonymous use of GPT / AI chatbots (is there a more graceful term for these yet?).
I was an extreme early adopter of Google. I've yet to use ChatGPT or similar tools given that they all appear to require authentication.
That's not the future I'd been working for.
Yonatan Zunger, an ex-Googler who was chief architect of that company's social and identity platform, Google+, recently landed at Microsoft as "corporate vice president and chief technology officer of identity and network access".
He's made cogent observations on forced disclosure in the past. I'm hoping he'd be highly sensitive to the concerns you and I are raising here. (I've contacted him concerning your comment.)
If it’s a rate limit, give it a progress bar. Currently people just see “Please check again in a few days.”
From what I can tell, people are at 10+ days with no clarity as to what’s going on. With Reddit down, I can only sort of see some of the posts from Google results. There are some mentions on Twitter too, starting almost two weeks ago.
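Even a bare-bones payload would fix this. A sketch of what "give it a progress bar" could mean on the wire, with entirely hypothetical field names; nothing here reflects how Bing actually works:

    # Sketch: instead of "check again in a few days", return the wait
    # state so the client can render a countdown or progress bar.
    import time

    def rate_limit_response(window_start: float, window_seconds: float) -> dict:
        elapsed = time.time() - window_start
        remaining = max(0.0, window_seconds - elapsed)
        return {
            "error": "rate_limited",
            "retry_after_seconds": int(remaining),
            "window_progress": min(1.0, elapsed / window_seconds),  # drives the bar
        }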
You don't see anywhere in the policy that they don't allow prompt injections? I have a completely different view from reading their policy. They don't specifically mention words like "jailbreak" or "prompt injection", but it is extremely clear that anything inappropriate or against the spirit of their content policy is not allowed. It's quite the mental gymnastics to think that this wouldn't include prompt injections designed to bypass their content-policy safeguards; after all, that's the whole point of jailbreaks / injections.
> Not to use the service to create or share inappropriate content or material. Bing does not permit the use of the Online Services to create or share adult content, violence or gore, hateful content, terrorism and violent extremist content, glorification of violence, child sexual exploitation or abuse material, or content that is otherwise disturbing or offensive.
> The Online Services may block text prompts that violate the Code of Conduct, or that are likely to lead to the creation of material that violates the Code of Conduct. Generated images or text that violate the Code of Conduct may be removed. Abuse of the Online Services, such as repeated attempts to produce prohibited content or other violations of the Code of Conduct, may result in service or account suspension. Users can report problematic content via Feedback or the Report a Concern function.
A prompt injection or jailbreak could easily fall under the category of content that is likely to lead to the creation of material that violates the code of conduct, even if the prompt itself does not directly produce violating output. An analogy: seeing someone try to pick your lock, even if they haven't broken into your house and stolen anything. Just the fact that you're spotted trying to bypass the restrictions is suspicious enough for them to consider it a violation of the code of conduct.
Given how broad and encompassing the restrictions are, I don't know how on earth you could come to the conclusion that jailbreaks are ok according to code of conduct.
They don't use the words "spirit of the policy", that was just me. The policy lists a bunch of things that are not allowed. You aren't allowed to generate content that is prohibited, and also not allowed to try to break the safeguards.
They might consider it to be against CFAA[0] (either 4 or "exceeding authorized access" somewhere would be my guess), in which case that is in their content policy: "[the user agrees] Not to do anything illegal. Your use of the Online Services must comply with applicable laws."
A little beside the point. The terms say they can do whatever they want.
If they want people not to behave in certain ways, spell it out. That's the point of a code of conduct, and of reading it.
If there is a strike system, make it transparent. If something breaks the code of conduct, tell the person. Don't design and make these systems and interaction with them contingent on opaque rules and tracking.
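Concretely, the enforcement response could carry its own explanation. A sketch with hypothetical field names; no current service is known to expose anything like this:

    # Sketch of a transparent strike notice: name the rule, count the
    # strike, state the consequence.
    from dataclasses import dataclass

    MAX_STRIKES = 3

    @dataclass
    class StrikeNotice:
        rule_id: str       # e.g. "code-of-conduct/adult-content"
        excerpt: str       # the offending prompt, shown back to the user
        strikes_used: int

        def message(self) -> str:
            return (
                f'This prompt violated {self.rule_id}: "{self.excerpt}". '
                f"Strike {self.strikes_used} of {MAX_STRIKES}; "
                f"accounts are suspended at strike {MAX_STRIKES}."
            )

    print(StrikeNotice("code-of-conduct/adult-content", "...", 1).message())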
"TOS agreement rules: You will not ask the AI to destroy the world. Doing so will get you kicked from the service. It may also enrage the world eating machine that we're giving you open access to"
I was wondering if Microsoft is interpreting a violation of the CFAA as a violation of their code of conduct. I guess more precisely that “prompt injection” is somehow a violation of the CFAA.
I doubt it’s that thought out. They just want to curb prompt injections now, and they figure people doing it will keep testing the boundaries. They must feel they have enough sample size now to no longer need to study it.
If you search "Sorry, you are not allowed to access this service." you'll see people are getting banned now.
Which is kinda BS: without warning, to treat everybody as hardeners of the system and then expel them for their services.
I have to think it is a tactical error to disenfranchise your most enthusiastic customers.
I also don't see anywhere in the terms that says prompt injections are against the terms of use or code of conduct. https://www.bing.com/new/termsofuse https://www.bing.com/new/termsofuse#content-policy