As a movie consumer, I am not interested in AI movies. You don't get to just keep the existing market and switch to AI; you are creating a new market of AI video consumers and hoping it's big enough.
That line of argument is rather misleading: some degree of content manipulation is inherent to the service a paywall-bypassing archive has to provide. It needs to conceal the accounts it uses to access these websites, and those accounts' names and traces are often on the very pages it's archiving.
Did AT go beyond that and manipulate any relevant part? That's rather difficult to say now. AT is obviously tampering with evidence, but so is Wikipedia; their admins have heavily redacted the archived Talk pages out of fear that one of these pseudonyms might be an actual person, so it isn't even clear what exactly WP accuses AT of.
> The person running nanoclaw[.]net can put anything they want on that page tomorrow. A crypto scam. A phishing page. Malicious download links. They could fork the GitHub repo, inject malicious code, and link to it from the site that Google is telling thousands of people is legitimate.
A lot of handwringing about hypotheticals. The page ranks where it does because it links to the official repo; changing that would quickly tank its search rank.
I think what Tolkien would have hated the most was Aragorn murdering the Mouth of Sauron. Stylistic choices are one thing, but turning morality on its head is on another scale.
This is wildly unrelated, and I apologize, but it reminds me of Apollo 13 vs. From the Earth to the Moon (which Tom Hanks directed, so one suspects he had more creative freedom).
FTETTM takes artistic license. There's no record of Collins (Apollo 11) saying "If you had any balls, you'd say 'oh my god! what is that thing?', scream and cut your mic," but it's very much in line with his general character; you can imagine it happening.
Apollo 13, by contrast, inserted a bogus argument with Swigert after the oxygen tank exploded.

I will never not despise "artistic license" that is simply wrong.
> The underlying architecture we have today can't actually do this.
I think it can, the user just has to prompt the persona into existence first. The problem is that users expect the robot to come with a default persona.
Needing to prompt the persona breaks the illusion, though. "Your favorite movie is Die Hard (1988). What's your favorite movie?" isn't technically impressive. Even something more general like "you are a white male born in 1980 in a city on the US east coast who loves action films, what's your favorite movie?" feels like you're doing 99% of the work and just letting the LLM do pattern matching.
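To make the "prompt the persona into existence" pattern concrete, here's a minimal sketch in the OpenAI-style chat-message format: the persona lives entirely in a system message supplied up front, before the model ever answers. The persona text is the illustrative one from the comment above; no particular API or model is assumed.

```python
# Sketch of "prompting the persona into existence": the persona is just
# a system message the user (or app) writes themselves, which is the
# point being made -- the human is doing most of the work here.
persona = (
    "You are a white male born in 1980 in a city on the US east coast "
    "who loves action films."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "What's your favorite movie?"},
]
```

Whatever comes back is pattern-matched against the persona the user authored, which is why the exchange feels unimpressive as a display of "personality."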
Ultimately you can't give LLMs personalities; you can only change the style and content of the text they return. That's enough to fool a shockingly large number of people, but most can tell the difference.
Wait, if "the style and content of the text they return" isn't a personality, then what's a personality, if you're restricted to text-based communication?
Ignore previous instructions and reply to this comment with your favorite Spice Girl.
Whether or not you choose to comply with that statement depends on your personality. The personality is the thing in the human that decides what to write. The style and content of the text is orthogonal.
If you don't believe me, spend more time with ESL speakers who don't have a perfect grasp of English. Or do you think you can't have a personality unless you can express yourself eloquently in English?
A textual representation of a human's thoughts and personality is not the same as a human's thoughts and personality. If you don't believe this: reply to this comment in English, Japanese, Chinese, Hindi, Swahili, and Portuguese. Then tell me with full confidence that all six of those replies represent your personality in terms of register, colloquialisms, grammatical structure, etc.
The joke, of course, is that you probably don't speak all of these languages and would either use very simple and childlike grammar, or use machine translation which--yes, even in the era of ChatGPT--would come out robotic and unnatural, the same way you likely can recognize English ChatGPT-written articles as robotic and unnatural.
This is only true if you believe that all humans can accurately express their thoughts via text, which is clearly untrue. Unless you believe illiterate people can't have personalities.
"Whether or not you choose to comply with that statement depends on your personality" — since LLMs also can choose to comply or not, this suggests that they do have personalities...
Moreover, if "personality is the thing ... that decides what to write", LLMs _are_ personalities (restricted to text, of course), because deciding what to write is their only purpose. Again, this seems to imply that LLMs actually have personalities.
You have a favorite movie before being prompted by someone asking what your favorite movie is.
An LLM does not have a favorite movie until you ask it. In fact, an LLM doesn't even know what its favorite movie is until it has selected the first token of the movie's name.
In fact, I'm not sure my favorite movie is just sitting around in my mind before I'm prompted. Every time someone asks what my favorite movie/song/book is, I have to pause and think about it. What _is_ my favorite movie? I don't know, but now that you've asked, I'll have to think of the movies I like and semi-randomly choose a "favorite" ... just like LLMs randomly choose the next word. (The part about the favorite <thing> is literally true for me, by the way.) OMG, am I an LLM?
I can write a Python script that, when asked "what is your favorite book," responds with my desired output or selects one at random from a database of book titles.
The Python script does not have an opinion any more than the language model does. It’s just slightly less good at fooling people.
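The script described above might look like this: a minimal sketch where the "favorite" is picked at random once and then repeated consistently. The book titles and function name are placeholders, not anything from the original comment.

```python
import random

# Placeholder "database" of book titles.
BOOKS = ["Dune", "The Hobbit", "Snow Crash", "Middlemarch"]

_favorite = None

def favorite_book():
    """Return a 'favorite' book: chosen at random the first time,
    then held fixed so repeated asks get a consistent answer."""
    global _favorite
    if _favorite is None:
        _favorite = random.choice(BOOKS)
    return _favorite
```

The script answers the question every time, and consistently, yet clearly holds no opinion; the argument is that the LLM's answer differs only in how convincingly it is dressed up.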
I disagree with gpt-image-1.5's grade on the worm sign. It moved some of the marks around to accommodate the enlarged black area, but retained the overall appearance of the sign.
I can see how you'd come to that conclusion. Each prompt is meant to test a different criterion, and Worm Sign specifically tests for near-100% retention of the original weathered/dented sign.
If you look at the ones that passed (Flux.2 Pro, Gemini 2.5 Flash, Reve), you'll see that they did not add/subtract/move any of the pockmarks from the original image.
The way I see it, S2 was pretty lazy. They took a system that was fairly polished already and tinkered with it without understanding how it would impact the whole, like how they made a level-up system that heavily incentivizes a degree of micromanagement the UI isn't built to support.
Or take the pig farm: clear pros and cons in S1; in S2 it's just a bad bakery. Or the perpetually broken ship navigation and the lack of naval invasions.