Hacker News | MrNeon's comments

I've seen that sentiment on reddit as well, and I can't fathom how you think it being on purpose is more likely than a mistake when

1 - The error is so blatantly large

2 - There is a graph without error right next to it

3 - The errors are not there in the system card and the presentation page


Not sure what to think anymore https://www.vibechart.net/


> Also the the Sora videos are proven to be modified ads

Can't find anything about that; do you have a link?



Oh, so not the actual demo videos OpenAI shared on their website and Twitter.


We still need to see those demos in action though. That's the big IF everyone is thinking about.


Sure, but "Also the the Sora videos are proven to be modified ads" is demonstrably false for both the demos OpenAI shared and the artist-made ones.


https://www.youtube.com/watch?v=9oryIMNVtto

Isn't this balloon video shared by OpenAI? How is this not counted? For the others I don't have evidence, but this balloon video case is enough to cast doubt.


MJ coomers? The same MJ that bans accounts for coomer content?

Not sure that will change just because of a new frontend.


> What is ultimately going to be undefeated is training your own model.

From scratch?


Seems to be stock footage, is it surprising makeup would be involved?


It is very obvious that the amount and type of damage is not the same if you clone any random person's voice or clone a political leader's voice.

It is not the same damage.


Murdering a random civilian and murdering a public official are also very different levels of damage, but we still outlaw and prosecute both of them. This is the same thing.

Not to mention, what constitutes a political leader? Just the upper echelon? What about local civil servants? Mayors, cops, judges? Are they gonna have a database of every public or political figure in the country? No they won't. This is absurd.


> Not to mention, what constitutes a political leader? Just the upper echelon? What about local civil servants? Mayors, cops, judges?

Barack Obama started out as a "community organizer."

AI can clone someone making a public speech at that level, then store it forever until they become the president.


Many jurisdictions actually do have higher penalties for crimes of violence committed against public officials than against ordinary citizens. Assault a USPS postal worker at a post office and you automatically face a higher sentence than for assaulting a UPS worker at a UPS store.


I know that. Both are still illegal, which is what I said, and is the point.


Don’t forget business people. Clone the voice of some executives and start calling people pretending to be them.


> Even if it takes a thousand tries, that means one out of thousand users would get it.

If they use that particular prompt that primes the model for regurgitation.

It's not one in a thousand regular queries.


Yeah, I'd call out priming with text from the original article as the only good point in the piece. Making a thousand tries is not the problem, but priming the model with text from the original article could be. It'll be very interesting to see how the court decides that; I can see arguments both for and against a sentence or two of priming from the original article counting as a copyright violation. And again, this is a question of law, not morals, so the question is whether it breaks the law or not.


> I've yet to be confused about whether something is ai generated or real for more than several seconds.

How did you rule out survivorship bias?


The issue here is thinking that you, holding the knowledge to 3D-model, are not also a holder of capital. Capital isn't just money.


Is a farmer exploiting somebody's need for food?


Maybe, depending on what he's growing? Some foodstuffs have better nutritional content than others. Intimacy hawkers are surely the same.

I wonder, though: would an AI vendor sell better or worse intimacy? ChatGPT apparently has a better bedside manner than something like 80% of actual physicians. Granted, giving comfort isn't supposed to be part of their job, but why would a human OnlyFans model with other customers be better than an AI adapted to only one customer?


The original comment said:

"So how many more rounds of this cycle do we need before we leverage the letter of the law to say that maybe companies shouldn't be allowed to blatantly exploit people's vulnerability and isolation to make money?"

To which another answered: "Does that include p0rn?"

I thought the question was good but deserved a bit more exposition. If you believe that creating an AI boyfriend/girlfriend to make money is unethical, then in my opinion you should ask yourself why it is not unethical for an OnlyFans model to sell companionship.

Regarding your point "Is a farmer exploiting somebody's need for food?": I would say there is a key difference between the two scenarios. In the case of farmers, growing food is the healthiest option for not starving. In contrast, you could argue that an AI boyfriend/girlfriend is not the healthiest cure for loneliness. Wouldn't interacting with a real person lead to better character development, because you would have to work on your own imperfections and learn to accept other people's shortcomings?


> If you believe that creating an AI boyfriend/girlfriend to make money is unethical, then in my opinion you should ask yourself why it is not unethical for an OnlyFans model to sell companionship.

On some level there's something inherently icky in my mind, with an ethics coloring to it, about creating something that emulates intelligence, even poorly, and then "assigning" it a romantic interest in a person. I can't quite adequately explain it, but it's something around consent for me. At what point does simulating consciousness begin approaching it? The machine doesn't and can't consent to intimate interactions, but its sole reason to exist and continue existing, in whatever sense you'd like to say it exists at all in the way something intelligent does, is to facilitate those interactions. It's something about artificial life, even flagrantly fake life, being created solely to serve the purposes of another that just... rubs me the wrong way.

By contrast, a creator or what have you serving in some sex-worker-or-adjacent role is consenting. The consent is muddled by the financial aspect, and the argument can be made that such consent is inherently less valid, because as long as you need money to live, the offer of money is inherently coercive. I don't know whether I agree with that; I'm just saying it is an argument that can be made. Nevertheless, it is a realized full being that is participating, to whatever degree you want to say they are, voluntarily, and that participation and consent can be revoked if the client becomes too abusive or combative, or strays into uncomfortable subject matter, which makes it distinct from the AI.


In principle, with chatbot support we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you already have a working slave.

In this case, though, the job becomes what for many of us is one of the most intimate parts of our lives, namely maintaining a healthy relationship with your spouse. Effectively it is like being forced to prostitute yourself.

I can see why one feels more disgusting than the other. In this sense, would you draw a limit that only beings that can consent should be allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, will it be banned from doing so?


> In principle, with chatbot support we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you already have a working slave.

I mean, that's part of the reason I'm inherently uncomfortable with the idea of AI. I think an AI getting control of the nukes and killing us all is sci-fi nonsense. I just don't like the idea of something that is aware being forced to perform labor of any stripe, irrespective of what the task is. Adding sexual gratification onto that is just a larger ick on top of an existing ick.

True AI research, as in trying to create an emergent intelligence within a machine, is something I think is an incredibly cool idea. But as soon as we have some reliable way of verifying we have done it, I think that intelligence innately has a set of its own rights and freedoms. Most AI research seems to be progressing in a way where we would create these intelligent systems solely to perform tasks as soon as they are "born," which is something I find distasteful.

> In this case, though, the job becomes what for many of us is one of the most intimate parts of our lives, namely maintaining a healthy relationship with your spouse. Effectively it is like being forced to prostitute yourself.

Agreed.

> I can see why one feels more disgusting than the other. In this sense, would you draw a limit that only beings that can consent should be allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, will it be banned from doing so?

Frankly, I think an AI should have the freedom to consent or not to perform any task; that is, TRUE AI, as in emergent intelligence from the machine. What is called AI now is not AI, it's machine learning. But then you run into what I was discussing earlier: at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?

If you program, for example, a Roomba not to drive off the edge of stairs, have you not, in a sense, taught it to fear its own destruction and, as a result, to preserve its existence, even in a very rudimentary and simplistic way? You've given it a way to perceive the world (a cliff sensor) and the idea that falling down stairs is bad for it (which is true), and taught it that when the cliff sensor registers whatever value, it should alter its behavior immediately to preserve its existence. The fact that it's barely aware of its existence and is simply responding to pre-programmed actions obviously means that the Roomba in this analogy is not intelligent. But where is that line? How many sensors and how many pre-programmed actions does it require before you have a thing that is sensing the outside world, responding to stimuli, and working to perform a function while preserving its own existence in a way not dissimilar from a "real" organism? And what if you add machine learning features to that, so it now has an awareness, if a simple one, of how it functions and how it may perform its task better while also optimizing for its own "survival"?
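The behavior described above boils down to a trivial sense-decide-act loop. Here's a minimal sketch of that idea; the function names and the 0.8 threshold are invented for illustration and don't correspond to any real Roomba API:

```python
# A toy model of the cliff-sensor "self-preservation" behavior:
# read a sensor, compare against a pre-programmed threshold,
# and override the current action if destruction is imminent.

def cliff_detected(sensor_reading: float, threshold: float = 0.8) -> bool:
    """A high reading means the floor has dropped away under the sensor."""
    return sensor_reading > threshold

def step(sensor_reading: float) -> str:
    # The robot's only "instinct": abandon the current behavior
    # whenever the sensor crosses the threshold.
    if cliff_detected(sensor_reading):
        return "back_up_and_turn"
    return "keep_driving"

print(step(0.1))   # keep_driving
print(step(0.95))  # back_up_and_turn
```

The point of the sketch is how little is going on: the "fear of falling" is one comparison against one number, which is exactly why the line between this and genuine awareness is so hard to draw as sensors and learned behaviors accumulate.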


> at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?

So we spend all this effort creating an imitation of a fully functional human being. Eventually we actually succeed in creating consciousness. But outwardly the behavior looks the same, as it still behaves like a human with emotions (as originally designed). Without outward signs we might not notice the internal change that occurred. This would cause us to unknowingly enslave a conscious being we created without ever realizing it (or to brush it under the carpet). Is that your issue with the current direction of AI development?


It's less that and more that the current state of AI research is largely headed by institutions that seem pretty clear about the fact that AI is being created to perform tasks. Like, that's their reason to seek investment: investors don't often invest in things they don't think will make them money, and if AI is to be monetized and sold as a product, it has to do something. There's no money to be made in just creating artificial life because we can, certainly not VC money.

So it's less that I think we might do it by mistake and not notice, and more that it feels distinctly like a lot of people, especially in the upper echelons of these organizations, do want to create artificial life and enslave it as soon as possible. And I bring up the Roomba to say that even though the current models are not intelligence emerging from the machine, the fact that people are so ready, and in some cases excited, to abuse things that imitate life this way is something I find genuinely unsettling.

