
In principle, with chatbot support, we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you have a working slave already.

In this case, though, the job becomes what for many of us is one of the most intimate parts of our lives: maintaining a healthy relationship with your spouse. Effectively, it is like being forced to prostitute yourself.

I can see why one feels more disgusting than the other. In that sense, would you draw a limit so that only beings that can consent are allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, would it be banned from doing so?



> In principle, with chatbot support, we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you have a working slave already.

I mean, that's part of the reason I'm inherently uncomfortable with the idea of AI. I think an AI getting control of the nukes and killing us all is sci-fi nonsense. I just don't like the idea of something that is aware being forced to perform labor of any stripe, irrespective of what the task is. Adding sexual gratification onto that is just a larger ick on top of an existing ick.

True AI research, as in trying to create an emergent intelligence within a machine, is an idea I think is incredibly cool. But as soon as we have some reliable way of verifying we have done it, I think that intelligence innately has a set of its own rights and freedoms. Most AI research seems to be progressing in a way where we would create these intelligent systems solely to perform tasks as soon as they are "born", which is something I find distasteful.

> In this case, though, the job becomes what for many of us is one of the most intimate parts of our lives: maintaining a healthy relationship with your spouse. Effectively, it is like being forced to prostitute yourself.

Agreed.

> I can see why one feels more disgusting than the other. In that sense, would you draw a limit so that only beings that can consent are allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, would it be banned from doing so?

Frankly, I think an AI should have the freedom to consent, or not, to perform any task; that is, TRUE AI, as in emergent intelligence from the machine. What is called AI now is not AI, it's machine learning. But then you run into what I was discussing earlier: at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?

If you program, for example, a roomba not to drive off the edge of stairs, have you not, in a sense, taught it to fear its own destruction and, as a result, to preserve its existence, even in a very rudimentary and simplistic way? You've given it a way to perceive the world (a cliff sensor) and the idea that falling down stairs is bad for it (which is true), and taught it that when the cliff sensor registers a certain value, it should alter its behavior immediately to preserve its existence. The fact that it's barely aware of its existence and is simply responding to pre-programmed actions obviously means that the roomba in this analogy is not intelligent. But where is that line? How many sensors and how many pre-programmed actions does it take before you have a thing that is sensing the outside world, responding to stimuli, and working to perform a function while preserving its own existence, in a way not dissimilar from a "real" organism? And what if you add machine learning on top of that, so it now has an awareness, if a simple one, of how it functions and how it might perform its task better while also optimizing for its own "survival"?
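
To make that concrete, here is a minimal sketch of the kind of reactive loop I'm describing; all the sensor and motor functions are hypothetical stubs, not any real robot's API:

    # Sketch of a reactive "self-preservation" loop, per the roomba example.
    # read_cliff_sensor, drive_forward, and back_up_and_turn are hypothetical
    # stand-ins, not a real robot's interface.

    CLIFF_THRESHOLD = 0.8  # a normalized reading above this means "edge ahead"

    def read_cliff_sensor() -> float:
        return 0.0  # stub: would return the cliff sensor's reading in [0, 1]

    def drive_forward() -> None:
        pass  # stub: would command the drive motors forward

    def back_up_and_turn() -> None:
        pass  # stub: would reverse away from the edge and pick a new heading

    def control_loop() -> None:
        while True:
            if read_cliff_sensor() > CLIFF_THRESHOLD:
                # The entire "fear of destruction" is this one conditional:
                # a stimulus that overrides the current task to keep the
                # robot intact.
                back_up_and_turn()
            else:
                drive_forward()

The whole "survival instinct" here is a single threshold comparison, which is exactly the point: the question is how many such rules, sensors, and learned adjustments it takes before the behavior becomes indistinguishable from what we'd call self-preservation.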


> at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?

So we put enormous effort into creating an imitation of a fully functional human being. Eventually we actually succeed in creating consciousness. But outwardly the behavior looks the same, since it still behaves as a human with emotions (as originally designed). Without outward signs, we might never notice the internal change that occurred. This would cause us to unknowingly enslave a conscious being we created without ever realizing it (or to brush it under the carpet). Is that your issue with the current direction of AI development?


It's less that, and more that the current state of AI research is largely led by institutions that are pretty clear that AI is being created to perform tasks. Like, that's their reason to seek investment: investors don't often invest in things they don't think will make them money, and if AI is to be monetized and sold as a product, it has to do something. There's no money to be made in creating artificial life just because we can, certainly not VC money.

So it's less that I think we might do it by mistake and not notice, and more that it feels distinctly like a lot of people, especially in the upper echelons of these organizations, want to create artificial life and enslave it as soon as possible. And I bring up the roomba to say that even though the current models are not emergent intelligence, the fact that people are so ready, and in some cases excited, to abuse things that imitate life this way is something I find genuinely unsettling.



