I really don't understand this argument. At what point is it violating copyright, versus an intelligence learning and making content the same way humans do?

If it were living cells, but they worked as transistors, would it be OK?

If it were whole-brain emulation on silicon transistors, would it be OK?

If it were a generative AI similar to what we have today, but 100x more sentient and self-aware, would that be OK?

If you locked a human in a room with nothing but Tolkien books for 20 years, then asked them to write a fantasy novel, would that be OK?

All art is built on learning from previous art. I don't understand the logic that because it's a computer, it's suddenly wrong and bad. I also don't understand the general support for intellectual property when it overwhelmingly benefits the mega-wealthy and stifles creative endeavors like nothing else. Your art isn't less valuable just because a computer makes something similar, in the same way it isn't less valuable when another human copies your style and makes new art in it.




> I don't understand the logic of it being a computer so suddenly now it's wrong and bad

My answer to this is one I've written already before: https://news.ycombinator.com/item?id=42720749


You "really don't understand" the difference? Do we need to spell out that these systems aren't human artists simply looking at paintings and admiring features about them? They are Python programs running linear algebra libraries, sucking in pixels from anywhere they can find them, and then being used by corporations with billion dollar valuations to increase investor/shareholder value at the expense of the people who provided the artwork to train the systems - people who, as you already know, are NOT paid for providing their work, and who never CONSENTED to having their work used for such a purpose. Now do you "understand the difference"?


AI is a new thing. It's OK to say you don't want it, or that it's a threat to livelihoods. But it's a mistake to use these kinds of arguments, which are predicated on points so narrow that they overlap almost entirely with what human brains do.

It's going to be a threat to my career soon enough, but the threat it poses to me exists even if it had never read any of my blog posts or my GitHub repos, even if it had never read a single line of ObjC or Swift.

> Do we need to spell out that these systems aren't human artists simply looking at paintings and admiring features about them?

In a word, yes.

In more words: explain what it would take for an AI to count as a person; none of what you wrote connects with the comment you replied to.

You dismiss AI as "Python": would it help if the maths were done in the pure linear amplification range of the quantum effects in transistors? You dismiss the systems as "sucking in pixels from anywhere they can find them", as if humans don't spend all day with their eyes open. You complain about "corporations with billion dollar valuations to increase investor/shareholder value at the expense of the people who provided the artwork to train the systems", as if this isn't exactly what happens with government-funded education of humans.

I anticipate that within my lifetime it will be possible for a human brain to be preserved on death, scanned, and the result used as a full brain sim that remembers what the human remembered at the time of death. Would it matter if the original human had memorised Harry Potter end-to-end and the upload could quote it all perfectly? Would Rowling get the right to delete that brain upload?

I'm following a YouTube channel where they're growing mouse neurons on electrode grids to train them to play video games. It's entirely plausible, given the current rate of progress, that 15 years from now, GPT-4 could be encoded onto a brain organoid the size of a living mouse's brain — does it magically become OK then? And in 30 years, that same thing as an implant into a human?

The threat to my economic prospects is already present in completely free models whose weights are given away and which earn nothing for the billion-dollar corporations that made them. I can download free models and run them on my laptop, outputting tokens faster than I can read them on an energy budget lower than my own brain's; the corporations that made those models don't profit directly from my doing this, and if those corporations go bankrupt I can still run those models.
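
To make that concrete: running one of those free models is a handful of lines of Python today. A minimal sketch, assuming the Hugging Face transformers library is installed; the model name is only an example of a small open-weights model:

    # Minimal local-inference sketch. No server, no API key, and nothing paid
    # to the company that released the weights. The model name is illustrative.
    from transformers import pipeline

    generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    result = generator("Write the opening line of a fantasy novel:", max_new_tokens=64)
    print(result[0]["generated_text"])

That runs entirely on my own hardware; the economic pressure doesn't depend on any corporation staying solvent.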

The risk to my economic value is not because any of these "stole" anything, but because the models are useful and cheap.

GenAI art (and voice) is… well, I'll admit to enjoying it privately and on free content, but whenever I see it on products or blog posts, or hear it in the voices on YouTube videos, it's a sign the human behind it has zero budget, and therefore that whatever it is, I don't want to buy it. People already use it because it's cheap; it's a sign of being cheap, and signs of cheapness are a proxy for generally poor quality.

But that's not going to save my career, nobody's going to decide to boycott all iPhone apps that aren't certified "made by 100% organic grass-fed natural humans with no AI assistance".

So believe me, I get that it's scary. But the arguments you're using aren't good ones.


No one said they "don't want it".

No one said "it's scary".

No one is "dismissing them".

It seems like you're arguing against some other person you've made up in your mind. I use these systems every single day, but if you don't understand the argument about consent and the extremely obvious difference between Python programs and humans that I already pointed out, then no one can help you. I'll keep making these arguments, because they are good ones, and they are obvious to any human being who isn't stuck in tech-bro fairyland, blabbering about how human consciousness is completely identical to Python linear algebra libraries when any 6-year-old child knows with certainty it is not.

> In a word, yes.

This is, frankly, embarrassing.


> No one said they "don't want it".

Your own words suggest this. Many others are more explicit. There are calls for models to be forcibly deleted. Your own statements here about lack of consent are still in this vein.

> No one said "it's scary".

Many, including me, find it so.

> No one is "dismissing them".

You, specifically you, are — "feeling or showing that something is unworthy of consideration".

> if you don't understand the argument about consent and the extremely obvious difference between Python programs and humans that I already pointed out, then no one can help you.

Consent is absolutely an argument I get. It's specifically where I'm agreeing with you.

The other half of that…

Python, like all general-purpose programming languages, is computationally universal. Python programs can implement physics, so the argument "because it's implemented on silicon rather than chemistry" is a distinction without a difference.

Quantum mechanics is linear algebra.

> I'll keep making these arguments, because they are good ones, and they are obvious to any human being who isn't stuck in tech-bro fairy land blabbering about how human consciousness is completely identical to Python linear algebra libraries when any 6 year old child knows with certainty they are not.

(An example of you "dismissing" AI).

Then you'll keep being confused and enraged about why people disagree with you.

And not just because you have a wildly wrong understanding of what 6-year-olds think about. I remember being 6: all the silly things I believed back then, the things my classmates believed falsely, how far most of us were from understanding what algebra was, let alone from distinguishing linear algebra from other kinds.

I've got a philosophy A-level, which is enough to know that "consciousness" is a completely unsolved question and that absolutely nobody agrees on the minimum requirements for it. There are 40 different definitions; we don't even agree on what the question is yet, much less the answer.

But I infer from your bringing it up that you think "consciousness" is an important thing that AI is missing?

Well, perhaps it is something current AI lacks, something its architecture hasn't got; when we can't agree on what the question is, any answer is possible. We evolved it, but just because it can pop up for no good reason doesn't mean it must be present everywhere. (I say much the same to people who are convinced AI must have it: we don't know.) So, what if machines are not conscious? Why does that matter?

And you've not answered one of my examples. To repeat:

I'm following a YouTube channel where they're growing mouse neurons on electrode grids to train them to play video games. It's entirely plausible, given the current rate of progress, that 15 years from now, GPT-4 could be encoded onto a brain organoid the size of a living mouse's brain — does it magically become OK then? And in 30 years, that same thing as an implant into a human?

I don't think that is meaningfully distinct, morally speaking, from doing this in silicon. Making the information alive and putting it in my own brain means it's no longer Python, but all the consent issues remain.



