Hacker News

Does anyone actually find these arguments persuasive?

There is really no reason to believe that what ChatGPT or Stable Diffusion does is anything like what your brain does, except in the most superficial, inconsequential way.

Second, try applying this logic to literally anything else and you'll see why it's absurd:

"You can't ban cars from driving on sidewalks! If it's acceptable for people to walk on sidewalks, then it has to be acceptable for cars to drive on sidewalks, since it's just automated walking."

"You can't ban airplanes from landing in ponds. They fly 'just like' ducks fly! So if it's acceptable for ducks, it must be acceptable for airplanes too."




Yes, and: why shouldn’t it matter that in one case it is a person and in another it is a computer program?

Why would it be incoherent to say “I’m okay with a person reading, synthesizing, and then utilizing this synthesis—but I’m not okay with a company profiting off of a computer doing the same thing.” What’s wrong with that?

But again, like you and others have said, it's really not the same thing at all! All ChatGPT (or any other deep learning model) is capable of doing is synthesizing "in the most superficial way." What a person does is completely different, and much more interesting.


I find the argument pretty persuasive.

I also agree it's not the only argument, nor the ultimate proof.

I don't, at this point, have an answer. I'm sure this miraculous new technology will survive the luddite attacks, but there will probably be some tense moments, and some jurisdictions will choose to be left behind.



