It's obvious that these authors listed ChatGPT to get attention. The papers themselves are very short. The medRxiv preprint [1] should be a blog post. The paper in Oncoscience is nonsense [2]. It was submitted on Dec 14th and accepted on Dec 15th, 2022, apparently contravening the journal's stated peer-review policy (find me an article anywhere that gets peer reviewed in under 24 hours). The journal presumably realised the clickbait value and rushed it through.
Suffice it to say, the authors' cheap strategy has worked. The medRxiv preprint has an impressive view and retweet count. Expect a flood of ChatGPT-authored papers.
This is actually a case of credulous laypeople drawing attention to something academics know enough to dismiss (and to communicate to others that it is dismissal-worthy).
That would be like suggesting Hacker News works well as long as you don't check out the front page, because you and your friends have a curated list of articles that are actually worth reading.
And yet it's been published in a journal with expedited peer review to gather drama and clicks, which undermines everything academics claim to stand for. What does that do to the credibility of other papers?
> “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.
Exactly, came here to write this. Disclosing the contribution of ChatGPT is quite important from a transparency standpoint, though. I wouldn't mind seeing it in the methods section either.
True, but I still think ChatGPT is just a tool, maybe more powerful than Grammarly, but generally still fairly low level in terms of Bloom's taxonomy.
Authorship is supposed to be for intellectual contribution. If LLMs get good enough to start identifying weaknesses in my methods section or suggesting novel analyses to answer scientific questions, then maybe I'll reconsider my view.
Though there's still the argument made in this article that authors should be able to take responsibility for the work, which under our current legal framework programs can't do.
What we're witnessing here are the last gasps of the old world order. Powerful people are going to be tripping over each other in the next few years, lawyering and philosophizing around, trying to define "intelligence", "consciousness", "authorship" etc. to serve their interests, but it will all be for nothing.
Very soon you'll be able to tell an AI to "write a vampire romance novel", "design an Earth observation satellite", or "compose a symphony in the style of Beethoven", and the result will be plainly superior to what any human engaging in the same pursuit could produce. At that point, the whole "sentience" debate will fall apart, along with much of society as even the smartest humans are made essentially obsolete.
> soon you'll be able to tell an AI to "write a vampire romance novel", "design an Earth observation satellite", or "compose a symphony in the style of Beethoven", and the result will be plainly superior to what any human engaging in the same pursuit could produce
This overdramatises a simple, obvious amendment to copyright via case law. No debates around sentience. If a human didn’t create it, it isn’t IP.
You're thinking way too small. Nobody is going to care about copyright 15 years from now.
"MovieGPT, make me an epic fantasy film trilogy that's better than Lord of the Rings, with myself as the hero."
How could intellectual property possibly matter in a world like that? Nobody is going to still read books or watch films made by humans, because AI-generated stuff will be much better, much more plentiful, and even cater to individual desires.
> nobody is going to care about copyright 15 years from now
This is a perennial prediction. Absent substantiation, it’s hard to take it seriously.
> "MovieGPT, make me an epic fantasy film trilogy that's better than Lord of the Rings, with myself as the hero."
This output isn’t protected. The Lord of the Rings is. We shift to your AI paying when it references the content and move on. Nobody likes licensing fees. Magicking them away isn’t a solution.
> I think this end of intellectual property was predicted when the Constitution was signed.
True. And in 1890 it was predicted that cities would be drowning in horse droppings by 1960. But just because they were wrong then doesn't mean similar predictions today are also wrong.
> We shift to your AI paying when it references the content and move on.
Nope. Because the AI isn't "referencing" LotR, any more than a human author is when writing a generic fantasy novel about elves and dwarfs. Ideas aren't protected by copyright. Content, characters, etc. are. But AI-generated works aren't going to "copy" the LotR story or characters. They're going to do what humans have been doing since forever: draw inspiration from the entire body of human (and AI) creative works in existence. Copyright doesn't prevent that.
> going to do what humans have been doing since forever: Draw inspiration from the entire body of human (and AI) creative works in existence. Copyright doesn't prevent that
It’s not a human and copyright can. Again, simple shift. Could probably slip it into a budget bill.
Like I said, the last gasps. Maybe some legislature is indeed going to try that. It won't matter once AI turns the fabric of society inside out. Laws as we know them will stop meaning anything in practice. Everyone (including legislatures) will be forced to turn most decision making over to AIs, because AIs will be much better at it than humans and not using them means being conquered by those who do.
I feel like Netflix has been doing some form of this for years. Probably not generating entire scripts with AI, but almost certainly using algorithms to decide what to include.
While the output is "entertaining", it's not like you can compare it to actually good films. Netflix will never make something like The Godfather. What they make is like food without nutrition. That's fine, everybody likes sweets once in a while, but it won't keep you alive.
Of course I can't prove Netflix is doing this, but the argument would still hold true for AI generated content.
People will care because Disney et al. will use their legal power and money through lobbying to try to make laws blocking these new competitors: AI-generated IP (the movies, books, etc.).
I believe using large language models to generate papers will eventually be the trend. As an English-as-a-second-language speaker, writing these papers costs me too much time. If I were a native speaker, my output could be at least 2-4x. These AI models give me a lot of hope. Right now there are so many results sitting on hard drives collecting dust, but I don't have the time/energy to write a paper for them. This AI authorship issue needs to get resolved one day.
I believe the word should be "format" rather than "generate". The model is just formatting your content to fit the proper English "template"; it's not generating its own content in this case.
Mostly true. From what I observe, the process will be something like: give hints to the AI to create a bunch of templates of some sort, then humans select, rearrange, and put in the numbers and details, then the AI fixes the grammar.
What a camera does is a translative operation on something already existing in front of the operator. It cannot conjure things on its own (whether original or derivative), nor scale, due to that limitation. It cannot understand instructions beyond processing whatever it is pointed at (leaving aside the rampant post-processing becoming the norm in handheld devices) and act upon them to significantly change the content of its output (which object light last reflected from, and how). Isn't this a very reductionist analogy? I genuinely do not understand its purpose.
All analogies are reductionist along some axis; in this case, an AI will just sit there unless pushed by a human to do something.
Also, after the AI shutter button is pushed, there are a lot of human decisions made to refine the output. In the SD realm, people are sorting through hundreds of image outputs to find the gems amongst the nonsense.
Which is kind of amazing considering that film can still look better and requires only a rudimentary camera. Literally a beer can with a pinhole and film inside can take a picture.
I think this is totally legitimate. ChatGPT represents an aggregate intelligence and the crowd that built it, willingly or not, deserve recognition, and condemnation all the more so, depending on what ChatGPT spits out.
Everyone wants to appear fair and pragmatic. Instead of coming up with "scholarly" justifications for why AI can't be an author, why can't they simply update the policy to say that "to be an author, you must be eligible for human rights", or something along those lines?
It's worth noting that patent offices around the world have rejected AI listed as the inventor on patents. This has gone through trial and appeal and been heard by a panel of five judges at the highest level, who struck down the idea.
A patent is a contract between the government and the inventor: the government grants a protected monopoly over the patented invention, and in exchange the inventor discloses all details of the invention to the public. The primary purpose of patents is to get as many ideas as possible circulating in the public technology space as soon as possible. For this to make sense, the patented idea must be "non-obvious"; that is, it must be something one could reasonably expect to be kept secret, which is what patents protect against. If an idea can plop out of an AI, and that AI is generally available, then it is "obvious" in the sense that it's accessible to anyone, and thus the inventor doesn't deserve any special protection, because the public isn't getting a meaningful exchange for what could have been a real secret.
That's because if you said an AI invented something, and there was fraud or it was stolen from someone else, the people making money off the patent would just say, "Oops! Nobody should get fined or go to prison, because the computer did it all by itself." Imagine if Elizabeth Holmes had said the AI told her the Theranos machine worked. It lied, so now you have to put the AI in prison.
This is going to get really interesting really soon, I think: can an "AI" be capable of inventive "thought"?
Could, for instance, one teach an AI to come up with the concept of a screw[0] by training it on inclined planes and nails? Or at least on everything but screws/bolts and any other form of rotary inclined plane. If AI can come up with a past novelty, there is no reason it couldn't come up with a new one.
[1] https://www.medrxiv.org/content/10.1101/2022.12.19.22283643v...
[2] https://www.oncoscience.us/article/571/text/