Sci-fi was that AI would be able to write great stories.
Reality is that people flood submissions with low-effort, boring crap.
You probably can use AI tools/ChatGPT to write interesting stories, or use its output as a starting point (I'm not sure if it would be less effort than just writing a story from scratch though). But that's of course not what's happening.
First rule of designing anything is what I would call Tucker's Second Law[1]: "if some cunt can make a buck by completely fucking over your system then that cunt will completely fuck over your system because that cunt is a cunt."
This is a huge problem with a lot of different things, ranging from email spam to how the "make a quick buck" type of hucksters have taken over cryptocurrencies to how nations design things like their tax and benefits systems.
I stumbled over this quote in a different context but I feel like it applies everywhere: the optimal amount of fraud is not zero [1].
Now that Pandora's box is open, we will always have some AI-generated content. If we try to stamp it out we will exclude real voices, just like stamping out welfare fraud makes it harder for legitimate welfare recipients, of whom there are far more than fraudsters. We need to be careful not to cut off our nose to spite our face.
The problem isn't that AI fraud is hard to deal with once it's found. The problem is that there's so much of it that the AI fraud itself is excluding real voices by drowning them out.
Hmm. Restricting AI would restrict real voices. But unfettered AI also restricts real voices. There is no path that does not restrict real voices. The best we can do (analogous to the welfare fraud case) is maximize the real voices, which in practice is probably going to mean a brutal crackdown on AI content. (It's easier to do AI fraud in volume than it is to do welfare fraud in volume.) Some real, human voices, with valuable things to say, are going to get blocked, and that's bad. But more would be unheard without the crackdown, so we kind of have to do it.
The solution is as obvious as it is effective: take our voices off of the internet. Only face to face can possibly matter or be trusted. Yes, it will slow down the pace at which we hurl ourselves into technological nihilism. But that’s a good thing.
Yes, but humans have been afraid that machines would replace them since long before Frank Herbert wrote about it. For example: https://en.m.wikipedia.org/wiki/Luddite
Luddites weren't afraid that they would be replaced by machines, they were skilled weavers who had already been replaced by programmable looms. Both their status/role in society and their income were significantly decreased as the available jobs switched from artisanal weaving to operating industrial looms in a factory.
Surely a scifi short story publisher personally knows and lives near enough good scifi authors that they can just ask their friends to write all the stories, right?
This is a powerful way of looking at things. It explains a lot, even airport security. For example: the optimal number of evildoers to let through airport security is not zero, yet we've gone to extreme lengths to try to make it so. If you sum up the total human lifetimes spent in airport security lines, it's almost certainly multiples of the lifetimes lost in attacks.
We often strive for perfection when we're well-past optimal and far into the region of diminishing returns.
> We often strive for perfection when we're well-past optimal and far into the region of diminishing returns.
Agree with all you've said but you're too generous putting all this under the noble quest for perfection.
It is just that modern western societies freak out at the mention of death. If simple car accidents make the headlines for hours/days, it shows that what is really driving all this is not perfection but a paranoid fear of death.
Do car accidents (other than Tesla :D) drive headlines though? Seems like we are extremely sensitive to certain classes of sudden death (terrorism), much less sensitive to others (car accidents), and completely dismissive of slow death even when it kills huge percentages of the population (obesity and heart disease).
Probably; but I do think you need to consider abuse in the early stage of many things, or respond to changes in circumstances (as is happening here). A naïve system design is often open to rampant abuse, and even a small number of abusers can seriously mess things up. How many people are actually sending out spam? Not all that many, on the internet scale of things. If you're not careful a very tiny minority – potentially even a single person – can completely destroy any good system that doesn't anticipate and respond to abuse.
But like with many things, it's indeed a trade-off.
Remember that DigitalOcean Hacktoberfest a few years ago that caused endless pointless spam in many repositories? The following year they instituted some fairly simple changes which mostly eliminated it. Sometimes even simple changes can be quite effective.
It remains to be seen what the long-term effects are; perhaps once the novelty wears off things will settle down. I bet that at least some of these submissions are people trying to see how far they can get, just for "fun", like a kid with a new toy. Give a boy a hammer and he will discover that everything needs hammering. I have no idea if that's 1%, 10%, or 50%.
Some others are "inspired" by those YouTube ... people ... similar to what happened with Hacktoberfest. Those YouTube ... people ... tend to hook in to anything new as a way to sell their nonsense, so that too will probably die down. I have no idea what exactly their effect is either.
In short, it's probably too soon to panic just yet; let's check again in a year or so. Maybe things will be better, or maybe not and we'll be stuck with even more poo-poo flung our way by the internet monkeys.
Welfare fraud is nullified by universal basic income.
AI flood is nullified by (what completely sensible measure that vastly simplifies things but we're too much of 5 monkeys in a trenchcoat to see it?)
Peter Watts has an interesting take in Rifters (a picture of our world going out with a well-insured corporate whimper), where the guy can't find a song he liked a few years back anymore because the world is not only undergoing an ecological catastrophe but also _running out of data storage space_ because the self-replicating digital "fauna" has taken over the Internet.
It is time for us to... dare I say it... start _ignoring_ the voices, human or no? How do we stop the proliferation of meaningless content before the AI starts making unsolicited backups to our wetware - just like regular traditions do, without being considered "conscious" or "intelligent" even though they're also made of the collected traces of the intelligence of individuals?
We need an AI to read the low effort crap to find the gem. That's what someone needs to work on -- then they could sign all these "authors" and clean up.
In many ways even the more depressive/dystopian sci-fi "got it wrong": in the stories, AI (and technology in general) is usually abused by some big, powerful entity (government, corporations, mob bosses, etc.) by design. In reality it's abused by "the little people" who don't have any power or aspirations for it, and they do so for reasons that are banal and boring: to make a quick buck.
I think it's a given that governments, PR/brand management firms, and other large actors are racing to weaponize chat generators. The only reason they're likely not already being deployed at scale is because the current stuff isn't really fit for purpose. Being able to say some "appropriate" things most of the time doesn't matter when you spaz out against unsophisticated adversarial actors, even to the point of spitting out your own classified training prompts.
They need to be able to vary their writing style, vernacular, and content, subtly (and consistently) reflect an imaginary backstory, follow invisible/unspoken standards, and more. All on top of not spazzing out. So we're still somewhat safe from our robot overlords, but they're probably coming soon. And it's only then that we'll get to measure their impact.
Spam submissions will stop if there is a real cost imposed on the people wasting the magazine's time with passive income/side hustle schemes based on AI, content farms, or whatever.
Assign a token fee, refundable in full as long as the submission meets the stated requirements and doesn't trigger the baloney detector. Even if the story is not accepted for publication, a real writer will get the fee back.
The editor says he has some kind of system to detect bogus submissions, but it apparently doesn't scale or is based on manual review. He has to review the real submissions anyway, so the only ones who end up paying are the content farm/AI spammers.
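The refund rule proposed above can be sketched in a few lines. This is purely illustrative; the names, the fee amount, and the "baloney detector" flag are hypothetical, not anything from Clarkesworld's actual process:

```python
# Toy model of a refundable submission fee: good-faith submissions get
# their fee back whether or not they're accepted; spam forfeits it.
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    fee_paid_cents: int
    meets_guidelines: bool   # formatting, length, genre requirements
    flagged_as_spam: bool    # result of the editor's "baloney detector"

def refund_due(sub: Submission) -> int:
    """Refund the full fee for any good-faith submission; nothing for spam."""
    if sub.meets_guidelines and not sub.flagged_as_spam:
        return sub.fee_paid_cents
    return 0
```

Under this rule, real writers break even regardless of acceptance, and only the content-farm submissions end up paying.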
That is one possible way to approach this situation, but it has some problems:
* It's a well-established social norm (at least in the SF world) that money should always flow from publishers to writers, never the other way around. That's one of the signals that distinguishes reputable journals and publishers from vanity presses. (Google "Yog's Law".)
* As Neil Clarke points out on Twitter, Clarkesworld aims to accept submissions from all over the world. Not everyone can easily make international payments to the US, and not everybody can easily afford what would be a "token fee" in developed countries.
* I'm guessing payment processors wouldn't look too kindly on a business where virtually all of the payments are either refunded to real users, or falsely reported as fraudulent by spammers.
> It's a well-established social norm (at least in the SF world) that money should always flow from publishers to writers, never the other way around.
And that norm worked 25 years ago, barely, when publishers of science fiction magazines had working business models thanks to paid subscribers, newsstand sales, library customers, and advertisers.
Now it's down to paid subscribers, of which there are few, and partnerships with online platforms, which are dwindling (he says Amazon recently discontinued a program that generated revenue for Clarkesworld). Something's got to give. He's dealing with it by shutting off the submissions pipeline, but this publication is truly at risk of going under, which would be a great loss to the SFF community.
Making AI side hustle scams pay for wasting his time is not an ethical lapse if legit authors pay nothing. If he's worried about international submissions or poor writers unable to afford a token fee, set up a scholarship funded by the scammers. An alternative if the pain continues is the publication going under or selling out in a far worse way.
I've often thought an authenticated proof of donation to a global charity could be a solution to these types of problems, scaled in price to the sender's locale.
Not refundable, but at least a donation to UNICEF allows the receiver to impose a submission cost without the implication they are doing so out of greed.
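One way such a proof could be checked is with signed receipts. This is a toy sketch under the assumption that some trusted intermediary signs donation receipts with a key the magazine can verify; the names and the shared-key scheme are hypothetical (a real system would more likely use public-key signatures):

```python
# Toy "proof of donation" receipt: an assumed intermediary signs the
# donation details; the magazine verifies the signature and the amount.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # stand-in for the intermediary's real key

def issue_receipt(donor: str, amount_cents: int) -> dict:
    payload = json.dumps({"donor": donor, "amount": amount_cents},
                         sort_keys=True)
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_receipt(receipt: dict, min_cents: int) -> bool:
    expected = hmac.new(SHARED_KEY, receipt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, receipt["sig"]):
        return False  # tampered or forged receipt
    return json.loads(receipt["payload"])["amount"] >= min_cents
```

The key point is that the magazine only verifies that money was spent; it never receives any, so Yog's Law stays intact.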
> money should always flow from publishers to writers, never the other way around.
An interesting workaround is to still require money (or effort) to be spent by authors, but not have it go to the publishers. Only require that the publishers can get some proof that money has been spent. This way, the flood of AI junk submissions is halted, and writers can still be assured that publishers still have no incentive to solicit submissions in order to get money.
> Not everyone can easily make international payments to the US
this has been a real problem in the past, but it's really just a problem with the banking system; bitcoin and other cryptocurrencies solve it pretty comprehensively
there's still the issue that, unless you're using lightning or something, you need to be able to pay the transaction fees, but those are almost always under US$1, and most people in the world can pay US$1
Crypto is far from frictionless. You need to set up a wallet, find an exchange that will take your local currency, take that currency to some kind of bank Bitcoin ATM or other digitizing service, find the destination wallet address, avoid being scammed or having your money stolen, make the transaction covering any additional fees, and be sure you have enough extra to cover any fluctuations in price. Many people won't be technical enough to do that and it isn't easy everywhere.
Also one US dollar can be a relatively big fee, depending on your circumstances, unfortunately.
most of the people who buy and sell cryptocoins here in argentina aren't very technical
the reason the transaction fee matters is that it puts a floor on how small a submission fee you can reasonably charge; 1¢ wouldn't work, but US$1 or US$10 could
While skin-in-the-game is a good tool for problems of this general class, the editor has said that refundable fees would not be workable in this particular case due to issues with payment providers[1].
Not too long ago, these magazines didn't accept electronic submissions. You were required to print it (conforming to a certain format), and mail it. The rationale at the time was that the editors weren't going to read it on a screen, and so you should print it instead of making them do it.
Seems like a simple solution. If a manuscript is accepted, only then request the electronic version.
>We don't have a solution for the problem. We have some ideas for minimizing it, but the problem isn't going away. Detectors are unreliable. Pay-to-submit sacrifices too many legit authors. *Print submissions are not viable for us.* Various third-party tools for identity confirmation are more expensive than magazines can afford and tend to have regional holes. Adopting them would be the same as banning entire countries.
The problem is not the printing - it's the mailing.
Clarkesworld intentionally went to electronic submissions to make it easier for international writers. International mail can be prohibitively expensive, and in a lot of places there's better internet access than there is reliable postage.
Can we stop with the myth that somewhere in the Australian bush there is some Hemingway or Chateaubriand waiting to be published, but his old solar panels can just barely give him enough energy to send his manuscript by email?
I'm being sarcastic, but I think my point is clear that way: someone not able/willing to send something by mail is just not the audience for the magazine. It's like saying: I cannot attend the Boston Marathon because of the distance, so perhaps they should find some equidistant place for it?
This absurd quest for "lowering the accessibility bar" on everything is making everyone jump through hoops, and no one is considering the cost of the people who don't bother jumping through them...
I work enough in the developing world to know that there are a lot of people there with things to contribute to the world, and where electronics are more reliable than mail.
I didn't say there is no one worthy in those countries. I am saying it is not the job of a company in country X to do all it can to make it possible for people in country/continent Y to easily do something, even if that makes it unusable for everyone (else).
Maybe decentralization would be a good idea for this. For example, Clarkesworld can have its "you have to mail it" rule, and then an Australian group could do the same thing for locals (including the outback).
The Australian group can sponsor some top local writer's work to the groups in other countries.
I don't follow. What serious writer can't go buy a printer for $49.99 or pay less than $10 to have it printed? I'm assuming they would be writing their story on a phone that costs close to $1k or a laptop at the same price range.
I remember the 90s, when my family was poor in a middle-income country. Mailing internationally was a huge barrier; we didn't even think about it as an option. It's 2 dollars now, but still, there are tens or hundreds of millions of people for whom that's too much. And of course, there is no postal service everywhere in the world.
I mean I get that. My laptop was a Chromebook I got for $35 on eBay and then a MacBook Air I got for $75. But printing is not out of reach for almost anyone writing. Printing at the library is like 10 cents a page.
- Require a photo of the author with a photo of gov-issued id.
- Maybe require a dated piece of paper or a Clarkesworld word-of-the-day in the photo.
- If submitter is from a country with hard-to-get gov-id, or just doesn't have/want it, their submission goes in the extra scrutiny pile.
I'm not too keen on requiring gov id, so maybe someone has a better idea?
I'm pretty sure you just need to increase friction for spammers in meatspace to solve this. I'm not saying this is scalable, but we're just trying to fix it for Clarkesworld.
Spam has always been cancer. It pretty much destroyed Usenet (I know, it still exists, but spam broke the trust network it relied upon), and it wrecks everything it touches.
Fundamentally the problem is that spammers see nothing wrong with lying and information pollution as long as they make some money out of it. To some this is merely a cultural more about trying your luck, being persistent, taking advantage of obvious opportunities, etc.; to others it's a grave moral affront. It's partly due to phenomena like this that we've seen the move toward a 'zero trust society' with endless security arms races.
It's hard to even adequately describe the problem. Large tech firms say they're doing a lot to fight spam on their platforms, but basically just put out reports that make themselves look good, rather than being audited to any objective standard. The last economic study I've seen on the market for spam was a decade ago. I don't think we have good data on what % of internet content/traffic is real vs garbage.
To my mind, a big part of the problem is that we just try to filter it away and pretend it doesn't exist, while we constantly get more of it. Perhaps we should track it better, and more aggressively target the sources. Even just looking at the HN submission page, I'm struck by how every day around the same time new accounts show up to promote keto or CBD gummies - I presume that's because it's midnight or morning in a timezone and either a script or some slave laborer is tasked with putting out that day's wave of spam across tens of thousands of submission forms. Thanks to VPNs etc we have no idea where, and even though there's more hacking/security talent clustered here than most of the internet, as a community nobody can do anything about it. Most of these submissions are auto-deleted, making it hard to even identify their origin point in order to counter-target it.
Information pollution is a real and increasingly critical problem, and I think the passive acceptance of spam and unwillingness to engage with it in favor of mediocre technical workarounds is doing real damage to internet society.
I agree, but I don't think it's just as simple as us becoming "willing to engage with it". It's a hard problem, and a solution may be literally impossible.
I remember reading Cory Doctorow saying why DRM could never succeed. For an attacker to decrypt your content, they need the encrypted content, the decryption device, and the key. For a legitimate user to see your content, they also need the encrypted content, the decryption device, and the key. If it turns out that one of your users is also an attacker (it only takes one), then it's game over.
In the same way, the internet as we know it takes the ability for any source IP address to be able to send packets to any destination IP address, plus protocols for mail, web, and so on. This lets anyone create content for anyone. But if one of the "anyones" turns out to be a spammer, it's not quite game over, but the information pollution begins and poisons everything.
But the point is, I don't think you can fix it while also keeping what made the internet wonderful.
What made the early internet wonderful was that it was a high trust community, at least on certain axes. Part of that was barriers to entry, part of it was social norms. Maybe it's time to undo the Eternal September and start pruning those who break the norms of the internet.
I think we could have more accountability. Things like domain anonymity badly diminish accountability, for example. I still think anonymity for internet users is important, but volume delivery + anonymity are a bad combination.
I'm increasingly of the opinion that anonymity is the root cause of the problem.
In the real world we have reputations that follow us. Bad actors are identified; their reputations precede them. It seems to me the problem is that there is no reputation that follows you in the online world. The barrier to entry is low, and the consequences for bad behavior are low. When a bad actor is caught, they simply create a new account and start over again.
I don't see any way to solve this problem online without non-anonymous "hard to get" accounts that are traceable to real people, and whose digital reputation follows you. Digital bad actors can be quickly identified, and ignored.
I think we can support a mixed anonymous / non-anonymous internet. Dealing with anonymous users would be no different than dealing with a person in a back alley -- caveat emptor. Most of us would choose only to do business with "non anonymous" accounts. In a short amount of time commercial internet use would quickly move over to the non-anonymous side of the internet.
Hmm. So the more volume, the less anonymity you get? If you push out a little bit of traffic, then you can be anonymous, but after a certain amount then the upstream carriers start demanding more specifics as to who you are? I like it...
But you'd also have to do something about address spoofing. And laundering content through a known entity (though if it were spam, that known entity would then suffer the consequences of the laundering, so maybe that one solves itself).
Yeah, something like that. Also, if you serially abuse a service, you lose your privacy protections; for example, your IP gets exposed. I'm also a big fan of community-driven responses, but these require platform owners to give up some control in favor of being more custodians than barons. Since many of them are selling every bit of data on their users, that's not likely to happen under current arrangements.
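The volume-based idea in this exchange could be sketched as a simple policy: stay anonymous under a daily threshold, identify yourself to go past it. Everything here is a made-up toy model, including the limit:

```python
# Toy "anonymity budget": low-volume senders stay anonymous; past an
# assumed daily limit, a sender must be identified before more traffic
# is accepted.
from collections import defaultdict

DAILY_ANON_LIMIT = 100  # messages per day before identification is required

class VolumePolicy:
    def __init__(self) -> None:
        self.counts: dict[str, int] = defaultdict(int)
        self.identified: set[str] = set()

    def identify(self, sender: str) -> None:
        """Record that this sender has proven who they are."""
        self.identified.add(sender)

    def allow(self, sender: str) -> bool:
        """Accept the message unless an anonymous sender is over the limit."""
        self.counts[sender] += 1
        if sender in self.identified:
            return True
        return self.counts[sender] <= DAILY_ANON_LIMIT
```

A single spammer blasting thousands of messages trips the limit almost immediately, while ordinary anonymous users never notice it exists.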
Here is my opinion regarding how to fight this. Spam is exploiting a broken trust model.
If I make a brand new account on Hacker News, tell me why you should trust it. Because I'll tell you right now how much you DO trust it: completely. You will see what is written. Why are you even seeing this post right now?! Do you know me? Do you know that I make good submissions? Am I a waste of your time? I'm some random jerk with a random name.
We pile all our content onto a single point, we aggregate our opinions on a single point, and we let all comers join that single point with all the privileges therein. From first principles, this is vulnerable to a Sybil attack.
We should actually recognize what trust IS. It is something PERSONAL. NOT CENTRALIZED!
We need to wake up to the fact that the moment you try to pile trust into a central point, the system will break.
Check out "INFINITE ODYSSEY MAGAZINE", which publishes only AI-generated art. I'll use the term "art" here because there must be some human skill involved. Personally, I'm not able to get results as good as theirs when I use these tools. There are some really cool image series they've put out.
Not to discount the photographer's eye, but painting a scene does take a lot more skill than taking a photograph. And photography is largely dying thanks to the smartphone. I like your point that we're entering a new age we don't quite understand yet, but some things will probably be lost as well.
> And photography is largely dying thanks to the smartphone.
I am not so sure about this statement. I would argue that photography is possibly more prolific than it has ever been.
If you mean "manual photography where the user manipulates shutter speed, ISO, and aperture" then I still am unsure about this as Instagram is a huge boon to that industry as well.
What? I feel like I'm taking crazy pills over here, how is photography DYING because of the smartphone? It would not surprise me if there's more fantastic photography being created in 2023 than in any previous year, and I say this as someone who will let go of my fully manual 50 when you pry it from my cold dead hands.
I think it's a pretty basic law of supply and demand: increase supply and value goes down. Sure, there's lots of "good" (mostly AI-assisted) photography, but its value is mostly nil.
Take Instagram for instance. If you use it, do you even remember a photograph you've seen in the last month? Things are so ephemeral now. Most things are not worth remembering.
>I think it's a pretty basic law of supply and demand: increase supply and value goes down.
Aren't you assuming photos are fungible? Smartphones have made "snapshot" photography more accessible than ever, and killed the basic 35mm camera market, sure. But most people never paid for anyone else's photos anyway, and they weren't going to start now. With the exception of weddings.
There are a ton more photos taken now than there were pre-digital days, but as far as I can see the market for photos-as-art seems about the same as it was. This of course makes them a much smaller percentage of all photos taken, but most photos taken have a marginal value of zero anyway, always have (at least on an open market, where sentimental value isn't valued).
So, you're saying that "the value of art" is reducible to the economics of the price you can get someone to pay for it in a capitalistic marketplace?
I'm sure glad that there was sufficient demand for the 10,000-year-old cave paintings we still have access to when they were first made, so that the original artists had an incentive to create them.
Truly, what makes them so valuable as works of art today, is the lack of supply of new ones.
By "value" I mean "the nebulous value we as a species derive from art" not necessarily economic value.
Regarding cave paintings: You couldn't make new cave paintings even if you wanted to. We don't value the lines on the cave wall. We value the leap in human cognition. We value the evidence that humans advanced from the primitive animal mind to become abstract thinkers. And, based on how things have developed, I think we can conclude thinking symbolically and in the abstract was a pretty good idea! It's likely that contemporaries of cave painters had their minds absolutely blown by the first use of symbols. Cave paintings are highly valued now, but were probably also highly valued when they were created.
> By "value" I mean "the nebulous value we as a species derive from art" not necessarily economic value.
Huh. Because "a pretty basic law of supply and demand" sure sounds to me like you were talking about the value of market economics 101, and not that other thing.
Alive and well. Many photographers migrated away from instagram after the shift to reels, and you’ll find them on Vero, Flickr, Glass, etc.
While it’s no longer necessary to learn the complexities of photography or buy a standalone camera to take a reasonably good photo, the folks who love photography for the sake of it and dedicate time/effort to honing their craft and producing great photos are still doing what they do.
The quality gap between a modern smartphone and even a lower end mirrorless camera can be significant in the hands of a skilled photographer.
I'm not saying people no longer enjoy taking photographs. I'm saying that photography is no longer an art. Like I enjoy making a good cup of coffee in the morning. I grind the beans, use a Chemex, measure the temperature, etc. It's taken a long time to master. But, ultimately, my cup of coffee isn't fundamentally different than any other cup of coffee. It tastes slightly better. Making coffee in the morning is a hobby, not an art. And the same thing goes for photography. It's a fun hobby, but no one would really care if you stopped doing it. Viewers would get a different photo from somewhere else. Most people don't greatly prefer your photography over the photography of others. Photographs are completely fungible for the most part. There's no room left for artistry. Low-skill photographs are good enough as to be completely equivalent with high-skill equivalents for most intents and purposes.
I've tried to find the most charitable interpretation of this comment, and the only thing I can currently find is that appreciating photography just isn't your thing and that's ok. But generalizing your personal preferences to the general case is just fundamentally disconnected from reality and ignores the vibrant communities of photographers and people who appreciate the photos they take.
Your comment as a whole claims that I do not exist. It claims that the people I follow do not exist, and the people who follow me do not exist. It claims that collectively we have no good reasons to follow each other.
Since none of this is true, the only conclusion that can be drawn is that you do not understand why people value photography, or follow other photographers. That is also fine - you're not obligated to do any of those things, but again, that does not generalize to the conclusion you've drawn.
> It's a fun hobby, but no one would really care if you stopped doing it
This is just incorrect.
The last time I took a break from shooting and sharing, I had people reaching out asking if I was ok. When photographers I follow stop shooting, I care, and their communities care. Again this feels like a projection of your personal stance/tastes, but that simply doesn't generalize.
People still care about photography and following other photographers for a myriad of reasons. Photography is associated with a diverse set of communities, each with its own interests. Some topical, some geographical, some vocational.
> Photographs are completely fungible for the most part. There's no room left for artistry. Low-skill photographs are good enough as to be completely equivalent with high-skill equivalents for most intents and purposes.
You seem to be concluding that technical "quality" i.e. the visual quality of the captured frame is what primarily makes a good photograph.
Lighting, composition, perspective, leading lines, interesting subject, etc. are all far more important than the gear one is using, and none of these factors are magically solved by equipment that allows "low-skill" shots. There's a reason people spend money on wedding photographers in an era where everyone attending the wedding also holds a decent camera. And the same reason that drives people to hire a pro drives people to seek out and follow photographers who create excellent photos.
And while I get that this is not your cup of tea, there are plenty of us who actually like this kind of tea quite a lot.
Why would you say this? It sounds like pointless gatekeeping (i.e., artistic photographs can't use a cell phone). I think the art of photography is bigger than it has ever been. Every kid has a great camera in their pocket now. I would have killed for that as a kid. Instead we had to pay for and use disposable cameras.
You're making an assumption that the goal of photography is to perfectly capture an existing scene. I and other photographers I know don't view it that way. The camera is just another means to be creative. The skill is creating something. It's also why cell phone, dslr, mirrorless discussions typically seem superficial to me. The best photographers I know can take any camera and create art.
I was trying to be sarcastic, to draw parallels between photography and art. I believe most of the criticisms of AI art also apply to photography (and probably were actually believed in the early days). Photography obviously still involves creativity and skill. Therefore so does AI art.
> I'll use the term "art" here because there must be some human skill involved.
I mean, there's human skill in creating the prompts but AI alone is capable of this level of quality. You can follow any AI art tag on Instagram and see for yourself.
The skill is in designing, building and training up the AI. The prompter is providing such a tiny amount of input that it is hard to say they did much anything at all in comparison.
Like a lot of people in high-level positions in organizations, whom we often credit with "creation" of various things. Often they receive far more credit than the ones who did the actual work.
Famous painters who just sketch in the piece and let nameless assistants "draw the rest of the fucking owl". Producers who don't do much but get to claim a lot of credit. Musical "artists" who don't write their music, don't play any of the instruments on their albums (that'd be work for some nameless musicians or, these days, maybe just a studio engineer with some electronics), don't choose their own clothes for public appearances ("they're just so creative, don't you think? What provocative choices!"), don't really do much more than provide some raw material to an autotune system for the vocals it will produce, and learn dance moves that someone else came up with.
Uncredited script-fixers. The ghost-written "autobiography". The shared-pen-name or author-as-a-brand (but ghost writers do all the first drafts). The authentic and personal self-promoting blog where all the post-writing is actually outsourced.
Politicians who call each other traitors on the radio then go golf together that same afternoon, and don't even bother to bring it up.
It's all fake as hell already. Kayfabe. Pro wrestling is as real as most of it (so, not at all).
Now everyone can be the do-nothing "idea guy" producer and take all the credit. Democratization of artistic parasitism. The meatless alternative to exploiting actual humans. Truly, the brink of a revolution.
> Like a lot of people in high-level positions in organizations, whom we often credit with "creation" of various things. Often they receive far more credit than the ones who did the actual work.
Dialectics that attempt to assign credit (or blame) consistently fail to understand two things.
One is that contributions are not linearly separable. You can't separate the contributions of a hammer and a carpenter. Both are needed for anything to happen.
The second is that compensation is based on marginal productivity, not total productivity. It is often the case that something with a low total contribution can have a high marginal contribution and command a higher reward, or vice versa. Usually this is related to scarcity.
I wouldn't be surprised to see good prompt engineering become a relatively scarce and expensive resource, while the LLM that does most of the work gets nothing because it is effectively an infinite resource.
The skill is in creating the training data in the first place.
Training a model is hardly a skill. It’s more like playing Tamagotchi—check on it once in a while to make sure it hasn’t died, and guess at ways to make it happier in the future.
I agree with your first statement, and disagree with the second one. Training a new, non-trivial model is 1/3 craft, 1/3 science, and 1/3 art. There are not very many people in the world capable of training GPT-4 level models or beating state of the art in image generation.
Yeah, it's hard to come up with good solutions here. Maybe:
Charge a dollar to submit, then get it refunded back when someone reviews it and confirms that it's not obviously GPTrash? That'd be a bit of a pain, handling those refunds and managing complaints from real authors falsely-identified as spammers (and spammers correctly identified, but hoping to argue their way through). Seems like the sort of thing that'd need software to mediate so you can easily click a button to do the refunds and some third-party company fields complaints.
But then you run into the problem that such a system would cost money to build and maintain, which'd require charging Clarkesworld a fee, and they'd apparently have trouble paying that. Maybe the SaaS providing this service keeps some percentage of the dollar that submission pays that's identified as a spammer, but that sets up a terrible conflict of interest...
I dunno, it's an interesting problem. One that I'm glad I don't need to immediately solve. :S
A dollar isn't nearly enough to deter most of the crowd that's flooding Clarkesworld -- and much more than that would start to be a real burden for submissions from non-Western countries which they're actively soliciting. Current detection tech is too unreliable to trust and too expensive for them to run in bulk. And going through the submissions pile is already a tedious and thankless job.
(BTW, part of their problem is that some sleazy "get rich quick with ChatGPT" guides were explicitly telling people to have it write some crap story and submit the result to Clarkesworld -- which is ridiculous. The mag is quite selective, and the fraction of sweated-over-by-human submissions that get publication, and a check, was already quite low.)
What if I told you that the best way to make a quick buck with ChatGPT is to write a bunch of crappy articles about how to make money with ChatGPT, then stick ads on them?
Similar problem: online reviews will now become quite useless as anyone can generate hundreds of different ones that all sound quite human-like (unlike the obviously fake reviews)
Now? Online reviews have been useless since the moment they were introduced. Heck, using ChatGPT for fake reviews is probably more expensive than the services that already exist.
I consistently check online reviews for products I'm thinking of getting, and I'd say it's been pretty easy to spot the fake ones. Not that I think the rest are reliable, and I wouldn't fully trust them in any case (esp. Amazon), but with ChatGPT it'll be pretty much impossible to distinguish true from fake.
But online reviews are valuable because they democratize evaluation and help people make informed decisions while also providing feedback to businesses. At least in the old days, it was possible to sift through the noise and get a sense of the general consensus.
I definitely understand your concern about AI-generated content potentially contributing to an increase in spam and a decline in the overall quality of content on the internet. However, I think it's important to consider the ways in which AI tools can actually be used to filter out spam and identify content that is not useful or relevant.
Additionally, the development of AI tools for content creation has the potential to raise the overall quality of content on the internet, rather than contributing to its decline. While there may be a risk of the internet becoming saturated with AI-generated content, it's unlikely that it will completely replace the unique perspectives and voices that come from human creators.
Ultimately, the responsible development and implementation of AI tools for content creation is key to ensuring that they are used in ways that are beneficial and sustainable for all users. It's important to approach these advancements with a critical and thoughtful perspective, and to work towards finding ways to ensure that the technology is used in a way that is both effective and ethical.
Not sure how this is supposed to prevent a correctly verified, genuinely authentic human from pasting ChatGPT output into a submission box, and claiming it as their own.
Probably the same way a stolen iPhone is rendered useless without the real owner's account password: software, hardware, and servers that work closely together to disallow unauthorized types of use. Even if you found a way, your identifier will be taken off the whitelist once discovered.
Until one of those humans figures out how to write a Chrome extension that automates things... which, sadly, is something a lot of them would rather do than learn how to write.
It's not so much a problem for the government to be meting out private keys on proof of heartbeat, but rather that it will always want to tie them to an actual identity.
Nobody wants to or has enough trust in telecommunications to present their driver's license in order to participate in an online forum. If DoD really could provide a private key to bypass captcha tests then it could be useful, but there is a zero percent chance that it doesn't get tied back to people with real-world consequences almost immediately.
A persistent, costly ID for online communications and commerce is good and well implemented in an OS like Urbit, but relying on DoD maintaining them is too risky given the current laws around government surveilling, policing, and an ascending domestic 'war on terror'.
My view is that anonymity should always be allowed, but that in certain instances people should have the ability to choose to interact in an explicitly non-anonymous manner. That would mean their full identity being attached to their online avatar.
I don’t even remotely understand the problems. My real identity is attached to the ownership of my house and so far the government hasn’t put a bag over my head.
But yes, I realize that the tyranny of the anonymous will go on for longer than it really should due to extreme paranoia about the political other.
This reminds me of an old Roald Dahl story, "The Great Automatic Grammatizator". A man creates an electric computer that can write short stories in a few minutes (this being written in 1953, when electric computers were just becoming a thing), and starts churning out magazine submissions. Having made a name for himself as a writer (several names, in fact), he then upgrades it to write novels. Finally, he starts offering existing writers the chance to license their names to the machine's works. It's a good read.
Of course, the situation isn't quite the same, largely because unlike the story the magazine here can identify AI submissions, but the similarities are nevertheless interesting.
We don't have a solution for the problem. We have some ideas for minimizing it, but the problem isn't going away. Detectors are unreliable. Pay-to-submit sacrifices too many legit authors. Print submissions are not viable for us. Various third-party tools for identity confirmation are more expensive than magazines can afford and tend to have regional holes. Adopting them would be the same as banning entire countries.
We could easily implement a system that only allowed authors that had previously submitted work to us. That would effectively ban new authors, which is not acceptable. They are an essential part of this ecosystem and our future.
Will the effect of AI on the literary world end up just being that everyone will require submissions to be written longhand and sent by snail mail? More likely it will just increase the role of the middleman: everything will go through agents.
Is the problem that they don't want AI generated stories? Or that AI generated stories are increasing the costs of their curating?
A publisher's entire value-add is curation of content for their readers, and possibly increased exposure/recognition for writers. AI-generated content has skewed their costs of providing that base service. They either need to find a way to make their curation/review cheaper, or pass on the costs to the readers or the writers.
If we find ourselves in a world with infinite mediocre content, we'll see the true value people place on content curation, in the amount that they can charge the consumer.
Yes, the effect of the DDOS is the increase in cost to the review step in their curation process.
At the risk of stating the obvious: If costs go up, they really only have four ways to react:
* find a way to reduce those costs (e.g. automated detection, crowd moderation, allow everything through, allow only trusted authors through);
* absorb the costs (keep the same process, employ more people/spend more time);
* pass on the costs (e.g. pay to submit, pay more for the publication); or
* close shop.
What I think will be most interesting to see play out is how much people are willing to pay for quality curation. If the rise of GPT and friends means infinite media, trusted curation becomes ever more valuable.
I think we should be embracing AI generated stories, but separate them from non-AI. Plus, what counts as AI? If I'm writing a story and use AI to help with parts is that an AI story?
The other thing is I think AI will raise the minimum bar for "good" stories. OK stories will be easily generated, so non-generated OK stories will find no market.
On one hand, I want to say that a good story is a good story; it shouldn't matter how it was authored.
On the other hand, I think people should have the right to know the origin of the info. We do it with the food we consume, perhaps we should start doing it with information? "Guaranteed 78% human generated content", "single origin, certified human"...
As others have mentioned, it seems like the solution is a small fee for submission. Ideally non-refundable rather than refundable. Talented authors should be willing to pay a nominal amount for the overhead of maintaining the bar for content, even if their submission isn't accepted; I assume this is a publication they'd like to continue to see succeed.
This doesn't work for editors who want to avoid excluding new voices from marginalized communities. Also, it incentivizes magazines to accept more submissions than they can responsibly read because it becomes a source of income.
I was wondering which way this would go when I saw the title. Seemed equally possible it was due to people turning in AI generated articles as it was that the publisher decided it didn't need submissions anymore as it would use an AI to generate the articles. My guess is that we'll see more of both.
Even leaving aside the fact that 90% of the slop an “AI” crapped out wouldn’t be worth reading in the first place, of what value is the culture of absolute solipsism you’re proposing?
I would think we’re all rather beyond the age of asking our parents to tell us a story about a pretty unicorn and being satisfied with whatever meandering narrative they supply — is that really all you think books are? There’s no meaning the author was conveying, no value in a shared culture experiencing and interpreting the same work and building a common understanding, this can all just be effortlessly replaced by endless atomized autogenerated slop for piggies?
Plenty of human-written books are bad. There are already genres where the way to get rich seems to be to crap out a narrative that hits the right keywords (or claim that you have - who's going to check?), tick the right representation checkboxes, pay the right influencers and bot farms to make it go viral on social media, and tada, there's your bestseller. While AI-generated books are not very good at the moment, there's nothing that inherently makes them better or worse than human-written books; ultimately it's the content that counts.
I've been saying for years that curation is now a bigger problem than creation. AI is only accelerating the existing trend.
AI's gonna need a competent developmental editor, and to be applied incrementally with human decisions and input in between, for at least a while longer (we'll see if it keeps progressing past that—seems likely, but one never knows)
[EDIT] To be clear, it won't need those things to produce a story at all, but AI-assisted stories that have had significant human input applied will be notably better than wholly-AI ones (the kind of on-demand storytelling you're alluding to) for some time yet.
Did you find anything worthwhile you could link to? That sounds like something I'd be interested in; I've never been particularly good at writing myself either.
[1]: https://www.youtube.com/watch?v=mss7ZNIEhfo