
Money quote:

> There’s also a social contract: when we create an account in an online community, we do it with the expectation that people we are going to interact with are primarily people. Oh, there will be shills, and bots, and advertisers, but the agreement between the users and the community provider is that they are going to try to defend us from that, and that in exchange we will provide our engagement and content. This is why the recent experiments from Meta with AI generated users are both ridiculous and sickening. When you might be interacting with something masquerading as a human, providing at best, tepid garbage, the value of human interaction via the internet is lost.

It is a disaster. I have no idea how to solve this issue, I can't see a future where artificially generated slop doesn't eventually overwhelm every part of the internet and make it unusable. The UGC era of the internet is probably over.






Oh, there are solutions. One is a kind of socialized trust system. I know that the Lyn Alden I follow on Nostr is actually her not only because she says so, but also because a bunch of other people follow her too. There are bot accounts that impersonate her, but it’s easy to block those, as it’s pretty obvious from the follower count. And once I know a public key that Lyn posts under, I’m sure it’s her.

She could start posting LLM nonsense, but people would be quick to point it out and start unfollowing. An important part is that there’s no algorithm deciding what I see in my feed (unless I choose one), so random LLM stuff can’t really get into my feed unless I let it in.
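A toy sketch of that follower-based scoring (the graph and names are invented for illustration; this isn't any real Nostr client's API): score an unknown pubkey by how many of the accounts you already follow also follow it.

    # Toy follower-based "socialized trust" scoring (illustrative only).
    follow_graph = {
        "me":    {"alice", "bob", "carol"},
        "alice": {"lyn_real", "bob"},
        "bob":   {"lyn_real", "carol"},
        "carol": {"lyn_real", "lyn_fake"},
    }

    def trust_score(me: str, candidate: str) -> int:
        """How many accounts I follow also follow the candidate key."""
        return sum(1 for friend in follow_graph[me]
                   if candidate in follow_graph.get(friend, set()))

    for key in ("lyn_real", "lyn_fake"):
        print(key, trust_score("me", key))  # lyn_real: 3, lyn_fake: 1

The impersonators never accumulate follows from accounts you already trust, so they stand out immediately.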

Another option is zero-knowledge identity proofs that can be used to attest that you’re a human without exposing PII, or relying on some centralized server being up to “sign you in on your behalf”

https://zksync.mirror.xyz/kWRhD81C7il4YWGrkDplfhIZcmViisRe3l...
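Real systems use succinct non-interactive proofs (the link above covers one approach), but the core trick fits in a few lines. Here's a minimal interactive Schnorr-style sketch with toy parameters I picked for illustration; a deployment would use a standardized group and the Fiat-Shamir transform:

    import secrets

    # Schnorr identification: prove knowledge of a secret x with
    # y = g^x mod p without revealing anything about x.
    # Toy parameters, for illustration only.
    p = 2**127 - 1                # a Mersenne prime
    g = 3

    x = secrets.randbelow(p - 1)  # long-term secret ("who I am")
    y = pow(g, x, p)              # public identity, registered once

    # One round of the proof:
    r = secrets.randbelow(p - 1)  # prover's fresh nonce
    t = pow(g, r, p)              # commitment, sent to the verifier
    c = secrets.randbelow(2**64)  # verifier's random challenge
    s = (r + c * x) % (p - 1)     # prover's response

    assert pow(g, s, p) == (t * pow(y, c, p)) % p  # verifier's check
    print("proof accepted; x was never revealed")

The verifier learns that the prover holds the key behind y, and nothing else.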


How can ZK approaches prevent people from renting out their human identity to AI slop producers?

By just making it more expensive. We’re never going to get rid of spam fully, but the higher we can raise the costs, the less spam we get.

EDIT: Sorry, I didn’t answer your question directly. So no, it doesn’t prevent that, but it does make spam more expensive.


Well, the end of open, public UGC content anyway.

I have heard of Discord servers where admins won't assign you roles giving you access to all channels unless you've personally met them, someone in the group can vouch for you, or you have a video chat with them and "verify."

This is the future. We need something like Discord that also has a webpage-like mechanism built into it (a space for a whole collection of documents, not just posts) and is accessible via a browser.

Of course, depending on discovery mechanisms, this means this new "Internet" is no longer an easy escape from a given reality or place, and that was a major driver of its use in the 90's and 00's - curious people wanting to explore new things not available in their local communities. To be honest, the old, reliable Google was probably the major driver of that.

And it sucks for truly anti-social people who simply don't want to deal with other people for anything, but maybe those types will flourish with AI everywhere.

If the gated hubs of a possible new group-x-group human Internet maintain open lobbies, maybe the best of both worlds can be had.


This strange reliance on Discord as some sort of "escape from Web 3.0" is silly to anyone who knows what Discord is (modern AOL) and how centralized it is. It's just the same corporate walled garden with more echo-chambery isolation.

Discord, or the Death of Lore:

https://news.ycombinator.com/item?id=35050858

(Even when something like a wiki exists, most of the actual information will still be contained in the lore, itself blackholed by a deep-web platform like Discord.)


Invite only forums or forums with actual identity checking of some sort. Google and Facebook are in prime position to actually provide real online identity services to other websites, which makes Facebook itself developing bots even funnier. Maybe we'll eventually get bank/government issued online identity verification.

Online identity verification is the obvious solution; the only problem is that we would lose the last bits of privacy we have on the internet. I guess if everyone were forced to post under their real name and identity, we might treat each other with better etiquette, but...

> I guess if everyone were forced to post under their real name and identity, we might treat each other with better etiquette, but...

But Facebook already proved otherwise.


Optimistically, if all you want to do is prove you are, in fact, a person, and not prove that you are a specific person, there's no real reason to need to lose privacy. A service could vouch that you are a real person, verified on their end, and provide no context to the site owner as to what person you are.

That doesn't stop Verified Humans(TM) from copying and pasting AI slop into text boxes and pressing "Post." If there's really good pseudonymity, and Verified Humans can have as many pseudonyms as they like with none of them connected to each other, one human could build an entire social network of fake pseudonyms talking to each other in LLM text, all with impeccable Verified Human labels.

The identity provider doesn't need to tell the forum that you are 50 different people. They could have a system where, if the forum bans you, the forum would know it's the same person they banned if you reapply. As for people making a real-person account and then using it to post AI stuff: yes, there will have to be a way to persistently ban someone through anonymous verification, but that's possible. Both the identity verifier and the forum will be incentivized to play nice with each other. If an identity provider is allowing one person to make 50 spam accounts, the forum can stop accepting verification from that provider.
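One concrete way to get "bannable but unlinkable" is for the provider to derive a per-forum pseudonym with a keyed hash. This scheme is my illustration, not any deployed spec:

    import hmac, hashlib

    # The provider derives a stable per-forum pseudonym: the same person
    # always maps to the same ID on a given forum (so bans stick), but IDs
    # on different forums can't be linked without the provider's key.
    PROVIDER_KEY = b"secret held only by the identity provider"

    def forum_pseudonym(user_id: str, forum_id: str) -> str:
        msg = f"{user_id}|{forum_id}".encode()
        return hmac.new(PROVIDER_KEY, msg, hashlib.sha256).hexdigest()[:16]

    print(forum_pseudonym("alice@example.com", "forum-a"))  # stable -> bannable
    print(forum_pseudonym("alice@example.com", "forum-b"))  # different -> unlinkable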

I just want to semi-hijack this thread to note that you can actually peek into the future on this issue, by just looking at the present chess community.

For readers who are not among the cognoscenti on the topic: in 1997 supercomputers started playing chess at around the same level as top grandmasters, and some PCs were also able to be competitive (most notably, Fritz beat Deep Blue in 1995 before the Kasparov games, and Fritz was not a supercomputer). From around 2005, if you were interested in chess, you could have an engine on your computer that was more powerful than either you or your opponent. Since about 2010, there's been a decent online scene of people playing chess.

So the chess world is kinda what the GPT world will be, in maybe 30ish years? (It's hard to compare two different technology growths, but this assumes that they've both hit the end of their "exponential increase" sections at around the same time and then have shifted to "incremental improvements" at around the same rate. This is also assuming that in 5-10 years we'll get to the "Deep Blue defeats Kasparov" thing where transformer-based machine learning will be actually better at answering questions than, say, some university professors.)

The first thing is, proving that someone is a person, in general, is small potatoes. Whatever you do to prove that someone is a real person, they might be farming some or all of their thought process out to GPT.

The community that cares about "interacting with real humans" will be more interested in continuous interactions than in "post something and see what answers I get," because long latencies are exactly where GPT will answer your question, and give you a better answer anyway. So if you care about real humanity, that means realtime interaction. The chess version is: "it's much harder to cheat at Rapid or Blitz chess."

The second thing is, privacy and nonprivacy coexist. The people who are at the top of their information-spouting games, will deanonymize themselves. Magnus Carlsen just has a profile on chess.com, you can follow his games.

Detection of GPT will look roughly like this: you will be chatting with someone who putatively has a real name and a physics pedigree, you ask them physics questions, and they appear to have really vast physics knowledge. But then, when you ask a simple question like "and because the force is larger, the accelerations will tend to be larger, right?", they take an unusually long time to say "yep, F = ma, and all that." That's how you know this person is pasting your questions into a GPT prompt and pasting the answers back at you.

This is basically what grandmasters look for when calling out cheating in online chess. On the one hand there's "okay, that's just a really risky way to play 4D chess when you have a solid advantage and could just build on it with more normal moves" -- but the chess engine sees 20 moves down the road beyond what any human sees, so it knows those moves aren't actually risky. And on the other hand there's "okay, there's only one reason you could possibly have played that last rook move, and it's if the follow-up was to take the knight with the bishop; otherwise you're just losing. You foresaw all of this, right?" -- and yet the "person" is still thinking, because the actual human didn't understand why the computer was making that rook move, and now needs the computer to tell them that taking the knight with the bishop is the appropriate follow-up.
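The timing tell can even be made quantitative. A toy sketch (every number here is invented for illustration): a human's answer time roughly tracks question difficulty, while someone relaying to GPT pays a near-constant round-trip cost even on trivial questions.

    # (difficulty 1-10, seconds to answer) -- invented sample data
    human   = [(1, 2), (3, 8), (5, 15), (8, 40), (10, 90)]
    relayer = [(1, 25), (3, 27), (5, 30), (8, 33), (10, 35)]

    def easy_question_lag(samples, easy_cutoff=3):
        """Mean answer time on questions at or below the cutoff."""
        times = [t for d, t in samples if d <= easy_cutoff]
        return sum(times) / len(times)

    for name, samples in (("human", human), ("relayer", relayer)):
        print(name, easy_question_lag(samples))  # human ~5s, relayer ~26s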


> you will be chatting with someone who putatively has a real name and a physics pedigree, you ask them physics questions, and they appear to have really vast physics knowledge. But then, when you ask a simple question like "and because the force is larger, the accelerations will tend to be larger, right?", they take an unusually long time to say "yep, F = ma, and all that." That's how you know this person is pasting your questions into a GPT prompt and pasting the answers back at you.

Honestly, even in my area of expertise, if the abstraction/skill level or the kind of wording suddenly changes (in your example: to much less scientifically precise wording, more like how a 10-year-old child would ask), it often takes me quite some time to adjust (it completely takes me out of my flow).

So, your criterion would yield an insane amount of false positives on me.


My parents use Facebook a lot - and the things some people say under their real names are really mind-blowing.

Posting with IRL identity removes the option to back down after a mistake and leads to much worse escalations, because public reputations will be at stake by default.

> with actual identity checking of some sort

I am hoping OpenID4VCI [0] will fill this role. It looks to be flexible enough to preserve public privacy on forums while still verifying you are the holder of a credential issued to a person. The credential could be issued by an issuer that can verify you are an adult (a bank, for example). Then a site or forum works with a verifier that can check whatever combination of data from one or more presented credentials it needs. I haven't dug into the full details of the implementation and am skimming over a lot, but that appears to be the gist of it.

[0] https://openid.net/specs/openid-4-verifiable-credential-issu...
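Skimming over just as much as the comment above does, the gist can be sketched: an issuer signs a minimal claim ("holder is an adult"), and any verifier can check the issuer's signature without learning who the holder is. A stand-in sketch using an Ed25519 signature via the third-party cryptography package -- not the actual OpenID4VCI wire format:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuance: a bank signs a minimal claim about the holder.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()   # published by the issuer

    credential = json.dumps(
        {"claim": "holder_is_adult", "issuer": "example-bank"}).encode()
    signature = issuer_key.sign(credential)

    # Verification by the forum: only the claim and the issuer are
    # revealed, never the holder's identity.
    issuer_pub.verify(signature, credential)   # raises if forged
    print("credential accepted:", credential.decode())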


Ironically, on Facebook itself I am only friends with people I actually know in real life. So, most of the stuff I see in my feed is from them.

I’m only friends with people I know on Facebook, so I mostly see ads on that site. There’s a feed to just see stuff your friends post, but for some reason the site defaults to this awful garbage ad-spam feed (no surprise, really).

Do people still post things on Facebook? I don't know because I haven't used it, ever, but I've heard that Meta has turned it into a platform mostly for passively consuming algorithmically-driven content instead of sharing your day on your News Feed.

The posts from my friends are all politics and babies, which is not really interesting. But I guess I can’t really complain, that’s what’s going on in their lives.

I suspect that the honest outcome will be that platforms where AI content is allowed/encouraged will begin to feel like a video game. If everyone in school is AI-media famous, then no one is. There is most assuredly a market for a game where you are immediately influencer-famous, but it's certainly much smaller than the market for social media.

Cool, does that mean we can go out more and talk to real humans again? Can't wait tbh.

For the tech discussions I'm interested in, burning CPU/GPU cycles for proof of work is a good way to make replies expensive enough that only people who care will post them.
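The classic hashcash construction in a few lines: find a nonce so the hash of your post plus the nonce falls below a target with n leading zero bits. Verification is one hash; minting costs the poster about 2^n hashes on average, and each extra bit doubles the price of a reply.

    import hashlib
    from itertools import count

    def mint(post: bytes, bits: int = 16) -> int:
        """Burn CPU until sha256(post + nonce) is below the target."""
        target = 1 << (256 - bits)
        for nonce in count():
            digest = hashlib.sha256(post + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def check(post: bytes, nonce: int, bits: int = 16) -> bool:
        digest = hashlib.sha256(post + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))

    nonce = mint(b"my reply")         # ~65k hashes for 16 bits
    print(check(b"my reply", nonce))  # True, verified with one hash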

Another option is a web of trust.

It's finally the year of gpg!


If you think about historical parallels like advertising and the industrialisation of entertainment, where the communication is sufficiently human-like to function but deeply insincere and manipulative, I think you'll find that you absolutely can see such a future and how it might turn out.

A lot of people, maybe most, will adapt and accept these conditions, because compared to the constant threat of misery and the precarity of work (or whatever other path to sustenance and housing), it will be very tolerable. Similar to how so-called talk shows flourished, where fake personas pretend to get to know other fake personas they are already very well acquainted with, and so on, while selling slop, anxieties, or something. Like Oprah, the billionaire.



