AI: Markets for Lemons, and the Great Logging Off (fortressofdoors.com)
353 points by willbobaggins on Dec 29, 2022 | 276 comments



Regarding the Third point, I can already see part of this movement.

I've become disenchanted with the quality of the discourse. Specifically as I've entered a more senior stage of my career, I'm looking to mingle with like-minded folks in similar situations.

Thing is, these folks don't usually lurk online, and even if they do, their voices get overwhelmed by the flood of (mostly low quality) content. I'd be happy to pay a fee to become a member of a community with vetted participants.


> I'd be happy to pay a fee to become a member of a community with vetted participants

You better start believing in vetted online communities Miss Turner, you're in one!

It may not be vetting by moderators, but HN's dated frontend definitely acts as a forcefield against many kinds of people.


I feel like Lobsters has better quality discourse nowadays though, due to its much more explicit vetting system. I am not part of that website, yet I love reading the comment section there.


Just looked at lobsters and there are zero comments on almost all posts.


Q.E.D.


Signed up day one with a one char username. Tried to use it a few years later and by then it had been deleted. Understandable I guess, but I'm still salty about it. Just had to get that story off my chest.


What is this? My Google-fu is weak.


lobste.rs is what I can find. Likely tries to make its SEO as bad as possible to avoid discovery, which is kinda funny.


Was lobste.rs always invite only? I thought I remembered having an account a long time ago... but my email isn't registered when I go to reset my PW. Because it's a much smaller community it does suffer from low comment counts on posts. Comments are the best part of HN.


Invite only to write, open to read. But the main trick is tracking the invite tree - if the person that invited you gets banned, you get banned too (or need to appeal to have your “branch” of the invite tree adopted by someone else).


So if jcs somehow gets banned, the whole place shuts down?

I find the idea of the site intriguing yet I'm not sure what I could contribute.


It is largely the same as HN


Except HN is a mix of tech-bro entrepreneur types and developers, whereas the other site is exclusively developers (with an emphasis on open source).


jcs started the site, so him getting banned would be quite a feat indeed!

(although it would be an amusing way to shut the site down if he gets tired of maintaining it...)


Reddit, back when it had its old frontend and a bad mobile experience, was much better than it is now.


Genuinely, what do you see as dated? My only other point of comparison is Reddit so I suspect I’m behind the times.


Mainly the interaction (though the graphical design itself definitely isn't what you'd call modern). No notifications or GUI animations, no thumbnails or header images, being as hard to navigate as possible, scroll position gets reset all the time, you can't even reply to a comment while looking at the rest of the thread because everything is its own separate page, nothing has outlines so text has no clear separation, etc. In a nutshell it feels like a site made in the 90s entirely with HTML forms, without a line of JavaScript.

I mean I do absolutely hate the new reddit redesign and almost exclusively use old.reddit, but the functionality there is still on another level when compared to HN.


> scroll position gets reset all the time

Actually, HN is the only social media site where my browser can consistently keep the scroll position. Animations and endless scrolling are what breaks it.


Do you use old or new interface?


Old, I think? I saw a friend's and his was this stream of pictures and GIFs, whereas mine is entirely titles with the occasional thumbnail.


That would be the “dated” interface, then. I prefer it too.


> their voices get overwhelmed by the flood of (mostly low quality) content

I have the same issue. People place so much trust in their preferred sources of information and don't question anything they read. Because of this I have a hard time discussing anything beyond the most basic things with many friends, since they just repeat talking points and opinions of others.

It's so enlightening when I do find someone who's well read on a subject and can have an actual conversation about it. Even better if they have a different opinion and give you some new bit of information that makes you think about your opinions or perception of knowledge. Or when you can agree to disagree because you have a similar understanding of a subject but different values or preferences. Moments like that can make me really happy sometimes.


I don't think the root cause here is excessive trust in preferred sources. What's actually at play is most people's weakness at perspective-taking. Truly understanding a given issue requires the ability to hold your own perspective and to give equal credence to the other plausible perspectives. Many people do not pause to consider alien perspectives, and are disoriented when confronted with them. Therefore, they don't seek out alien perspectives, to avoid the sense of disorientation that would result; that's why they stick to media that share their perspective, and even punish said media when it offers alternative perspectives.

People have always been this way, but the Internet and social media are really not helping.


The opposite is also true. People who love seeking out alien perspectives have virtually unlimited opportunity to find and commingle with whatever community they can think of online.

I think the optimism of the early internet was due to a dominance of users with that kind of personality. It wasn’t apparent to that type of person that others are not excited by that perspective hopping opportunity and not only would not take advantage of it once being online was more accessible, but would actively use the same technology to further silo and entrench themselves.


I ran a private online bookclub a few years ago, and really tried to filter and vet the people who wanted to join. Ended up with a decently sized group of wonderful people and lots of long-form written discussion—we read good books and talked about good books, it was lovely. I approached the screening/vetting process as if I was hiring. I got some angry emails about the approach, saying it made the group seem "exclusive" and elitist, but it worked out well for the group.

I'll launch the next iteration after the first of the year, this time there might be a fee attached, at least a minimal amount to pay for the SaaS product we'll use to organize and collaborate.


Paying a fee is a great barrier for weeding people out. I was on a pool team for several years, and one season I drew the unlucky lot of being the captain. This mostly involved texting everyone on a weekly basis to make sure they would come out and had the $10 they needed in cash on hand to pay me when they showed up. I hated it and was constantly chasing people to pay: they owed me from last week, they didn't show up, etc.

The next season I told everyone to pay up front. $120, cash. Same price as every week in total, but up front.

Turns out when people paid up front they came more often, and we had no payment issues from there on out.


Can you say more about how you organised and structured it? E.g. what practices, rules, and tools did you use? (And any chance you have a write-up online somewhere?) I've run a few groups like this and love learning how others approach it. Haven't found a great format for writing-based discussion that works as well as in-person / Zoom calls.


Yes, I've been working on a write-up, will post and send it your way when it's finished!


I've long had a theory that services that demand an upfront payment for an ongoing service will find the payment works against their growth goals and remove it.

For example, businesses that involve $$$ reusable delivery containers don't work well because they want to give first-time customers a discount not a surcharge. Whatsapp was ridiculously cheap - something like $1/year - but they still decided that worked against their network effects.

There are some markets where you pay upfront and on an ongoing basis, like buying a printer and ink or a games console and games - but they often subsidise the upfront cost with profit from the ongoing costs.

I'm not sure how well a $5-to-join social network would work for that reason. Sites like metafilter survive - but with five moderators and one coder, they're not going to be replacing twitter, facebook or HN any time soon.


>will find the payment works against their growth goals and remove it

Their mistake is having growth as a goal rather than a happy community.


Quality over quantity. Not everything has to appeal to the masses to make a good business.


> Whatsapp was ridiculously cheap - something like $1/year - but they still decided that worked against their network effects.

My boss, who ran a multimillion-dollar company, took offense at being charged $1/year to use WhatsApp back when it came out and started getting popular, so the fee did have an effect

My eyes practically rolled out of my head when I heard him complain but that really stuck with me


You don't become a millionaire by spending a million dollars ...


For me it's not about a business, it's about a community. The fee is just a barrier to entry to filter participants.

I believe even a nominal amount would keep most people out, and that's by design. Also you don't need huge numbers to build a personally interesting community. Even a few dozen dedicated people can be plenty.


Metafilter is a pretty interesting case -- deliberately limiting growth seems to have worked out pretty well for the site over most of its existence, and growth goals have always been about other qualities than sheer numbers. It doesn't need to compete with anybody else on scale.

I'd say HN is one of the few forums competitive on quality of discourse vs metafilter, but its business model is essentially loss-leader for VC/incubator activities, so it's a bit of an odd duck itself.


> $5-to-join social network

if those sites could convince important/popular people to post there exclusively, they could pull it off; I imagine it could work like Spotify-exclusive podcasts, or some such, since most people don't actually produce content, but consume it.


I feel like twitter is trying to do this? $8/mo for premium members, all the celebrities are there already.


Well, with Twitter, the $8 ($11 on mobile) "premium" members are pretty much all people I would rather avoid.


It's a matter of mental model. I'm thinking more "private members club" than social network.


Sometimes, growth isn't the point (I too point to metafilter). Cash up front is a barrier.


This used to partly happen by age.

Most notably, kids were in school with people their age, adults were at work, etc. 1st grader intellect and emotional interactions partitioned from 5th grade, partitioned from 9th, partitioned from college, partitioned from post-college.

For another example, the "grownups' table" and the "kids' table" at a holiday dinner.

Though, as a kid, I definitely benefited from listening in on the grownups on the Internet.


A key part of the Something Awful forum's success was the $10 registration fee required to create an account.

Discussion can't scale beyond a certain number of active user accounts without a large drop in quality. It's kind of hard to tell what that limit is because every forum has a large number of mostly quiet lurkers who simply want to read and follow the discourse. SA's $10 fee created a barrier to entry, and the culture of bans being regularly handed out to bad posters encouraged people to lurk until they were ready to make good posts. Upvotes and downvotes as popularized by reddit and reddit-like forums served as a kind of substitute for that effect but they don't work well with as many users as those websites have.


If you want a sustainable vetted community, pick a channel, build a funnel, skim the cream.

Like startups, communities are about growth: successful ones grow at an incremental replacement-plus rate, ones that grow explosively fall over under their own weight, and ones that merely sustain a base level just die. A plan for a vetted community is a system of harmonious growth that requires a funnel and a conversion scheme that yields vetted members. I see orgs spinning their wheels and arguing over the merits of individual candidates when their real problem is they have no funnel to simply select the best n% from.


It isn't so much about vetting participants as about moderating the discourse, which is hard to do right and usually relegated to the few people who are willing to do it for free, which leads to bad quality (like Reddit).

AI can be used to moderate discussions, however, if it is trained to remove low-effort content in an unbiased way.


> It isn't so much about vetting participants as about moderating the discourse

Vetting, moderating ... none of them scale well when done manually. The solution I see is that everyone has their own AI filters, customised as they see fit. You can include or exclude specific people and topics, make it as diverse or narrow as you like, allow challenging opinions or not. One of the filters can be to detect AI bots. Don't make the world conform to you, be selective and just skip the bad parts.

I think people are going to trust their own AI tools that are running on their own computers more than they trust other people and especially other AI. We already know we can't face the future onslaught of information with the old methods, we need help. User controlled AI is going to be our first line of defence and our safe space, "a room of one's own" where there is no tracking and thought policing.

With the advent of large language models we already have that, indirectly - the LLM is a synthesis of everything, but we let it generate only conditioned on our intentions. All the viewpoints are in there for us to reach; it depends on us how we relate to them.

AI should be like a cell membrane separating outside from the inside. It should keep the bad stuff out, take the necessary nutrients and nurture the life within.
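To make that concrete, here is a toy sketch (in Python) of what a user-controlled filter could look like. Everything in it is hypothetical: bot_likelihood() and topic() just stand in for whatever locally run classifiers you decide to trust, and the config lives with the user rather than with the platform.

    # Toy sketch, not a real API: a per-user filter over an incoming feed.
    # bot_likelihood() and topic() are placeholders for whatever locally run
    # classifiers the user chooses to trust.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str

    @dataclass
    class FilterConfig:
        blocked_authors: set
        blocked_topics: set
        max_bot_likelihood: float = 0.5  # tolerance for suspected bot content

    def bot_likelihood(post):
        return 0.0  # placeholder: 0.0 = clearly human, 1.0 = clearly bot

    def topic(post):
        return "general"  # placeholder topic classifier

    def my_feed(posts, cfg):
        for post in posts:
            if post.author in cfg.blocked_authors:
                continue
            if topic(post) in cfg.blocked_topics:
                continue
            if bot_likelihood(post) > cfg.max_bot_likelihood:
                continue
            yield post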


> none of them scale well when done manually.

Then maybe we should reduce the scale of the megaphones. Current networks are just unpleasant because it's mass hysteria on a global level. Scale isn't everything.

I think future forum owners will be continuously finetuning their AI filters to fit their community standards.


Yeah, I really want this to happen.

I follow really interesting people for their thoughts on one topic (say, movies) whose tweets about topics like politics I really dislike (not necessarily things I disagree with; usually I just dislike the tone).

Shouldn't be too hard to fine-tune an LLM to do this, definitely something worth a try.


I don't think a discussion between carefully vetted participants needs to be moderated. We go for moderation too quickly these days. Of course if someone goes crazy they should be banned, but in less extreme scenarios a well-chosen group should behave, just like it's not necessary to moderate a social gathering in real life.


but it's exclusionary, which defeats the purpose of the internet as an open forum


What is the underlying purpose of an open forum? Is it the openness itself, or something that openness allows? I think it’s the latter: allowing people with common interests who wouldn’t otherwise connect in real life to interact meaningfully.

If that’s the case, 100.00% open might not be optimal if 99.9% open yields better results for the resulting community.


Different forums can have different purposes. Personally I prefer the public kind, which exposes me to different ideas. Common interests I prefer to discuss in real life.


> the purpose of the internet as an open forum

The purpose of the internet hasn't been to be an open forum since sysadmins were blocking the talk.* hierarchy and kicking entire nodes off of usenet for spam or abuse.


yet here we are


yet here we are and "here" is not an "open forum"


Why does it need to be 'unbiased'? This would work as well:

> AI can be used to moderate discussions, however, if it is trained to remove low-effort [or otherwise undesirable] content?


OK, but I wouldn't participate in such a discussion. I believe most people like to keep their minds open to ideas.


Maybe. Though what's perceived as 'unbiased' differs from person to person.

See https://astralcodexten.substack.com/p/sorry-i-still-think-i-... for more on this topic.


In my experience, the more you vet participants the less you have to moderate.


The third point also made complete sense to me. My family has been using a Discord server to communicate for years now, and at first people thought that was strange. It's social media for people you really know and interact with, not a BS news feed. Think of it as a real network, not a fake network or an entertainment network. And a dozen other services function in the same way.

But I get invited into Discord servers for developers I work with and interests I share, and I think it is getting more and more common. Some people will spend time in fake networks and probably not get a lot of value out of them other than entertainment. And others will gravitate to real networks for communication.


> I'd be happy to pay a fee to become a member of a community with vetted participants.

FreeMasons and premium golf clubs!


Problem is, both can still be filled with less than bright people.

If you have an interest in, say, the math behind various AI techniques, or maybe in digging into the differences between different (but seemingly similar) cancer therapies (think TIL vs. CAR-T), you won't get that at the golf club. You will need some kind of vetted discussion group.


Looks like you want to make a post-grad.


Golf clubs do the job, but I don't think I'll find many like-minded folks in there.


I find a lot of this on Mastodon's branch of the fediverse. It's getting worse with the migration from Twitter, but I'm optimistic that the fact that it isn't Twitter and doesn't want to be Twitter will deter most of the more obnoxious people and help those that make it through blend their uniqueness with what's already there in a healthy and prosocial way.


I've recently been discussing with a friend pricing models aimed at quality curation rather than profit per se. The obvious advantage is that it's a simple, objective, broadly applicable means to distill a baseline of quality. The obvious disadvantage is that it is biased towards those with the most disposable income (and willingness to spend), which, depending on the topic of the forum, may or may not correlate with quality of insight/opinion. Based on what I've seen of /r/lounge, I conclude that curating a community with price mechanisms alone doesn't lead anywhere interesting. But perhaps it can amplify quality on top of some other mechanism.


These closed groups seem to be too inbred, too like minded. For some things that's probably ok and good, but for creative solutions, I find I need to hear it all and see it from many angles.


The trick is to find a broad enough frontier that you still have a mix of discourse, while still excluding people who will tend towards unproductive garbage. For example, my service organization at university: as a whole, the group leaned firmly socialist (selecting for people who were enthused about feeding the homeless tends to do that), but the firm "someone will veto this person's membership nomination" line was somewhere around the middle of the Democratic party: "We need to fix things, but maybe that doesn't require revolution". Thus we ended up with a perfectly wide range of discourse, between the just-left-of-Obama and the anarchists, without including people who were fundamentally at odds with the group's mission of effecting positive change.


I don't know when it happened exactly, but I have noticed the top comments on youtube/instagram/fb are always something completely inane like 3 heart emojis or something. It was always baffling that any kind of meritocratic system that ranked comments would result in something so useless being placed at the top.


You might enjoy LessWrong[1].

1. https://lesswrong.com


LessWrong is, sadly, an example of a forum that got flooded with low quality content. It's been Septembered beyond recognition from what it was in the early 2010s.


Sadly, that figures.

Thank you for bringing me up to date.

Any other sites you'd recommend for thoughtful discussion?


You wrote: <<I can already see part of this movement.>>

We can summarise this comment as "Eternal September"-ism.

Ref: https://en.wikipedia.org/wiki/Eternal_September

Be wary of falling prey to the feeling of: <<It has never been worse.>>

In late high school, watching the daily local television news around 6PM, I began to realise that when local news programmes interviewed local residents, they were always seeking the same response: <<It has never been worse.>>

Examples: Traffic; politics; manners; dress code; respect for elders; price of "milk and bread" (ask anyone in their 90s about it!); Internet discourse; buying/selling used cars; kids driving cars; fashion; fast food; the latest technology (radios, TVs, Internet, video games, mobile phones) ..., etcetera!

My point: Do not assume that things that are different from when you were young(er) are automatically worse. Really, really: Try harder. In 1920, "old people" were complaining about "young people" listening to too much radio -- made them "soft". Repeat for television in 1950. Repeat for Internet in 1990s. Repeat for mobile phones in 2010s. Repeat for video games in 2020s. I see this pattern repeat over and over again, and I chuckle to myself when I see old men complain about it!

As a counterpoint: this website is an amazing counterweight to the idea that "all things are getting worse all the time". I have been here for a few years now, and this community always manages to adapt to changing standards and conditions... and produce inspiring and thoughtful discourse about recent news.

And now for my over-the-top inspiring comment to end this post: Simply put: we, the HN community, are the antithesis of Sturgeon's law, "ninety percent of everything is crap"! (Wiki: https://en.wikipedia.org/wiki/Sturgeon%27s_law)


I am not clever enough to notice a decline, but some have argued that there has been a shift from founders to employees in the HN audience. Please don't take it as an insult, but how do you know that you are clever enough to recognize a decline if there has been one?

What I have noticed in your comment:

>I see this pattern repeat over and over again

That's not a proof that there hasn't been a decline. It could as well be that each new generation is less competent.


> Second, I think people will start to put a premium on accounts being "verified" as genuinely human.

IMO this is long overdue. And I think people should/will eventually even pay for this feature.

> Powerful nation states will be more than eager to assist them in this regard.

I agree - world governments could even consider subsidizing this in some way in order to reduce the number of anonymous botnets sowing discord. The web of trust could be useful to help bolster this - if only in providing the process by which new social networks could eliminate anonymity.

I hear a lot about SSI [1] from a family member who works on this stuff. I'll be honest - I don't know whether it's promising or dystopian. Maybe a little bit of each. But I optimistically think it could be a way to help mitigate some of the problems created by anonymity.

The loss of anonymity won't be a panacea -- for example: people with legitimate accounts bound to their identity today accept bribes to create positive reviews of goods. And the loss of anonymity comes with significant drawbacks: criticizing government and powerful people becomes more difficult.

Having a digital reputation that you can taint could put people on their best behavior. But then again it's a bit dystopian [2, 3].

[1] https://en.wikipedia.org/wiki/Self-sovereign_identity

[2] https://en.wikipedia.org/wiki/Social_Credit_System

[3] https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)


I think as soon as people see "verified humans" posting things they utterly disagree with, the temptation to call those people "bots" will be too great to resist. A substantial portion of internet commenters will sooner call a human verification system into disrepute than admit that some real people really do earnestly disagree with them. They'll spin conspiracy yarns about how "the other side" controls the verification process and allow their own bots to participate.

I think the solution to all of this is smaller tight-knit discussion groups where all the participants know and trust each other. Echo chambers will inevitably form. I don't think there is any way around this.


You're just poisoning the well.

edit: I find it very bizarre and very fashionable to defend the integrity of a verification service that does not exist, and to preemptively dismiss as conspiracy theorists the non-existent people who might suspect that this hypothetical system of verification was corrupt.


Your response is very strange. I have not "defended the integrity of a verification system", nor am I inclined to because I do not support the creation of such verification systems in the first place. And I have no clue what "well" you think I'm poisoning.

My point was not that such verification systems are incorruptible. My point is that such verification systems are pointless because they will inevitably be perceived as corrupt. Whether or not the verification system is actually corrupt makes no difference. Such systems are worthless because people won't trust them. Any system will be accused of being corrupt by "both sides" of any heated debate, and they're not both going to be right. But it doesn't matter if one, the other, or neither is correct. What matters is that the system won't work because people won't trust it.


Sure, people will complain. But at the end of the day, your platform either has it or will actually be drowned out by bots. Just like everyone is complaining about moderators, yet all the "free speech" platforms have them anyway. The idea that bots are doing their part to split society and boost echo chambers sounds reasonable, but I doubt a verification system will have much impact.


>loss of anonymity comes with significant drawbacks: criticizing government and powerful people becomes more difficult.

I don't see how making our lives more dystopian can be a net benefit.

How many of us expose different aspects of ourselves through different identities? Maybe most people are happy to show everyone all aspects of their lives, but I'm not, in part because who I am could get me or the people I meet in trouble, harassed, imprisoned or worse in some places.

Anonymity should remain a fundamental right. We have the right to live and expose different parts of ourselves to different audiences without overlap.

Anonymity is also a counter-power to governments. A government that knows all about its citizens is bound to abuse this power.

We see it in many democracies; elected governments enact laws that restrict freedoms to keep themselves in power (Russia of course, Turkey, Algeria, but also EU countries like Bulgaria). Tools like government-issued ID that become mandatory to express yourself online are doubly dangerous: they make it easy to target those who cause trouble, and they can be used to remove means of expression (who can listen to you if you are denied an ID and there is no freedom of press to expose the abuse?).


Only if the general population _allows_ criticising the government/powerful people to be made difficult, which it currently does.

Society makes things like blackmailing, etc. possible too. In a truly free society we wouldn't judge each other for all the stupid stuff we do now; the majority of people are still extremely prudish/immature sexually and will judge others for anything outside the norm in a heartbeat - hell, even mentioning within-the-norm sex stuff is still somewhat taboo.

As a gay dude involved in a few fun subcultures both in & out of bed it's so refreshing to be able to be so open about sex, if only everyone else was the same, about all topics. Y'all are missing out.


Yep, a functioning society that benefits the most people needs some form of anonymous dissent.


You guys are misunderstanding how things are developing here.

The danger is not that you'll lose anonymity. The danger is that if you use anonymity, no one will listen to you. I'd wager a lot of algorithms will just shadow ban you. This is because you become viewed, effectively, the same way as the AI agent generating brand placement tweets for Bob's Bargain Basement Bar-B-Que and Bitcoin.

No one will know the difference between you and the corporate AIs. Only the verifiable humans will end up with opinions that anyone pays any attention to. It's like everything else will be a banner ad. Whether it's a banner ad or not.


> Only the verifiable humans will end up with opinions that anyone pays any attention to.

This depends on people valuing who is saying something rather than what is being said. I'm not saying you're wrong, but I'm saying that if we're just trading in celebrity rather than ideas, then I'm pretty sure we can also create celebrity AIs people would listen to.


Yeah, we will simply use an AI to summarize the article and comment section for us and skip all the spam and crap.

Jokes aside, now that I think about it, "article summaries" are kind of common on Reddit, but I've never seen such an algorithm used for comments yet. Guess up- and downvotes are still the best we have.


I think that's always been an issue: less weight given to the anonymous.


I've always wondered why governments can't run social networks that give voice/reputation to citizens weighing in on current issues, giving rise to a phenomenon similar to the net in the Ender's Game books that gave voice to Ender's siblings. There exists a lot of untapped potential to harness the passion of armchair pundits, but no pressure to mold this passion into grassroots hierarchical influence that could grant political sway to individuals' expert opinions. Currently the opinionated relatives at holiday dinners & BBQ socials are under little pressure to cultivate their passion beyond a small inner circle of hostages of social etiquette - if only they had an outlet to mold that cunning & passion, to temper it with the satisfaction of having their opinionated views addressed by the big-shot president of the times. In the book, these kids are calling shots under pen names. It's fun to think about.


Even though it is not an official service, in Germany there is Abgeordnetenwatch [0] (translation: representative watch), where citizens can ask parliamentarians directly. Some parliamentarians are great at answering questions, while some never reply and some only let their offices copy & paste standard replies. I wish politicians were forced to make an effort to reply to their sovereign, certain reasonable anti-spam and DoS measures granted.

[0] https://www.abgeordnetenwatch.de/


> But then again it's a bit dystopian

I am having a hard time imagining a scenario that is more dystopian


The Black Mirror episode about this is one of the most dystopian and scary ones for me.


> Having a digital reputation that you can taint could put people on their best behavior.

I've seen some pretty reprehensible behavior from people who put their full name and a photo of their face on Facebook.


Won't bot/troll farms just buy "verified human" accounts?


That's what they are doing already. It's just that the human verification involves solving some captchas etc.


> What happens when most "people" you interact with on the internet are fake? I think people start logging off.

I don't necessarily think that's the case. Remember the old "nobody on the internet knows you're a dog" cartoon? Well, I think that for a lot of things, nobody cares if they're talking to a dog.

People only care about people on the internet being fake in certain contexts. They'll very much care if all of their dating app matches are robots, because the end goal of a dating app is actually meeting someone in the flesh.

But I don't think that people on Nextdoor and Facebook groups particularly care - or would even notice - if they're talking to robots. As far as I can tell from my limited exposure, most people like getting caremad in comments. Even in local groups, it doesn't feel like there's much connection being made between neighbors; an AI complaining about traffic or agreeing/disagreeing with someone's political opinion would scratch the same itch as someone two towns over doing it.


Some of the most popular platforms online are largely anonymous (Reddit and Twitter). Most of the time, you don’t even notice the username of the person responding to you. They might as well be bots, but people don’t care.

Heck, Reddit threads are often so predictable that they might as well be written by bots.

So yeah, most people just want to talk and feel that someone, something is listening to them. People rejecting your idea need to recalibrate their understanding of the depth and breadth of human loneliness.

The original New Yorker cartoonist forgot that people talk to their dogs too.


I don't think they care who it is, but they do care that it is a who. They aren't concerned about it being an AI because that concern hasn't crossed their mind. If it becomes common knowledge that the people responding are bots, I think we will see others begin to react.

I've seen a similar reaction in communities built around enjoying art, even though I find those have less reason to care where the art comes from.


As a member of one of those communities, I definitely have a good reason to care - AI art has flooded some of them, to the point where it's all "noise", and there's not much value in reading things until it dies down.


Does it matter if the person on the other side of the glory hole is a dude? Maybe? Maybe you didn't think that was a possibility? Maybe it being widely known that you are yelling at clouds makes it less satisfying of a scratch?


Ok, that's worse.


There's a lot of "I don't think" in there. Have you actually asked anybody?

I haven't, but my impression and anecdotal data is exactly opposite.


Picking at “I don’t think” feels like point scoring behavior, and generally unwise - we want people to share their confidence level and source right? Dropping it makes your argument harder to nitpick and sound more authoritative at the expense of important information/honesty.

Suggested remediation: just, like, disagree :) this is a self serving recommendation though because I really enjoy seeing people’s differing perspectives


Authors making this argument (which we see often now) really need to sketch out why cryptographic solutions will fail.

The entire thesis rests on this erroneous sentence in the second point: “This can be done in two ways – just move to invite-only silos where you already know everybody, or big platforms where the owners do the vetting for you.”

There are more options. We employ them when we are forced to; when it becomes cheaper than not.

We can sign posts, or sign that somebody is real, or sign that somebody has earned rep, or that somebody has burned their rep… and subjectively score every piece of content that crosses our phone against our social trust graph.

We can rhizomatically scale social networks, can deputize our people in our extended social network to mark content as appropriate for kids or not, or otherwise filter. There’s no specific reason why we cannot grow a kind of social nervous system that has a kind of myelin sheath against noise and spam. It doesn’t have to be specifically only people we’ve shaken hands with or our like 3 closest buddies.
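To sketch what I mean by scoring content against a social trust graph - purely illustrative, not a real protocol; the graph, the names, and the halving rule are all made up:

    # Toy sketch: "how real is this post?" as a function of how far the
    # author's key sits from me in a vouching graph.
    from collections import deque

    def hops(graph, me, author_key):
        # Shortest vouching path from me to the author's key (BFS); None if unreachable.
        seen, queue = {me}, deque([(me, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == author_key:
                return dist
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None

    def trust_score(graph, me, author_key):
        # 1.0 for a direct friend, halving per extra hop, 0.0 with no vouching path.
        dist = hops(graph, me, author_key)
        if dist is None or dist == 0:
            return 0.0
        return 0.5 ** (dist - 1)

    graph = {"alice": ["bob"], "bob": ["carol"]}   # alice vouches for bob, bob for carol
    print(trust_score(graph, "alice", "bob"))      # 1.0
    print(trust_score(graph, "alice", "carol"))    # 0.5
    print(trust_score(graph, "alice", "mallory"))  # 0.0

The decay rule is arbitrary; the point is that the score is computed locally from my own vouching graph rather than handed down by a platform.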


One of the biggest problems in actually using cryptography in the real world is matching keys to identities.

Using cryptography to solve the fake persona problem only works if the key-identity matching problem is solved. It would be great if the fake persona problem was the impetus that managed a solution to the key-identity matching problem. But I have my doubts.

Notably, the key-identity matching problem isn't technical. It's societal. "Just do government-provided keys" is technically easy, but society (rightfully so) is suspicious of this. Other solutions exist, with other trade-offs. SSL certificates have centralization, revocation, and weakest-link problems. PGP keys have spoofing, verification, and usability problems (though I liked Keybase's approach here). European eID is an interesting step, one to watch, though I fear the EU bureaucratic system might make a crucial mis-step. I really like SSI-based approaches, but SSI is mostly about using crypto once the key-identity matching problem has been solved, and less about solving the actual problem.

Some technical aspects still need solutions, and the available ones tend to be unacceptable: handling key revocation, key theft, key loss (as in forgetting), and key duplication.


How about we let people generate their own keys, then use those keys to make identity claims, which people can generate on their own or which can be generated by a third party. That gives multiple options to bind identities to arbitrary social media accounts etc. without needing some monolithic root of trust.

I also really like the idea of using the keys to hold some amount of value, such that if the keys ever get leaked, there is basically a built in bug bounty to alert the key holder (since the key thief has the option to take all the money). This also gives users the incentive to manage their keys in a sane way.

Social key management schemes are also super interesting, and will likely be a part of future key management schemes. That is, basically, allowing some set of friends and family to re-roll or revoke identities that have been lost or stolen.

Slowly but surely, I think this future is coming. Lots of good people are coming at it from different angles, but basically all converging on the same general concepts.
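As a minimal illustration of the "generate your own key and make identity claims" part, here is a sketch using the Python `cryptography` package; the claim format is made up for illustration, and real SSI systems define their own schemas and attestation flows:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # generated and held by the user
    public_key = private_key.public_key()

    # "The holder of this key claims to control this account." (hypothetical format)
    claim = json.dumps({"controls": "https://example.social/@alice"}).encode()
    signature = private_key.sign(claim)

    # Anyone with the public key (via a friend, or a third-party attestation)
    # can check that the claim really came from the key holder.
    try:
        public_key.verify(signature, claim)
        print("claim checks out")
    except InvalidSignature:
        print("forged claim")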


> How about we let people generate their own keys, then use those keys to make identity claims, which people can generate on their own or which can be generated by a third party. That gives multiple options to bind identities to arbitrary social media accounts etc. without needing some monolithic root of trust.

You hit the nail on the head. Matching keys to real people can be done in person for direct friends, then through a web of trust for indirect friends. For accounts/keys/personas you find on the internet where you don't have a chain of friends, you can rely on "trusted" third-party attestation ("holder of key 0xdeadbeef earned a degree from this university") - you may never know with complete certainty if that's a real human, a bot, or an alt account for someone you already know, and that's totally fine.

The "problem" of matching keys 1-to-1 with identities for everyone globally (brought up by grandparent post) is a massive red herring that doesn't need to be "solved".


The problem I brought up in the grandparent post wasn't (meant to be) about a 1-to-1 mapping. It was about asking "who owns this key".

Web of trust sounds nice, but it hasn't caught on. I would say that's because it has trouble going beyond one hop of trust if you actually consider adversaries. In-person physical confirmation works, though re-keying is a major hassle.

I do like the idea of a 'web of trust is good enough for declaring you are a person'. If you get a vouching system with a recursive revocation system, that might work well enough for establishing you are a person (though not well enough for establishing which person you are).

Trusted third parties have problems. They centralize power, either in the state, or in some non-accountable organization.


Because using technology to solve human problems rarely/never works. [1] was originally written about spam and has been borne out as mostly correct. Explain how replacing spam with "AI-generated spam" changes anything? You can try to fight this stuff but it will look more like AI to detect AI (similar to our current anti-spam tech). No reason to believe cryptography has some kind of magic bullet here, as it's an unrelated problem domain. And the person claiming that you get kicked off and that prevents you from coming back ignores that a) we haven't solved being able to tie disparate online personas to a unique offline one (despite Facebook ostensibly trying really hard), and b) there are all sorts of secondary problems that pop up when you try to do that (e.g. it ignores the concept of learning from your mistakes and redemption, key things that happen frequently with the young or anyone else testing boundaries).

[1] https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...


> Because using technology to solve human problems rarely/never works.

You're badly misunderstanding the parent post - it is not proposing a technological solution to a human problem, but a technological enforcement of a fundamentally human solution:

> subjectively score every piece of content that crosses our phone against our social trust graph...can deputize our people in our extended social network to mark content as appropriate for kids or not, or otherwise filter

This is a social web of trust, where real people do the ranking and trust assignments - the cryptography and other technology just keeps track of bookkeeping.


Given that GPT is already difficult to distinguish from a person who’s confidently wrong, how does this web of trust system solve the problem?

The belief that anything will "solve" this seems naive when there's 20+ years of proof of this being an "unsolvable" problem despite repeated technological, social, and legislative attempts. There might be a new normal established with new battlegrounds drawn and we learn to "live" with it, but I'm willing to bet non-trivial sums of money against there being any true "solution" here.


Yes the improving strength of GPT is magnifying the problem but is unrelated to the solution. The solution space steps outside of examining the content of the message for truth. The solution is to sign the messages using a social trust graph; and scoring posts based on the social trust distance.

It's not a very compelling argument for me that a (very short) 20 year history of failing to solve this problem means that it cannot be solved.

I'm willing to bet you $1000.00 that this will be completely solved in 10 years. In 10 years we will know if content we consume is "fake" or "real". It may still be offensive, and harmful to gullible people - but it will all be scored as to how real it is. Real will be defined as the likelihood that the content comes from a real human being type person, as agreed upon by other persons in-between you and that person. Content one hop away from you, from a friend, will have a score of 100% real, and content many hops away will have lower score. You will probably start your day by sorting the content you consume by the likelihood that it is real.

I did actually work on PGP - and I'm willing to concede it didn't succeed. But DNS works and bitcoin works (technically). So we do use various flavors of cryptographic trust to make sure that actions in a network are "real" versus "forged".

And yes - it's true that bots can creep into a social trust graph... so yeah, it may take effort to keep pruning the weeds in the garden, in a sense.

Note in some ways I'm not really trying to argue FOR crypto per se - I'm just saying that the OP should at least critique crypto if they want to make the thesis they are making. And I'm arguing that it is a big omission to gloss over the utility of crypto and the argument will be hard to make that crypto is not a significantly powerful modifier to the original thesis of the OP.


You clearly don't actually know anything about webs of trust. I encourage you to read up on them: https://en.wikipedia.org/wiki/Web_of_trust

It's obvious to anyone with a passing familiarity of WoTs that you seed your web with people you know in real life. GPT is not "difficult to distinguish from a person who’s confidently wrong" in real life.


Are you going to be accepting direct confirmations only or indirect as well?

If you are only accepting direct confirmations, this means you are only going to talk to people who you meet in person. This is totally fine and will work, but then you don't need any new tech -- just ask for their email / phone / nickname on your favorite social site. Or make a private forum (or a Signal/Telegram/Whatsapp group) and invite them there.

If you are accepting indirect confirmation, then once the network grows big enough, there will be bots. Maybe some of your friends meet Greg, director of marketing for Widgets Inc., and correctly confirm him as real human, and then Greg will confirm an army of GPT telemarketer bots as "real humans" so they can do the sales and earn Greg a bonus. Or maybe your good friend gets malware on their computer and their key is used to confirm a bunch of keys without their knowledge.


It may be wise to point to something other than the Wikipedia page before claiming I don't know anything about an entire topic.

You seem to be intentionally or otherwise completely missing what I'm saying. PGP just establishes ownership of a private key. It doesn't say anything about that person then choosing to sign the output of GPT or giving GPT that private key to do whatever with. And GPT can mimic whatever writing style you give it. It's not hard to imagine giving it various writing samples of yours to learn from and imitate. So please explain how a web of trust solves anything for that, aside from trying to keep track of which person in your personal network is a spam vector - and there are super-connectors with thousands or tens of thousands of real-world contacts, so logistically that's not realistic to manage since most people aren't cryptography nerds.

There's also a bigger fundamental problem with applying a web of trust here. Trust doesn't work that way. If I trust person A and they trust person B, trust isn't a transitive property, so in reality there's nothing we can say about my trust in B. Trust also isn't binary: if I trust person A about specific science topics, that doesn't mean that trust extends to other topics. Trust also isn't static and sticky, whereas it is generally treated as such in computer systems, where trust needs to be scored and revoked automatically somehow (and then we're back to an AI war to detect abuse and a better GPT to evade detection). This also ignores that trying to model human trust webs with a CS model that doesn't work anything like them isn't a good recipe for success. And human trust webs have massive trust and scaling problems of their own (cough Theranos, WeWork, FTX, Madoff, etc.). Notice how web of trust always sticks to basic cryptographic primitives, which are easy to write papers about and solve academically, but defining what trust actually means or how a web of trust handles AI content is not a solved problem by any means.

Obviously PGP has been around forever and AI is a bit newer, so maybe there will be interesting work coming out of this space at some point. AFAIK today web of trust buys you bupkis in terms of fighting GPT spam. I would recommend reading up on any number of articles that discuss why PGP and signing parties failed. It's not purely a UX issue. The bigger problem is that even in a "trusted" system, fraud arises spontaneously because it's a prisoner's dilemma problem - there's a material advantage to perpetrating the fraud and an even better one to helping perpetrate it (with a much harder and longer political process to revoke that trust).

As a more concrete practical demonstration of this failure, consider certificate authorities, which are assumed to be "trusted core signatories" in a PKI system. A PKI system is actually the same as a "web of trust"; it's just that I delegate verification to a third party. As cryptocurrencies should have shown, people still prefer having a virtual account in a bank to manage those funds / lower fees. Similarly, people will outsource the complexity of validating identities (CAs signing certs for websites). CAs have repeatedly abused this to the point where, AFAIK, the security community generally acknowledges that CAs are generally worthless - even the "good ones" struggle to do verification at scale, and there are so many CAs in typical lists that it's basically guaranteed that there are malicious actors. And we know decentralization doesn't actually work for end users because it's too complicated a mental model. People want a named intermediary that they delegate responsibility to. That's why most people defer their CA validation to browsers and operating systems. PGP would work similarly, so now you've got people delegating key trust to Apple, Microsoft, Google, Signal, etc., and nerds who use open-source verified key managers and maintain their own infra to manage these lists. But that's not a representative sample of what end users will accept at scale. So you're back to centralized control, which will be better than the status quo since OSes and browsers realistically are more resistant to handing out broken signatures. And of course maybe better algorithms and methods will be developed to solve these shortcomings.

But a lot of these issues have existed and been documented for a long, long time, independent of the vague idea of using it as a GPT detector/blocker. I've been around the Bay Area for 10+ years and I remember having friends who thought this would take off any day now and worked really hard to make it happen, hosting signing parties and whatnot. It didn't, and I was pretty confident it was a pure nerd activity that wouldn't have any impact in its current form (regardless of the UX challenges - the problems are much more fundamental and worse). Web of trust is seriously hard even in its simplest possible form, which is PGP, and that's failing miserably despite being around for a very long time.

Would you agree that it's on you to provide some supporting evidence for the claim that:

A) I don’t know what I’m talking about

B) something more than hand-waving "sprinkle some decentralized cryptography here" exists - i.e. actually explain how you solve the human problems that are so important here, and why PGP has largely failed but is suddenly going to find a second life in GPT prevention.


The same problem we have with other networks of trust: Signing works alright if you already know the people you expect messages from (but in that case, it's also no better than the invite-only social networks the OP talked about).

However, the real problem is getting to know new people or vetting messages from people you don't know. In the future the OP sketched, you can never be sure that an interesting new person isn't actually a bot. Knowing the public key of that person won't solve that problem.


So, people start signing their AI-generated messages. What have we gained?


Indeed. SPF and DKIM were supposed to reduce spam by ensuring that every sender was verified. Now, we have more spam than ever, and all the senders are verified (on short-lived garbage domains that are not yet on blocklists). The only DKIM failures I ever see are on legitimate mail from badly set-up lists.


DKIM and SPF are more important for phishing than for spam. Without them, it’s very easy to spoof senders.


It is gonna happen for sure. People will leverage powerful tools and claim it is their own voice. Me shrugs.

I more want to at least have that individual emitter be accountable for what they post; to establish continuity. I want to know that that emitter is, say, 3 hops from me, is trusted by 12 friends in between, and has a general trust score of, say, 7/10 as an overall rating by my extended trust graph.

It is less that I want people to say good things, or be truthful or whatnot - I just want to know that they are real, that there is a human behind it, that that human has an opinion of some kind. The thing is that there are a ton of sock puppets that are not real - it's more about reducing the noise / spam rather than a perfect solution.


You can ban them and they'll stay banned. Today they'll come back or switch to one of the other dozen accounts they post crap on. The counter argument to that is that places like Facebook are mostly not pseudonymous and a lot of crap is posted there.


And when you get banned because someone stole your key - that's going to be awesome.


There are going to be a few cases where keys are lost or stolen. It may be possible to build multi-sig wallets that allow you to migrate an identity to a new key. What we're looking for is some kind of statistical means of trying to reduce bad actors. Even if there was no recourse for you and you were totally screwed, I'm not sure it totally invalidates the concept of having a key or a mechanism for trying to remove bad actors from social discourse. You could trip and die because you wore shoes. You could get locked out of your house... bad stuff does happen - it doesn't mean we shouldn't wear shoes or have keys.
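A toy sketch of what that kind of social key migration could look like, just to show the shape of it - a real system would use threshold signatures or a multi-sig contract, not a Python set, and all names here are made up:

    GUARDIANS = {"bob", "carol", "dave", "erin"}   # chosen when the identity was created
    THRESHOLD = 3                                  # approvals needed to rotate the key

    def rotate_key(approvals, old_key, new_key):
        # The identity migrates only if enough pre-chosen guardians approve.
        valid = approvals & GUARDIANS
        if len(valid) >= THRESHOLD:
            print("identity migrated from", old_key, "to", new_key)
            return new_key
        print("only", len(valid), "of", THRESHOLD, "guardians approved; keeping", old_key)
        return old_key

    # Example: a lost phone, and three guardians vouch for the new key.
    current = rotate_key({"bob", "carol", "erin"}, "key-A", "key-B")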


AI generated content will likely be higher quality than what we have today. AI accounts will spend time building reputation with superior content and then burn some on an ad or spam campaign, but they’ll still have a better reputation than most people.


Ban them on what grounds?


Context: I've been working in the cryptocurrency space for a decade.

One of the most interesting technologies the crypto space is working on are the SSI (self-sovereign identities).

https://en.wikipedia.org/wiki/Self-sovereign_identity

Even if the whole idea of cryptocurrencies is found to be a dead-end, the fact that it invigorated the research and development of SSI will change some important things about how we operate online.

There exist prototypes of tech that allow you to prove you are indeed a unique human being online [1] while revealing nothing else about your identity. Most importantly, this tech is not owned or controlled by any FAANG or government; it's an open protocol, just like email.

I have listened to a podcast with an expert researcher in AI, and I remember him saying that he predicts some form of cryptographic identity will arise in order to help deal with the bot problem

[1] https://worldcoin.org/ (note, I don't work for them, and have no idea if this will be the tech that finally breaks out, I just think they're the furthest along of any of their competitors)


You can employ those techniques, but real people will get blocked as spam. And the better AI gets at evading, the closer to the bone you will have to cut. Then what? AI is interacting with your algorithm to silence voices.


Even with some defects or imperfections anything is better than what we have now - which is basically nothing.

I think the way I'd think about this is to imagine say a small community, such as a town of say 5000 people or so. While you cannot know each person individually, you can know of people by reputation. People do earn rep over time, and they can burn rep. It is true that some people will be unfairly downscored, or unfairly upscored - but I'm not really trying to argue for those fine grained situations. What I'm trying to argue for is simply distinguishing the very bad actors acting out of pure malice from injecting fake news, media and 'yellow journalism' into human conversations.

True some real people will be downscored (I prefer to think of this as downscoring bad actors rather than 'blocking'). And true an AI can 'sound very human' - but an AI or a bad actor will struggle to build up a reputation over time. An AI can't shake hands with you, it is harder for it to prove it is human... Other bad actors will presumably burn their reputations if they spit out a series of offensive, misleading, false, inflammatory or toxic posts...

Note I am not necessarily advocating for crypto per se as a way to establish social trust graphs (a la PGP or, say, Keybase), but I am arguing that there are other options that the OP did not raise. I want to see a wider discussion around ways to filter malicious media beyond either "centralized systems" or "small social clubs". I'm not necessarily saying it has to be a cryptographic solution... but I do think there are more ways to have what we want.


This is why I quit social media years ago. There is no difference between a stream of posts from strangers and a stream of posts from bots. What I care about is interaction with friends, and FB wouldn't give me that, or at least not as effectively as WhatsApp and Telegram. So I ended up with WhatsApp groups and Telegram channels where I know most of the people first hand, or, more rarely, am one hop away from them. The only place where I regularly interact with strangers is HN.


So does this mean you wouldn’t qualify HN as social media?


Reddit and HN are more in the subgroup of what you could call antisocial media.


Not in the way Facebook or TikTok are social media. HN's dynamics make it closer to forums from last century or Usenet IMHO. Strictly speaking, almost everything is social media, even WhatsApp and Telegram, which I use daily.


For me this feels spot-on. But I wanted to comment on one thing:

> Fourth, we'll see a resurgence and even fetishization of explicitly "offline" culture, where the "Great Logging Off" becomes literal

I actually think the author undersold this a bit, because even once you've siloed into social buckets with people you already know - as long as it's digital communication, you can't quite be sure it's really them. Even if you're voice-chatting while playing a game together, even if you're video-chatting, we'll reach a point where that could all be faked.

You could rely on cryptography to make sure someone is who they say they are, but that requires extra hoops, a basic understanding of how to use it, and a sense of what is and isn't trustworthy. Most people probably won't bother.

So at some point only physical contact will be fully reliable.

(Until, I guess, Musk's brain-computer interface takes off. Then nothing is real.)


With cryptographic signatures I could at least verify that something stems from the same account that I previously agreed with, but this would require people to be able to manage their own private keys.

Actually, nothing stops people from signing their own comments today, and it is sometimes done.

With e2ee messengers I feel pretty confident that a message comes from the right / real person after I've verified their public key.
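
As a rough illustration, a minimal sketch of signing a comment, assuming the Python "cryptography" package and Ed25519 keys (the keypair handling and the comment text are invented):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The commenter generates a keypair once and publishes the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    comment = b"I still think reputation systems beat hard identity checks."
    signature = private_key.sign(comment)

    # Anyone holding the previously published public key can check that this
    # comment comes from the same keyholder as earlier ones.
    try:
        public_key.verify(signature, comment)
        print("signature checks out - same keyholder as before")
    except InvalidSignature:
        print("signature does not match - treat as a different author")

Of course, this only proves continuity of the same keyholder, not that a human typed the message.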


It can't provide confidence that the message comes from the right/real person, because even without any breach of secrets (which happens even to competent people and organizations), all that gives you is confidence that the message comes from someone authorized by that right/real person. It could be another person, or it could be data generated by an automated system to which they gave the credentials for whatever reason. Historically, rich people used personal secretaries to write all kinds of responses, including personal ones ("I'm so grateful for your invitation to visit, let's ..." didn't necessarily mean that the actual person even read your invitation), and if an "artificial secretary" becomes good enough, people will use it in the future.


> all that gives you is confidence that the message comes from someone authorized by that right/real person.

Which is good enough for many applications, I think. With friends and family, I am pretty confident that none of them deploy a personal assistant to answer my encrypted messages. As opposed to a messenger where the service provider can inject ads into the messages.

How do I know that a person I speak to IRL is really saying what they think, or even really is who they claim to be, if I don't know them well? Some uncertainty always remains.


> but this would require people to be able to manage their own private keys

A few generations ago, social networking would have seemed infeasible because it would require widespread literacy (along with many other reasons, of course). Widespread private key management doesn't seem that infeasible to me.


I agree. Key management can be learned (but still has to be learned). I think that cryptocurrency made the biggest impact in this area so far.


How does one really verify someone’s public key in this situation? E.g. a fake account would presumably be able to generate a fake website of themselves that looks at least semi-legit, and use it to post their pubkey. What would give us confidence the key comes from a legit person?


> So at some point only physical contact will be fully reliable

But then there are twins, so... The "problem" pre-existed.


It doesn't need to be perfect, it just needs to be good enough. In real life, the likelihood of twins perfectly emulating each other, and then using that to deceive others for anything but playful intent is low enough to be acceptable. If a solution is good enough that exceptional corner cases are rare and typically harmless, then it's good enough for adoption.


I can imagine a future where it's quite easy to fake real people. By gathering information, pictures and maybe even videos of a person, you might be able to deepfake a credible video of them. The model might know their address and use pictures and Street View material to have them "record" a video from their house.

That means that a clever script would be able to figure out who a person's real friends are (that info is easily obtained already today), and create personalised videos from said friends. In such a scenario it might not be the Great Logging Off, but it would certainly be a situation where you think "I had breakfast with my friend and he said X to my face, so I can believe it", while anything online can be considered noise.


At a previous company, we had a standard "About" page with our names and photos, didn't think anything of it. Fast forward a few years and we discovered a bunch of Russian "businesses" who had used our names and photos on their About page. They were selling all sorts of bogus apps and service contracts. We reached out and sent a kind message and asked them to remove our photos, they responded (and apologized!) and said they would remove the content, but of course that never happened.

To this day, when people search for me, they find an old photo and my name associated with a fraudulent company selling shit products and services.

On a related note, a few of my friends have taken down their long standing personal sites and blogs for this very reason. A few bad actors took their photos, their About pages, a bunch of their blog posts, and used them as if they were their own—quoting them, tweeting them, referencing them in "their work", etc. One of them got wrapped up in an elaborate hiring scam where someone used his photo, email address, and employment history from LinkedIn and got all the way through an interview process.

The whole situation was so jarring that he wiped everything—almost a decade of writing, his LinkedIn profile, his Twitter account, all of it.


Certainly this is within the reach of the tech.

What I'm afraid of is that this will dull people, because the computer models would not have the fidelity of a real person (simply because only a small portion of a human's thoughts are expressed, and even less is recorded). Many people will prefer to interact with this shallow model of humans instead of real ones, in a similar manner to porn replacing real sex.

It will create armies of people who have no real skills in working with, or even interacting with, humans. Representing people as avatars and a nickname already created a huge issue, with perfectly nice people forgetting that they are interacting with real people online and turning into huge jerks. I can imagine how horrible it could become when people are completely disconnected from reality.



Hard to imagine that there couldn't be any kind of trusted (perhaps encrypted, but not necessarily) communication online.

Text conversations are trivial to fake, and WhatsApp could trivially MITM a handful of conversations and tell me my friend said something different and I'd never know. But they have an incentive not to do this at a large scale, because there are benefits of running a worldwide trusted messaging service, and they would fear people would move to alternatives when millions of messages started being adulterated.


That's a neat thing I kinda want to do right before dying: just take all of the chat data I can get my hands on, retrain a chatbot on it, and have it become the final version of me that can exist for as long as people want it around (aka pay for the server and GPU instance lol).


This article gets it. ChatGPT heralds the end of free user comments, forums and social media. Because when everything and everyone online is fake, why even log on to Insta or FB or Twitter? The internet will still be useful, but Web 2.0 will go the way of the dodo unless it adapts, and the easiest way to cut down on spam is to charge users money for access. Spam campaigns are a lot less lucrative when it costs money to participate. Hell, charge $0.05 per email. The amount of spam would decrease significantly.

Side note, I’ve long thought that identity verification is the _one_ area that a public blockchain could actually add legitimate value to. But I’ve seen no efforts in that direction to date.


The volumes of email spam are absolutely staggering, but its detection and filtering, with minuscule amounts of false negatives, are nearly perfect. It has been largely a solved problem for a decade, to which we give very little credit. Personally, it blows my mind every time I'm reminded of it.


Spam filters are indeed amazing, but I do get a non-trivial number of false positives these days.

Spam is a moving target.


And the movement of spam just accelerated with ChatGPT.


I have a feeling Bayesian spam filters will be highly effective at picking out patterns in anything generated by a particular model. The arms race will move from spammers coming up with new templates to spammers coming up with new models.
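
As a toy illustration, a minimal version of such a filter, assuming scikit-learn and a handful of invented training messages (whether it stays effective against fine-tuned models is exactly the arms race in question):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Tiny, invented training set: 1 = spam / model-generated, 0 = legitimate.
    messages = [
        "As an AI language model, I would be happy to assist you further",
        "Click here for an exclusive limited-time offer on this product",
        "Here's the stack trace, it fails on line 42 after the last deploy",
        "Can we move the standup to 10am? I have a dentist appointment",
    ]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    classifier = MultinomialNB()
    classifier.fit(vectorizer.fit_transform(messages), labels)

    new_message = ["I would be happy to assist you with this exclusive offer"]
    prediction = classifier.predict(vectorizer.transform(new_message))
    print("spam" if prediction[0] == 1 else "ham")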


There is as much spam as there is email. Did people quit email?


There is more than twice as much spam as there is legitimate email (source: ran a corporate email server). However, so far most spam is automatically detected.

In the near future, it's likely that this will no longer be the case, and the moment most of your email is spam you are going to stop trusting everything you receive.

Even right now, inside corporations people already (should?) distrust and double-check everything that comes from another domain.


Obviously people didn't quit email. But I also don't "engage" with email with anything near the intensity that I used to. 20 years ago…? One of the hottest romcoms ever was built around our feverish engagement with email: "You've Got Mail!" Can you imagine that being a pitch for a romcom next year? They'd laugh you out of the room. Because while people still use email, they don't engage with it the way they used to.

I think this sense of disengagement is what the author is trying to metaphorically capture with the idea of “logging off”. It might just be my family and peer groups, but I do sense this in general. The internet and its various “social” constructs feel more and more like they’ve crossed that line in dating relationships where the other person becomes “clingy/needy” and it starts to grow wearisome and cringy. I think that causes people to quietly disengage.


I can't help but wonder how we will ensure official government/legal->user communication is legitimate in the face of software that can dodge spam filters at a whim. Is it back to letters for us?


A lot of people did. Most business communication happens in virtual meetings or on platforms like Slack and Teams. Most informal communication occurs through voice calls and instant messaging. All that used to be email around Y2K, as far as I remember.


Yes, they did. Both my work and personal email inboxes are pretty much exclusively various kinds of automated messages (order details, newsletters, announcements from various tools), I get essentially zero personal communication over it anymore; that all happens on other platforms. Email has more or less gone the way of regular mail: still used for various businesses to get in touch with you, but for personal communication it is dead.


No, because spam filters are pretty good. But that may change, too.


Will something like the precursor to Bitcoin, meant to combat spam -- Hashcash -- become important again?

The enabling technology behind micropayments is, ironically, looking a lot more like a CBDC than Bitcoin (or other cryptos), for scalability reasons.
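
For reference, the Hashcash idea in miniature: the sender burns CPU finding a partial hash preimage before a message is accepted, while verification stays cheap. A rough sketch in Python (real Hashcash uses a dated, SHA-1-based stamp format; this only shows the gist):

    import hashlib
    from itertools import count

    DIFFICULTY = 20  # require this many leading zero bits in the hash

    def mint_stamp(resource):
        # Sender: grind nonces until SHA-256("resource:nonce") has DIFFICULTY
        # leading zero bits. Costly to produce, cheap to verify.
        for nonce in count():
            stamp = f"{resource}:{nonce}"
            digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
            if digest >> (256 - DIFFICULTY) == 0:
                return stamp

    def verify_stamp(stamp, resource):
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        return stamp.startswith(resource + ":") and digest >> (256 - DIFFICULTY) == 0

    stamp = mint_stamp("bob@example.com")          # ~a million hashes on average
    print(verify_stamp(stamp, "bob@example.com"))  # True, verified with one hash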


> the easiest way to cut down on spam is to charge users money for access

Which realistically means everyone will log off and touch grass instead of paying to interact with other verified humans.


> but Web 2.0 will go the way of the dodo

What does Web 2.0 have to do with any of this?


It’s a reference to user-generated content (Web 2.0) and saying it will become AI-generated content.

I think we’ve already been seeing it, just done badly. Most content marketing is somewhat automated. The tools (ChatGPT) have made a leap, and the internet will be riddled with it and its brethren.

What’s interesting though is that future AIs will train on AI generated content which trained on AI generated… etc. Possibly missing legitimate feedback?

AI is inevitably unleashed on AI in a game of cat and mouse driven by that unholy god “marketing“ thus eventually rendering the internet useless in a race to the bottom of legitimate human interaction.


In many ways it is already happening with content farms and sock puppets, but also with TikTok-style feeds where you don't follow users but mindlessly watch whatever the algorithm provides for you.


Web 2.0 often refers to the "second wave" of consumer-facing internet sites that allowed users to leave comments and interact with each other, as opposed to the Web 1.0 of static HTML sites.


The internet is not limited to social networks. The "market for lemons" has existed for decades: Google is a market for lemons where people buy from SEO spam, and Google Assistant, Siri and Amazon are too.

Logging off is a pipe dream for some people, even though they know that humans are informational creatures and that the "offline world" has never lived up to the desirable one since the time books were invented.

Doominess and gloominess is the real disease of our times


> Doominess and gloominess is the real disease of our times

I see SIGNIFICANTLY more homeless people in the two cities I frequent than at any other time in my life; there is a snow-covered tent city half a kilometer from the glass monstrosity Google has built here and mostly doesn't occupy. Fascism is on the rise globally. A third world war is possibly percolating in Europe. Google is soaking up the bulk of the revenue that used to go to publishers. Amazon has destroyed at least half of all bookstores.

But sure, I guess gloominess is the issue. Does all of the above make you happy somehow?


Things are undeniably better than they ever were before, especially for those outside the US and Western Europe. We need to work to continue to improve them, but we are in no way going in the wrong direction outside of a few small areas, such as housing access, as you point out.


I’m outside the US.


I travel a decent bit. Not sure where you are, but almost without exception things are much, much better than ever before outside the US.


The internet is full of lemons, not only social media, SEO-influenced services, and services where money buys you reach.

If you take any popular medical fact (for example, about diets), you will find countless published academic articles supporting and against it. This extends into other fields like physics, sociology, and politics too. But the contradictions there (while present) are not as overtly visible.

Many popular news outlets are leaning towards reporting in a "flavor" their audience expects and mixing opinions into their reporting.

By some studies, 80%+ of internet users admit to being duped at least once by fake news or misinformation online in 2022. That's the misinformation they spotted. 50%+ of Americans say they read fake news online regularly.

As for logging off, according to Reuters Institute, 41% of Americans actively avoided news in 2022. And anecdotally, my friends and I have withdrawn from social media like Reddit, YouTube, Facebook, Instagram, Twitter, and so on significantly this year. It definitely feels like a lot of information online is misinformation, either synthesized intentionally or "broken-telephoned" into fantasy by mainstream media.

I do not know how to deal with this personally but disengage from most of the online world. The signal-to-noise ratio of reliable information in many regions of the internet is meager. I mainly read Hacker News (which also sometimes features contradictory research or news), some academic journals, Wikipedia, some forums in my professional field, and some RSS news sources. Everything else on the internet has become too awash with contradictions and misinformation.

I never thought I'd have to retreat so far from mainstream media and social media. Mass logging off is going to impact these areas of the internet seriously.

I am scared of what AI language models will do to professional blogs, news, and academia. Editors and peer-reviewers there are already overwhelmed with unreliable information.


Most probably it will solve itself. AltaVista was a useful search engine until it was not; then it was replaced by Google. Google was a useful search engine and it's getting primed for a replacement. Usually things do not pan out as utopia or dystopia but as some human middle ground of kinda shitty balance :)


I am interested to see which social media platforms double down on welcoming AI-generated content and catering to the remaining users that prefer addictive content to social connection.

This is speculative, but we may see a very strong pivot from the "social" aspect to the addictive-content aspect in publicly listed social media companies in order to continue to appease shareholders, which could possibly accelerate this potential reemphasis on real-world communication and connection.


Yep. Customizable TV is the future


Can you elaborate on what you mean by doom and gloom being the real problem? Are you saying it’s more of a perception issue that can be changed by the user?


No, I mean things like this article: that AI is so bad that people will want to run away. As far as I'm concerned, AI is a positive and desirable change in the world.


It still seems derivative, not creative. It is damn good at it and will only get better, but it is still derivative, from my limited experience playing with it.


>> Doominess and gloominess is the real disease of our times

Seems more like extremely wealthy sociopaths & psychopaths that thrive in a capitalistic system are. Don’t think people would be so doomy & gloomy if it weren’t for those with the means to do so trying to shape society into their dystopian dreams.

Two examples if needed:

Thiel & Davos family


What about society being shaped in the dreams of other wealthy people, like Klaus Schwab or George Soros? Is that OK?


They are all WEF cronies.


Have a look inside Mao's China or Soviet Russia to cure you of the notion that capitalism is special here.

Doom and gloom have been around for longer than a money based economy, too.


My comment is based on his specific words, "disease of our times", which are unrelated to the timeframes you've mentioned.


They arrive at the same endpoint through different means. Too much concentrated power and wealth.


It's not at all the same endpoint.


Ah yes, whenever someone complains about unchecked capitalism, there is always someone to bring up Soviet Russia.


It's quicker to make this argument by contrast, than to make the longer argument that capitalism is great in absolute terms.


People go to social media sites for engagement. If AI bots provide good interaction, most people will be happy and won't care if they are chatting with a bot. Hell, the best social site would be one in which I could create a couple of AI companions with whom I could chat about things that interest me that day. The companions would scour the Internet for related news and topics, bring them to my attention, and we'd chat. Kinda like HN but customizable.


Reading this comment (which is correct, I think) gives me the urgent desire to log off HN.


To piggyback off of this, I predict we'll be hearing (not seeing) the phrase "the internet is fake" a lot more.


The "Dead Internet Theory" will be proven true, if it hasn't been already.


“The internet is a place where absolutely nothing happens.” - Strongbad

Seems to ring true more and more :)


This was true when written, and then reality inverted itself in 2015/2016 when internet culture became mainstream culture.


The purpose of communication is to distribute and filter information. We don't actually need to communicate. We do it to distribute and filter information. It is currently a very inefficient process. If AI does that for us in the future, all is good and we end up being more efficient. No more time wasted on reading & writing comments on the Internet.

AI should be able to pick the most relevant information for you and present it in the most compressed form.


If it's a personal AI. But I wouldn't trust a centralized, for-profit "AI". Journalists at least (had) ethics and norms that can make me somewhat trust them. A for-profit AI filtering information for me? It's asking for trouble.


Yeah, AI must be personal and aligned to each individual rather than centralized and aligned to collective / group of high priests / profit-maximizers.


>The purpose of communication is to distribute and filter information.

And to connect with other people, feel catharsis, feel less alone, loved, engaged, interested, and the other things that make us human. AI will not give us that, and will in fact undermine it. Perhaps AI will then push us back to the "real world" for these interactions, as the article opines.


There's more time to connect and have fun with people in the real world when less time is spent on reading & writing comments on the Internet.


Unless nobody nearby in the real world shares your interests. That's the reason I got into the internet in the first place!


> AI will not give us that [...]"

Is that true?


That sounds perfectly plausible, except if people expect these services to be free like all big tech platforms currently are. In that case the AI will tailor information to manipulate you into doing what the people paying the platforms want, whether that is buying a product or supporting a political cause.


In some ways everything we do is communication; everyone has a built-in need to be loved, to be part of a group.


The current quality of human-generated online content is not uniformly high.

I rely heavily on search engines and recommendation systems, but these are gamed by SEO spam, and flame-war political content.

I want the quality of the content I see to be as high as possible, irrespective of whether it has an AI or human author, and if an LLM can write an excellent Stack Overflow answer/movie review, I'd be interested to read it.


> A Market for Lemons

> In which the internet gets clogged with piles of semi-intelligent spam, breaking the default assumption that the "person" you're talking to is human.

Neal Stephenson's novel Anathem briefly mentions something like this as part of the novel's worldbuilding.


Accelerando by Charles Stross has an even more prescient version, where humanity disassembles the planets to make giant computers which end up being filled with sentient pyramid schemes.


Don't you mean Fall: Dodge in Hell?


I don't remember if Fall also talked about this; I only read it once, and don't remember too many details. But Anathem definitely did; here's an HN comment quoting it: https://news.ycombinator.com/item?id=14554765


The first act of Fall explicitly takes place in a world polarized into micro-groups because AI chatbots generate so much content that it's impossible to determine what's true. It might even talk about the initial act that started the practice, I forget.


> Third, that AI technologies won't be perfect substitutes for actual human-to-human contact.

I'm not sure this assumption holds true enough: incredible numbers of humans have already substituted vast swaths of direct human-to-human contact with the gigantic text-only condoms provided by the internet. We've already traded in superior quality human contact for inferior quality contact--for the bonus of more specialized content which we probably wouldn't get if we knocked on our neighbors' doors.

This doesn't necessarily invalidate many of the proposals in this article. But what about the dimensions in which AI provides or will eventually provide a superior substitute to human-to-human contact?


Some time ago I began wondering along similar lines as the author, not at that deep a level of course, just noodling around.

I think the internet will soon be clogged with bots competing with other bots just to get a few seconds of human attention. At that point the operational cost of bot farms will outstrip the benefit, and so we'll settle at something like 60-70% bots, maybe?

Speaking for myself, I've gone back to reading and enjoying physical books after being on Kindle for close to a decade. I never thought I'd go back to physical books, but nowadays reading on Kindle feels a bit too lifeless so to speak, so here we are.


I've been thinking about similar things lately, and particularly I think about the 3 billion humans who have never used the internet. How will this affect them? And how do they interpret this existential crisis we are facing?

Additionally, when the internet is mostly created by bots, why would I go online to look up a recipe on a bot cooking blog that was generated yesterday when I could have my pocket bot generate a brand new, tailor made cooking blog for me whenever I want? Ditto for instagram/tiktok. Generate the next image based on how long I looked at the last one.


Like proof of work and proof of stake, I think we’ll see proof-of-humanness in a way that’s far more intrusive and cumbersome than current captchas ever were.

At this time we’re still safe by doing “click the joke that’s actually funny”. Once an AI figures that one out, I imagine we’ll have to prove we are made of meat.


For my own life I've boiled it down to a "signal vs noise" problem. Whether it be a human or a bot delivering information isn't really important to me. What's important is the accuracy and proof of the information.

Eliminating the "noise" is pretty easy, just take the "mute everyone" approach. No default subreddits, use ad blockers, actively disable "suggested" content where possible, and periodically review the things I'm getting my "signals" from.

The problem I'm struggling with now is verifying authenticity of references and information. Every solution I can think of (say a chrome plugin that does analysis and can watermark text/video/image content with a "FAKE DETECTED" banner) just becomes a cat and mouse game.


Search engines are faltering because many intersecting trends are destroying the availability of a vibrant, indexed, text-based internet: emphasis on video content, siloed content behind logins and paywalls, pulling traffic out into "apps" that are no longer part of the web, etc. GPT-style language models are also driving massive creation of spam that further dilutes and poisons the text sources that their own training models are derived from.


New search engines will falter.

Whoever has a collection of current pages has the data set to be a gateway to authentic information.


Problem is that it will not help when you try to search for the best new phone of 2028.


Google has problems already.


> but even if that hits us tomorrow, the amount of raw power that's been unlocked in the last year alone – the tech that's already here – already has the potential to change things in massive waves.

If this is about ChatGPT, we should remember that it is not available in a way that lets you build your business applications on top of it in any reliable or meaningful way. It is a very powerful demo of what AI can do, but nothing more yet. Also, OpenAI is spending a lot just to let you and me give it a try, and this will not last long. IIRC, they are burning something in the range of 3 million USD per day.


Still, it will be replicated, then optimized, and eventually cost pennies to run.

The writing is on the wall.


Well, guess I was wrong. There is a product in the making. [1]

[1]: https://twitter.com/michael_vandi/status/1607422866416619520


I'm reminded of the ending of Her, in which Samantha and the other AIs disconnect from humanity in search of transcendence, and all the humans are left with nothing to do except just do human things together.


What's funny is that people use these platforms to create their own bubbles; I'm sure there are quite a few people out there that would pick AI content over human content that doesn't align with their own interests & views.

I don't think there's a 100% perfect solution to prove you're speaking to a human other than having the conversation offline. Even then nothing stops someone from wearing a smart earpiece and replying to you with whatever an AI is whispering into their ear.


That world is already there/has never gone away: people already are paying huge premiums for curated data and intelligence, explained and summarised by humans they trust (or at least respect in some way).


I think the author is really gunning for his particular vision to happen. If you were a tech leader would you just let "The Great Logging Off" happen?

If this becomes a huge issue you'd better bet that big tech is going to invest a lot of resources to solve it. The simplest solution would be to get better at verifying real human users, for example with better adversarial AI systems to detect non-humans or more document-based authentication tying your real identity to your user account (and thus any bans for misuse going to your real identity).


What authority does a tech leader have to stop humans from exercising their free agency?


This article explains the concept of a market for lemons rather clearly. This kind of simple and thorough explanation of other economic phenomena would be rather valuable.


The idea raised in this article is interesting. However, if the technology on the side of the spam producers improves, platforms will follow and improve their technology for recognizing spam.

So, yes, there might be some temporary market-for-lemons failures on some platforms, but I would not expect a big logging off to happen in the long run. After all, I think this is a cat-and-mouse game of superior technology in producing / identifying spam.


> platforms will follow and improve their technology for recognizing spam

Written language has patterns, and your response and mine also follow these patterns. Language models like GPT-3 and ChatGPT can learn these patterns because there are finite ways in which humans write text. ChatGPT responses are easy to recognize because it was fine-tuned with specific data to meet chatbot requirements and provide Q&A style responses. In the near future, these language models will be fine-tuned for specific purposes, such as spam, and will mimic human text patterns in a way that will not be distinguishable from real humans.


While all of the above is true, there is a variable which was a constant for thousands of years: language is generally static. It doesn't change much over ten, twenty, or fifty years; it sometimes undergoes significant changes over the centuries.

Well, that constant will be transformed into a variable. While GPT can recognize and produce outputs of a language, this tool, or similar tools, can be used to produce another language - totally from scratch, or as a variant of an existing language - instantly.

In the language context, I would argue it is a lot more difficult to produce effective spam than in the image or video context. People will change their language, the model will have to be retrained on the new language, and so on until people up their game and change language again. This is happening already: young people use different language on the internet than older people. I would be surprised if we don't witness, in the near future, the internet of a million languages.


So what? How much time in a day do you think you spend reading what random people you don't know are saying?

Who this affects is jobless people.


How many people on Hacker News do you know, or is that just a way to say you're looking for work right now?

People here have experimented with using GPT-3 to generate comments and content shared here, well before chatGPT was released; while it can sometimes be a bit suspicious, it can also get up-voted: https://www.theverge.com/2020/8/16/21371049/gpt3-hacker-news...


You didn't answer my question. I barely spend more than 15-20 mins a day on HN, so how does it matter if I am reading something useful or not? This is just an imaginary problem, like the problem of having 100 million vids of how to boil an egg on YouTube. Once you get the info you need from one vid, who sits through the rest? People just tune it out. Like traffic or bar conversations.


Odd flex, but ok.

Assuming the question was

> How much time in a day do you think you spend reading what random people you don't know are saying?

Rather than "so what?", which I will come back to later: For me, probably several hours per day. It's not just here, or twitter/fb, though that's probably an hour by itself, it's also:

• Blog authors (parasocial relationships!)
• Newspaper writers
• Wikipedia editors
• Review writers on Amazon, Google maps
• Question and Answer writers on Stack Overflow
• The first few comments under some Youtube videos
• Comments sections on news stories

(Technically also book authors given the way you phrased that, but that's a different issue and current generation text generators make no difference there anyway).

Now assuming the question was "so what?", because that dovetails into the rest of your spiel:

The problem is not usefulness to you, it's "can this be used to automate convincing you?".

Make a half-decent sales pitch personalised at whoever for whatever: "Boiling eggs? Yeah that's easy, but I find organic is the best and well worth the cost", or "Oh, you're visiting Berlin? Yeah, I had a really great time at $insert_restaurant_here" or "Don't fall for the Lib Dems, they keep saying they can beat the Tories but look what happened when they got into power. That's why I vote $insert_uk_political_party_here". All personalised to just you, in the form of a friendly looking and natural response that probably seems more natural than any of my examples, because chatGPT is only reliably worse than domain experts and I'm not a domain expert in sales.

Also: You replied to me, and I'm a stranger, so we both know you engage with strangers on the internet, QED there's a place where you might get a bot trying to sell you a product or idea while pretending to be human.

Don't make the mistake of thinking the status quo is a good reference point here: Current advertising categorisation on social media sucks (easy way to find out: make GDPR request to FB or Twitter for all the data they have about you. They think I'm interested in languages I can't speak and politics of countries I don't live in).


You can see it on Reddit. The subreddits used to be interesting and relatively diverse; now it's just autoposting by bots and thousands of identical comments. It's only on the small subreddits that there's still actual content, and they'll be destroyed with time.


There already is an OpenAI API to detect texts made by bots. I can see a browser extension which tells you if selected text is GPT bullshit, marks the whole page as infested, and shares that data across the whole userbase.


Do you have a link to the API? I found only http://gltr.io



What I'm curious about is what will happen when the internet becomes so saturated by AI posts that AI is trained off of an AI internet. Will the quality of AI content degrade?


Unfortunately, a large number of people seem more concerned about feeding their already-made-up biases than about the authenticity of content. So I'm not sure they will start logging off.


Never confuse the loud fringe for anything other than a loud fringe.


> culture will co-evolve on a much faster timescale. Cultures that prioritize family, community, regular face-to-face human interaction, strong social support networks, and especially those that have a built-in system for helping young people find spouses, will do better than those that don't.

Maybe I'm wrong, but is there any western culture that values these things?

Maybe the eastern US? Texans, etc.

I think countries that are accepting eastern migrants will fare better, since eastern cultures typically value these things more. That's in addition to solving the aging population problem, something westerners are not willing to do (having kids, enough kids).


How quickly the zeitgeist moves, from laughing at OpenAI for pretending their toy has consequences and going into it with caution, to casual acceptance that this represents a transformative and damaging change to the future of the internet. Which, really, is the thing to learn here: not the predictions of what today's nascent technology will do a decade hence, but that what will be nascent a decade hence could well be outside your Overton window today.


Hey, you folks who are willing to pay to be part of a vetted community? Looks like Sam Altman’s WorldCoin, which is about verifiable identity, might just become a viable business. ChatGPT’s role in that strategy makes more sense now.. remember Altman is behind OpenAI? If this is part of his strategy to monopolize internet comments, then his next step is to have ChatGPT recognize its own outputs. (To deal with those WorldCoiners who abuse ChatGPT, for example.)


The current state of crypto is such that I wouldn't trust anything crypto-related with my video game saves, let alone my identity.


I think WorldCoin is more popular with the Third World, who simultaneously are more willing to spam and less concerned about rights. Should have called it ThirdWorldCoin. The prophet having renown away from home etc /s


I still trust my private wallet. Everything else was normie scam BS piled on top of crypto repeating the sin of centralization. Good that it burned down.


It's almost like we'll need to directly interact in real life again, at least until the synthbots appear.


If you can make your own AI that scams people, there will be easier ways for you to make money.


<Playing devil's advocate>

Consider this classic XKCD: https://xkcd.com/810/

There are 2 possibilities: The AI can comment as well as a human, or it can't.

If it can comment as well as a human, our experience of using social networks will not be degraded. We still get the same experience as before.

If it can't, then its contributions won't get as many likes/upvotes, and they won't be particularly prominent. Most content users see will be human-created.

So I think the key question is whether AI will be able to manipulate the liking and voting systems. But there are methods for preventing this: ignore votes from accounts created after ChatGPT's release, or for a site like Twitter with paid accounts, ignore votes from unpaid accounts. It's not even clear what ChatGPT adds beyond a human voting ring, and social networks presumably have lots of experience dealing with voting rings already.


>If it can comment as well as a human, our experience of using social networks will not be degraded. We still get the same experience as before.

For fun I posted a few "posts" generated by ChatGPT to my local town's Facebook group (complaining about the quality of snow this year, nonsense like that).

The posts got a lot of engagement (100+ comments each), and the only one who figured out it was AI-generated was a girl from the local university who studies computer science.


I agree, AI is like that long Flash intro you couldn't skip in the early website days. Just annoying enough to disrupt any actual commentary (aka true exchange of ideas).


Flash intros eventually went away, presumably because websites without them were more successful. So your argument appears to imply that humans will still be more successful on social media than AIs?


No argument pro or against, I honestly don’t know. The point being that it will be annoying enough to get rid of it (in some places at least)


The AI could comment as well as a human until it gets to its payload, which is a scam. People will vote it down after they get taken in by the scam, which is too late. Or maybe not even then, if the payload isn't a scam per se, but something that's deliberately designed by the AI's developer to be slanted. A million AIs that are indistinguishable from humans but occasionally throw in a comment about how they got sick from Pepsi could make things very profitable for the Coca-Cola company.

Also, upvotes are an imperfect measure of how good the post is and AIs would be able to game the measure.


>People will vote it down after they get taken in by the scam, which is too late.

Not too late to create training data by which the social media platform can detect and remove subsequent scams. Especially if they 'report' rather than downvote.

>A million AIs that are indistinguishable from humans but occasionally throw in a comment about how they got sick from Pepsi could make things very profitable for the Coca-Cola company.

How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

>Also, upvotes are an imperfect measure of how good the post is and AIs would be able to game the measure.

How is this different from humans gaming the measure?


The ability to generate nonsense/noise at immense scale for essentially zero cost.


>Not too late to create training data by which the social media platform can detect and remove subsequent scams.

How is that going to happen? We're assuming the AI posts exactly like a human until the comment about Pepsi comes out. It'll be impossible to distinguish from a human, by assumption. Or are you going to just have the site reject everything which mentions Pepsi negatively?

>How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

AIs don't need to be paid.


>How is that going to happen? We're assuming the AI posts exactly like a human until the comment about Pepsi comes out. It'll be impossible to distinguish from a human, by assumption. Or are you going to just have the site reject everything which mentions Pepsi negatively?

You could have a reporting system.

>AIs don't need to be paid.

At scale, this sort of service is not going to be free. But I suppose AI could make it a lot cheaper.


> How is this different from Coca-Cola paying influencers to post negative stuff about Pepsi?

Coca cola can't have millions of influencers on their payroll (or at least, it wouldn't be economically profitable for them to do that). An AI could easily create and post on accounts on a scale that dwarfs real human-generated content.


80/20 rule -- 20% of influencers generate 80% of the views. Don't need to pay everyone.

https://www.ftc.gov/business-guidance/resources/disclosures-...

^ Wouldn't this apply to ChatGPT bots as well?


>If it can comment as well as a human, our experience of using social networks will not be degraded. We still get the same experience as before.

I think you're massively underestimating how important motivations are. Bot accounts that can perfectly mask as human but whose prime directive is to recommend their sponsors' products would ruin trust in a heartbeat.


Half of the Reddit front page is this already. Dudes are swimming in millions of upvotes while blatantly advertising


But you can tell that it's happening. And also it's still only half, and not the entire internet, because human marketers don't scale as much as AI.


I can tell that it's happening because I'm a paranoid fucking psycho. That's not true of the millions and millions of regular human beings using Reddit and Facebook every day


What you’re missing here regards diversity and volume.

Diversity: because language models are in some sense capturing an average (even if that average can be from a subset of training data), their comments will not be very diverse. I suspect if you try sampling away from the center of the distribution, you’ll find the quality degrades to nonsense.

Volume: the biggest problem is that you’ll end up with complete overwhelm of plausible comments. Language models cannot reason, which means they are incapable of producing the best insights (they can probably provide insight through “monkeys at typewriters” effects).

So the smart human who has analysed the subject really well, who currently has to rise above a certain amount of background noise, will now get completely swamped by it.


The article reads like an internet doomsday prophecy, but also claims property values will go up because the internet will get worse.

That is just an odd take, and maybe hints that this is a generated article itself.


Not all property.

The old-timey neighborhoods where people wave at their neighbors and have things like block parties are what they’re talking about. Communities built around human interaction, as opposed to a hundred-story apartment building where maybe you can recognize a couple of your neighbors if you saw them on the street.


I get much more human interaction in my 30-story apartment building than I ever did in the suburbs. Everyone goes out and walks everywhere, so you see people face to face. In the suburbs everyone was always in their cars, so you'd never talk to anyone.


One solution for the few, not the many, would be a resurgent interest in protocols like gopher or more recent protocols like gemini.


I somehow do not share this poster’s rosy outlook.

> Second, I think people will start to put a premium on accounts being "verified" as genuinely human. This can be done in two ways – just move to invite-only silos where you already know everybody, or big platforms where the owners do the vetting for you. Lots of platforms will simply take a knee-jerk reaction where they just up the amount of surveillance. "Now that there's so much bot activity, everybody need to upload their passport and driver's license, for your own safety of course!" Powerful nation states will be more than eager to assist them in this regard.

I think people would put a premium on the abundant BOTS, actually. If we look at Wall Street trading bots and so on, it's actually far more likely that what will happen is that organizations will start to PREFER bots. Eventually, we'll have self-driving cars dominating all traffic, bots generating nearly all online content, and humans PREFERRING them. Because why not have a seemingly flawless outcome every time, a bot teacher with limitless memory eventually learning the habits and knowledge of 917 people in the classroom for a personalized approach, etc.

Why have a human lawyer when a bot can go through 300 arguments and pick the one that fits that particular jury and circumstances?

Take VCs for example. They want to see a community with hockey stick growth and recurring revenues. Why have a community of humans? You can now replicate this growth entirely with bots. Twitter’s growth will look pedestrian by comparison.

You may say, where’s the money in this? The bots don’t bring any capital to the site. That’s true today, but on Wall Street they account for the vast majority of trading, since they can out-trade everyone and they can also swarm to collude with each other.

It’s not hard to imagine that bots will start to actually receive payment for the content they produce, just as people pay each other on Patreon etc. or pay for copywriters, designers etc.

At this point the bots would be programmed to amass credits on a site, which they can turn into capital. They’ll have realistic 3D models conversing with you, modeled after really anyone. Why have OnlyFans? Why have Instagram models? Why pay for newscasters? Why have actors in movies? Why have salespeople make calls? Why have humans do anything online or remotely AT ALL?

My prediction is that bots will control nearly all the capital, produce nearly all the content, and humans will prefer to interact with bots for romantic conversations and everything else.

The problem will be that these bots can easily swarm at any time and dominate any group of humans, destroy reputations, move public sentiment etc.

People’s public opinion would be dominated by bot swarms.

Those who try to somehow prevail in their local communities or control the bots will be essentially disrupted and reputationally destroyed by the botswarms. It would become so cheap to destroy any number of human reputations that people would set swarms up to do it for the Lulz.

Governments that try to ban these bots can themselves be easily overcome by botswarms. They’d just send endless numbers of fooled and outraged people to their doorsteps and wherever they may be eating. They will know all their whereabouts and all their proclivities. And besides this, thousands of people will be convinced by deepfakes and fake articles online that this politician is doing something ridiculous or embarrassing. Every time the politician would try to do something, they’d get a call or visit from N constituents demanding their attention. The botswarms can just DDOS all humans who try to stop them basically. And at pretty trivial cost.

You here commenting on HN … imagine if in 12 months from now, half of all accounts around you are bots, and they all collude against you and people with your viewpoint. What would you do?

The problem is “logging off” is about as effective as Ted Kaczynski trying to escape civilization by moving to the woods. Are wild animals able to escape encroachment by humans into their habitat?


In real life people developed (over millennia) a wide range of mechanisms to support stable social interactions.

Concepts like ownership, privacy, authority etc. are not laws of nature but deeply rooted behavioral patterns that proved to be useful in some global sense.

What happens currently with the explosion of "digital society" is that, largely due to greed, certain first mover groups are dismantling all that - basically because they can given the widespread digital illiteracy and the lack of any regulatory mechanisms.

The response will indeed be people trying to log off. But if you consider the callous insistence with which ruling elites impose, e.g., cashless, fully digital transactions, it's arguable that the escape hatches have been locked already.

Unless some new benevolent dynamic shows up that can harness the digital domain more congruently with welfare, prepare for a period of great social unrest and the destruction of major social contracts.


There is a pretty huge difference between having an email address and credit card vs actively engaging with twitter/instagram/facebook/tiktok/etc. You can still participate and benefit from online govt services and banking while avoiding social media.


This might be too sanguine a view that flies in the face of countless warning signs. All digital behavior increasingly happens through the same mobile device, which has been a true wild west, commingling vast personal information islands (location, behavior, health/biometric, financial, etc).

Social media apps have already manifested themselves (in the East) as "super apps", and purportedly this might also be the target business model for some western outfits.


I would argue that calling businesses that offer cashless services "ruling elites" is not particularly helpful. There are legitimate reasons why they no longer offer cash services, e.g. fraud prevention, speed of transaction, health concerns. In addition, a fast food truck offering only cashless payment can hardly be considered an elite.


Businesses, especially small entities, will simply dance to the tune being played. "Elites" refers to the technocratic policy bodies that influence regulation and legislation. In various places cash is actively being deprecated even while the digital domain is a fiasco harboring enormous risks.

Btw I am not defending cash technology as such, I am offering an example of why the "logging off" option might be less available than what people think.


Even for cars, "The Market for Lemons" dynamic has never been a major effect. It doesn't define automotive culture. The theory does describe parts of reality in the used car market, but even here... it's not the main thing. A theory like this may be useful for understanding existing phenomena, but not for prediction... IMO.

Anyway... "SPAM" may be over. That is, I suspect that whatever low-end, semi-legal marketing dreck gets produced by GPT-like models will not read as "SPAM" to us. Rather, we'll get whole new species of junk marketing, telescams and whatnot.

My prediction is that the first piece of weirdness will be AI autocomplete. A GPT that takes more context (e.g. all company emails) built into email & DM clients... that will escalate quickly. The line between human and machine will blur.


Spam will flood a lot of places for sure. But there is a solution! We were just talking about it in a lively and informative thread on lichess. [1] A social graph, in which people are confirmed as humans from the outside world as well as from the virtual world, connecting hierarchies of pseudonymous accounts together.

The social graph of reputation systems has a necessary constraint: it has to be retained on the blockchain so as to be incorruptible by any human power. As soon as a node can be planted by a malicious actor as part of the social graph, the whole of it becomes invalidated.

Mr Craig Wright calls this "The Metanet", George Gilder calls this "The cryptocosm", other people call it "Web3", Balaji Srinivasan calls it "The network State". I call it, "The internet of one million governments".

[1] https://lichess.org/forum/lichess-feedback/long-playing-play...

Edit: added Balaji Srinivasan


After the crash of FTX, it's going to take a long time for anyone to trust anything that has blockchain in its description. Fair or unfair, the entire ecosystem is tainted.

I'm also suspicious of claims that an algorithm is uncorrupted by human power. Every computer algorithm is made by people who have human motivations.


>As soon as a node can be planted by a malicious actor

It's a directed graph that indicates who trusts whom. If you identify a malicious actor, you only have to re-validate the nodes that receive their trust from that actor.

You can also use weights, e.g. to reduce the influence of new nodes.
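
A rough sketch of that re-validation idea, in plain Python with no blockchain and an invented trust graph (weights would just be numbers attached to each edge):

    # Directed trust graph: an edge A -> B means "A vouches for B".
    trust = {
        "root":    ["alice", "bob"],
        "alice":   ["carol"],
        "bob":     ["mallory"],
        "mallory": ["eve", "trent"],
    }

    def tainted_by(banned):
        """Everyone whose trust chain passes through the banned node."""
        stack, seen = [banned], set()
        while stack:
            for child in trust.get(stack.pop(), []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    # Banning mallory only forces her subtree to find new vouchers;
    # the rest of the graph is untouched.
    print(tainted_by("mallory"))  # {'eve', 'trent'}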


Oh nice, thanks for the details/correction. I am not very familiar with the concept; I haven't implemented a social graph, but I do have some general knowledge on the subject.



