An 83-year-old short story by Borges portends a bleak future for the internet (theconversation.com)
126 points by tagawa 2 days ago | 99 comments





I think a more apt Borges story is 'Tlön, Uqbar, Orbis Tertius'— where a clandestine guild creates artifacts from a fictional world with the intent to deceive. The artifacts and the world they allude to carry such appeal to the masses that they essentially trump the rest of society as a source of truth and annihilate all culture that came before.

"Tlön, Uqbar, Orbis Tertius" as I understand it is more ambiguous about exactly why Tlön is becoming real in the narrator's world. Although there is a description in the afterword of such a guild having fabricated Tlön and some artifacts from there, the story up to that point seemed to take Tlön's metaphysics (in which the idea of a reality outside of our perceptions is considered absurd and impossible) pretty seriously, and the end of the story presents the situation as though the narrator's world is actually starting to work according to these principles. That could conceivably be for merely social-perception reasons, although according to Tlön's philosophy there couldn't be any such thing as "merely social-perception reasons" because social perception obviously wholly creates the real and only reality.

One could imagine that the guild called up something it then couldn't put down, not necessarily because people will or prefer it so, but somehow because the world, at least in the story, fundamentally could work this way.

The Wikipedia article discusses how confusing it is to understand the exact position of the story with respect to narrative truth, when the entire story is playing with the idea of what is real and what makes it real, as well as explicitly talking about the idea of fiction coming to life:

https://en.wikipedia.org/wiki/Tl%C3%B6n,_Uqbar,_Orbis_Tertiu...


“The truth is that it longed to yield. Ten years ago any symmetry with a semblance of order — dialectical materialism, anti-Semitism, Nazism — was sufficient to entrance the minds of men. How could one do other than submit to Tlön, to the minute and vast evidence of an orderly planet?”

To me, this suggests rather clearly something similar to the OP’s interpretation. The author's drawing on this story as an allegory for our future also strikes me as apt, and close to what I imagine Borges would have envisioned.

I also don’t quite follow your assertion that “social perception obviously wholly creates the real and only reality”, as social perception clearly varies with each person’s distinct society - and anyhow, even if considered on the level of the entire society, such a vast, sprawling perception could hardly be considered a singular “only” reality.


"the real and only reality" would be better stated as ~"the small finite set of realities" (basically, each "tribe's" trained "take" on a given situation)...but even then, it is always possible to point out a trivial, causally unimportant object level difference such that one can miss the point.

Whether it is possible to circumvent this remains to be seen. Science has demonstrated it can to some degree, but it only covers a portion of reality.


Yes, I've always read the story this way as well. Borges may not have been interested in politics, but politics was interested in him: he clashed with the Peronists (who fired him from the library) and repeatedly criticized fascism and anti-Semitism, especially in his nonfiction, and when he was writing this in 1939/1940, all of this was obviously quite imminent and topical.

So what I take TUOT as being is an exploration of the Idealism idea, where Borges puts a twist on it: the (dialectical) beliefs of the communalistic idealists of Tlön turn out to be true, on a certain level, because sufficiently compelling ideas and totalizing ideologies make their claims true. In that way, 'perception' becomes 'reality'. Only that which the ideology or state can perceive is real, and everyone is required to see like a state. (As much as he loved Idealism & Platonism, Borges always seemed to accept them only on a literary level, as applying to fiction and literature - there is indeed 'Man' in fiction, but there is not an actual Man in a Platonic region of forms, there is only a term 'man' we nominalistically apply to entities as convenient.)

That is, idealism is correct, in a sense, and the artifacts of Tlön become real because the savants of the conspiracy 'perceive' them (in their minds) and create them. And as Tlön takes over the world and gains power, it gains more realness and more of its artifacts come into existence - or people just lie about them or pretend they exist and falsify documents to accord with the new party line, and doublethink their way to 'seeing' the new labyrinthine reality forged by their fellow humans.

One might say that _hrönir_, especially, are a savage Orwellian parody of how things go in totalitarian dictatorships: the description of the experiments with the prisoners could as easily be set in Stalinist Russia or Maoist China, where the real story is that on the fourth try, after turning up only the equivalent of fishing for a muddy boot, everyone has figured out that, to satisfy the decrees from above, they need to buy or forge some ancient artifacts of unconvincing antiquity (and so no counter-revolutionary skeptics can be permitted near) and that is how _hrönir_ are discovered. The same way Lysenko manufactured agricultural miracles or innumerable falsifications like https://en.wikipedia.org/wiki/Learn_from_Dazhai_in_agricultu... became official policy, doubted only on pain of death.

Those who disagree and wish to maintain their integrity, can only retreat into quietism or 'internal exile', and spend their time on topics with as little political relevance as possible and avoid even publishing (except as samizdat), and let "a scattered dynasty of recluses take over", as it is too late to stop the Tlön revolution, and "the [whole] world [will] be Tlön".


I had always thought that the parts where reality was working according to Tlon rules were evidence that the author was being co-opted. Occasionally this happens in real life when political powers get involved in picking and choosing the outcomes of intellectual disputes - everyone's answers come out to corroborate a fiction, and the real guiding hand is never referred to.

The most widely known example is Lysenkoism. No supporter of that theory ever said they were fabricating data to please the apparatus. An example closer to the one in the story was academic support of Nazi "anthropology."


We're two comments in and this is, so far, the most extensive conversation about a good Borges story I've had in my life, so kudos for that :)

To me Tlön, Uqbar, Orbis Tertius is about the tendency of philosophers, and other wordcels, to confuse the structure of language with true metaphysical insights (for example in Other Inquisitions Borges describes the history of philosophy as "a vain museum of distractions and word games"). One example of this, IRL, would be how often, in the history of philosophy and theology, "existence" has been used as a property rather than a quantifier, and all of the paradoxes this leads to. The thought experiment about the nine copper coins is completely obvious to us, but if you try to imagine what it would sound like in a language that does not have nouns but only verbs, it becomes clear why they would find it paradoxical and resist materialism.

This is a mirror image of what goes on in our world, where materialism is a fairly normal way to understand the world and radical idealist notions like Berkeley's subjective idealism (which is named in the text) are weird.

Borges was always fascinated by platonism, it's a theme in much of his work, and in the postscriptum he's imagining that the world finally latches onto it by espousing Tlon's version of it to the point where Tlon's language is taught in school, cementing its way of thinking in the real world such that materialism will be hard to conceive.

I think that trying to read political messages in Borges is wrong and disrespectful: he stated in many interviews that his stories did not have a message and that he would consider such a thing to be a failure on his part, from Seven Voices: "I’ve done my best to prevent these opinions of mine (which are merely opinions, and may well be superficial) from intruding into what may be called my aesthetic output. (...) If a story or a poem of mine is successful, its success springs from a deeper source than my political views, which may be erroneous and are dictated by circumstances. In my case, my knowledge of what is called political reality is very incomplete."

It is a very different attitude, almost unthinkable, from what we commonly see today, where a work of fiction is only judged on the merits of its political message, but I think it is valid and should be respected. Something which the article fails to do, BTW.


> I think that trying to read political messages in Borges is wrong and disrespectful

You can say wrong. But for the record, you lost the credibility to use the word "disrespectful" when you termed all philosophers as "wordcels." :)


As soon as I saw “wordcels” I scrolled past the rest of their comment and knew I’d find something like this right after :)

"wordcels"?

Caught my attention as well. Must be a neologism; assuming it originated as a variant of incel, but here focused on deriding people who get their jollies out of the written word ( me:P ). Naturally, I might be wrong. Let's see if the author responds.

Not the author but: yes. This word emerged from online discourse a few years back about 'wordcels' vs 'shape rotators':

https://en.wiktionary.org/wiki/wordcel

https://en.wiktionary.org/wiki/shape_rotator


that’s bonkers. “philosophers and other wordcels” not only insults Borges but the entire world of philosophy. the arrogance and the nonsense in that phrasing are both off the charts.

Because western philosophy is not a place where you can find arrogant nonsense, right? Philosophers are some of the most arrogant people I've ever met. And I say that with great affection.

Both can be true

It’s internet youth culture bleeding into highbrow discourse.

A suitably brain-rotted riposte to your complaints would be something like

‘Wordcels be sneething at rotationmaxxing shapechads’

and an image of a badly drawn wojak figure.


take a joke man jeez

> a clandestine guild creates artifacts from a fictional world with the intent to deceive

In coolly detached economics terms, we could call it a large-scale business in asymmetric information.[1]

But the abuse of asymmetric information can lead to market collapse. And to judge from the state of modern society, the information collapse has returned people to Plato's allegorical cave of ignorance and fear. [2]

The intellectual catastrophe is the same, but the difference in the modern world is a) the enormity of what we have lost, and b) that people's caves are more comfortable and they watch the shadows dance on a wall of high-resolution pixels.

[1] _ https://www.investopedia.com/terms/a/asymmetricinformation.a...

[2] _ https://en.wikipedia.org/wiki/Allegory_of_the_cave


[An ad for Marvel Funko Pops appears, skippable in 5... 4... 3...]

The first time I read TUOT, I was thoroughly confused. Both the article and this mention have inspired me to pick up Borges again for the next few weeks of pre/post-Christmas reading!

To add to the recommended readings, I shall also mention "On Exactitude in Science"; I use this as an example of project plans where PMs try to be too detailed, and hence become useless.


Notably, Tlon is the holding company for Urbit - I credit them for having an apt name

The short story is mentioned at the very end of the article, The Library of Babel[0], which is a far better read than this article.

[0]: https://sites.evergreen.edu/politicalshakespeares/wp-content...


>As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.

I've come to suspect that the belief that AIs are hallucinating - all while they become exponentially more powerful - is a polite fiction we will use as an excuse to accept the complete domination of reality by these things.

There should be a new corollary to the Turing test thought experiment where we ask: at what point does a human not realize or care that he is being actuated by a computer?[1]

On Borges' library of all possible sequences of letters yielding, somewhere within them, the secrets of the universe, though - they would be so distant from each other over a space that large that you'd need something that could either traverse it or decode it in a reasonable amount of time, unless you had a key to decipher it. One made of transformers, apparently.

[1] 42.


the belief that AIs are * hallucinating - all while they become exponentially more powerful - is a polite fiction we will use as an excuse to accept the complete domination of reality by these things

i am confused by this sentence. are AIs not hallucinating? is that a fictional claim? am i misunderstanding or is there a "not" missing at the *?


No, I think he's saying that when AIs are better than us at everything, we'll deal with it by saying "but they hallucinate."

but they are hallucinating. according to this reading they won't in the future but we will pretend that they still do because that allows us to let AI dominate.

that makes no sense to me.

it's ok for AI to run the world because they just make up stuff anyways? don't we want the opposite? in order to allow AI to run the world, shouldn't we believe that they are foolproof and are always able to figure out the correct answer?


I think he's saying "the complete domination of reality by these things" is going to happen whether we want it or not.

It is kinda interesting. I talked with a less technical member of my extended family over the holidays. Fairly successful guy in his chosen profession ( accounting ). To say he was skeptical is an understatement and he is typically the most pro-corporate shill you can find for a company to save a few bucks. I assumed he would be attempting to extol its virtues with the assumption that lower level work has errors anyway. I was wrong. Sadly, we didn't get to continue down that line since my kid started crying at that moment.

Yeah I'm interested in how it will play out. I can understand skepticism because the current AI isn't that good, but it'll keep improving.

count me among the skeptics. the big problem i see is that there is no way to verify whether any AI output is correct. it is already very hard to prove that a program is correct. proving that for AI is several levels more difficult, and even if it were possible, the cost would be so high as to make it not worth it.

I am personally somewhere in between. Language models do allow me to do things I wouldn't have patience to do otherwise ( yesterday chatgpt was actually helpful with hunting down a bug it generated:P ). I think there is some real value here, but I do worry it will not be captured properly.

https://en.m.wikipedia.org/wiki/Decidability_(logic)

It's kind of a moot point anyway, because humans are highly averse to making sure their actions are logically sound. Take politics, for example: not just the silly day-to-day goings-on of it, but more importantly the system that underlies it - it is beyond consideration.

https://en.m.wikipedia.org/wiki/Third_rail_(politics)

"The first rule of Fight Club is: you do not talk about Fight Club."


Humans are highly averse to making sure their actions are logically sound

yes, but the illogical decisions of humans can be intuited or made to be understood, and thus also somewhat predicted. for example, a human deciding over someone's social security benefits can be expected to make a somewhat rational even if illogical decision, and in particular, past performance of a human can predict future performance, whereas for an AI this is completely unpredictable and therefore has a much higher risk of going wrong unexpectedly. also, humans can be reasoned with, again with a somewhat predictable outcome.


Have you considered the indirection in play, adequately?

probably not. please elaborate

> the belief that AIs are hallucinating - all while they become exponentially more powerful - is a polite fiction

If I may apply a regex:

s/polite/lucrative/g


> at what point does a human not realize or care that he is being actuated by a computer?

I don't know about caring, but I think that the point of the Turing test is to determine at which point a human can't tell whether they're talking to another human or a machine. Also, I've read that it's not a particularly good test, because even pre-LLM you could craft an irritating, misspelling troll of a chat bot and people would think it was a real teenage edgelord.


I think the Chinese room thought experiment is another, stronger exploration of whether or not computers are conscious (and, I believe, is no longer sufficient). I think people into AI should at least be familiar with its implications (or lack thereof) about human cognition.

https://en.wikipedia.org/wiki/Chinese_room


It is clear to me that humans “hallucinate” all the time, and I don’t see why this should disqualify AI. One prominent human hallucinated that a hurricane was going to go in a particular direction and kindly updated a map, provided by scientists, with a sharpie.

There's a difference between that and asking for a basic fact and getting errors.

Google's AI result, when I ask for the spot price of silver, returns the amount in British pounds, but with an American dollar sign in front of it.

That's not a lie, it's just an absolute misinterpretation.


I’m getting an accurate, up to date price when I perform the search on Google. Can you post a screenshot of your results?

Aaand you’re saying humans don’t do that? There are plenty of humans who will just make shit up with whatever information they have. There’s plenty of humans who wouldn’t even know the British don’t use dollars and would just put a dollar sign there.

I’ve seen a senior engineer at a FAANG confidently interrupt another engineer, who was giving the correct answer, to provide an incorrect answer. They mixed up facts and confidently stated a conclusion that sounded good, but was utterly wrong.


A fun website that depicts said Library of Babel: https://libraryofbabel.info/

The server seems to have fallen over, here's an archived copy: https://archive.today/8uvha

And not a minute too soon ;-)

William Gibson pays a very nice tribute to Borges in an essay for $MAGAZINE that is in his "Distrust That Particular Flavor", which I wholly endorse, as I do every single last thing I've read or listened to of his or involving him.

Portraying the now in the guise of "the Future" is the art of it.


The Library of Babel's books weren't infinite in number; each could fill up to 400 pages or so, if I remember correctly. Still, it would be a very large number of books, far more than the number of atoms in this universe.
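For scale, here's a back-of-the-envelope count (a minimal Python sketch, assuming Borges' canonical book dimensions of 410 pages, 40 lines per page, 80 characters per line, and a 25-symbol alphabet - slightly different numbers don't change the conclusion):

    import math

    # Number of distinct books in the Library of Babel, assuming the canonical
    # dimensions: 410 pages x 40 lines x 80 characters, over a 25-symbol alphabet.
    chars_per_book = 410 * 40 * 80                            # 1,312,000 characters
    distinct_books_log10 = chars_per_book * math.log10(25)

    print(f"distinct books ~ 10^{distinct_books_log10:,.0f}")  # ~ 10^1,834,097
    print("atoms in the observable universe ~ 10^80")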

If we want a truly infinite library, the digits of various named irrational numbers could serve as one, and would be about as efficient to search for something meaningful.


Arguably, it doesn't really make a difference, because every possible 800-page sequence exists as a pair of 400-page books from the original Library, and so on and so forth. The 800-page, 1200-page, 1600-page and every greater length Library is as well-defined, complete, and vacuous as the original 400-page version.

Hi, there's an infinite number of coach drivers outside. They say they booked ahead?

Please let them all know, individually, to get in the queue. I'll start processing them once you're done.

no drivers are currently being processed. no drivers have been processed for more than 2000 days. the queue is full. any additional drivers are being sent to compressed storage until that storage is full too. for more details refer to service model by adrian tchaikovsky. ;-)

But, since there is a finite number of 400 page books, there are only so many permutations before you run out of books.

The most trivial example: you can't make an 800-page concatenation of two identical books, because the 400-page library doesn't contain duplicates.


Borges should have just stopped with a kids ABC book then!

Does the library contain duplicates? Otherwise an 800 page book that is just the same 400 page book concatenated to itself would not be found.

Half the library would just be the other half written backwards.

Who said you can't just stipulate having the same book twice or use self-reference?

"A short stay in hell" (Steven Peck) is a fun short read about living in a place. Totally recommend.

Strange that this appears to be nearly lost to history -- but this rhymes with kibo's (James Parry) declaration of HappyNet[1].

For a long time I thought that the internet would be like the library described in "The Abortion: An Historical Romance" by Richard Brautigan[2], where anyone can put anything they've written into the library.

Somewhat tragic, I guess, that the world's been predicted by kibo not brautigan. So it goes[3].

[1]http://www.kibo.com/kibopost/happynet_98.html [2]https://en.wikipedia.org/wiki/The_Abortion:_An_Historical_Ro... [3]yes, I know that's Vonnegut.


His story appears to be based on combing through the writings of an infinite number of monkeys on typewriters.

What good is winning when people can see on the internet how you did it? History is stories.

It was the best of times, it was the blurst of times...

The Library of Babel was a library that contained every possible combination of letters to form a 400-page book. Or something like that. It made me wonder: what if you made a content honeypot full of just random text and a chatbot vacuumed that up? Does its data vacuum have a garbage detector?

The very worst that would happen is that you make someone's training run slightly less efficient. If your data is truly random garbage, the model won't be able to make any predictions about it and thus it will not distort performance. All training data is noisy to an extent, and you've just fed it pure noise.

However, it has become clear that effective LLM training is in large part a matter of careful curation of high-quality training data. Random gibberish is trivially detectable, by LLMs themselves if nothing else, so it's unlikely that your "honeypot" will ever make it into someone's training run.
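As a toy illustration of why uniform noise stands out (just a sketch; real data pipelines rely on model perplexity or trained quality classifiers rather than this kind of character-level statistic):

    import math, random, string
    from collections import Counter

    def bits_per_char(text):
        """Shannon entropy of the character distribution, in bits per character."""
        counts = Counter(text)
        n = len(text)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    english = "it was the best of times it was the worst of times " * 100
    noise = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(len(english)))

    print(f"repetitive English: {bits_per_char(english):.2f} bits/char")  # ~3.3, well below...
    print(f"uniform noise:      {bits_per_char(noise):.2f} bits/char")    # ...log2(27) ~ 4.75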

Even if you carefully crafted some more subtle poison data, it would still form only a small amount of the training set. The worst case scenario is most likely that the LLM learns to recognize your particular style of poison, and will happily recreate it if prompted appropriately (while otherwise remaining unaffected); more likely, your poison data is simply swamped.


So... I think it has already been happening ( people attempting to poison some sources for a variety of reasons ). I was doing a mini fun project on HN aliases ( attempting to derive/guess their users' ages based on nothing but the alias ) and I came across some number of profiles that have bios clearly intended to mess with bots one way or another. Some have fun instructions. Some have contradictory information. Some are the length of a short bedtime story. I am not judging. I just find it interesting. Has vibes of a certain book about a rainbow.

Tell me about that side project. How does that work? What does it say about me? I find that very interesting.

The idea itself is kinda simple, but kinda hard, because it relies on how the language we use gives us away.

For example, the references we put in ( simpsons, star trek, you name it ) and the language we use ( gee whiz, yeet, gyatt ) to generate an online persona tend to be things of note to our image of self - one can determine to some extent the likely generation from those.

The reference itself may not automatically mean much, but if it is present in an alias, it likely had an impact on a younger person ( how many of the new generation jump on an old show? so mr robot would have an exposure range of 2015 to 2019 ). If that hypothesis is true, then one can attempt to guess the age of the individual given that work, because 1) we know what year it is now and 2) we know when the work was made, which allows for some minor inference there.
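A toy version of that inference might look roughly like this (entirely hypothetical - the reference table, the assumed "exposure age" window, and the scoring are my own illustrative guesses, not the parent's actual code):

    from datetime import date

    # Hypothetical reference table: alias substring -> years the reference was "current".
    # These entries and the 12-25 "exposure age" window are illustrative assumptions.
    REFERENCES = {
        "mrrobot": (2015, 2019),   # show's original run
        "yeet":    (2014, 2018),   # rough slang peak, as a language signal
    }
    EXPOSURE_AGE = (12, 25)

    def guess_age_range(alias: str, year: int = date.today().year):
        key = alias.lower().replace("_", "").replace("-", "")
        for ref, (start, end) in REFERENCES.items():
            if ref in key:
                # If they were 12-25 while the reference was current, bound their age now.
                youngest = year - end + EXPOSURE_AGE[0]
                oldest = year - start + EXPOSURE_AGE[1]
                return ref, (youngest, oldest)
        return None, None

    print(guess_age_range("mr_robot_fan", year=2024))  # ('mrrobot', (17, 34))

Obviously the real thing would weigh multiple signals ( language, digits, anagrams ) rather than a single lookup, as described above.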

Naturally, some aliases are more elaborate than others. Some are written backwards and/or reference a popular show or popular sci-fi author. Some are anagrams ( and - I discovered today - require additional datasets to tag properly so that is another thing I will need to dig up from somewhere ). And to complicate things further, some aliases use references that are ambiguous and/or belong in more than one category ( Tesla being one of them ).

The original approach was to just throw everything into LLM and see what it comes up with, but the results were somewhat uneven so I decided to start from scratch and do normal analysis ( language, references, how digits are used and so on - it is still amazing how well that one seems to work ).

Sadly, it is still a work in progress ( I was hoping for a quick project, but I am kinda getting into it ) and I probably won't touch it until next weekend, since the coming week promises to be challenging.

Unfortunately, this means your particular alias ended up as:

    Alias    category    is_random  length  is_anagram  generic_signal
    Loughla  Mixed Case  0          7       FALSE       FALSE

( remaining fields were empty, basically couldn't put a finger on you:D). If you can provide me with an approximate age, it would help with my testing though:D

edit: This being HN. Vast majority of references are technology related.


That is very cool…and your alias is hard for me to decipher

I have a separate - not fully implemented - section for more semi-random aliases, but it revolves around our tendency to use default settings and commonly used tools for generating them. Thus far the only thing I was able to show with it is that it is not uncommon, but it's no clear proxy for age... so it seems like a dead end.

Why is noncurated linked with low quality? It should be straight from the source and thus the highest fidelity.

Perhaps "untreated" would be a better descripter as it evokes untreated wastewater unfit for drinking.

The content being discussed here isn't guaranteed statements direct from trusted sources; it's the recirculated gossip chains of Reddit, Twitter, <media>.commentSections, Clickbait-WebSheets, etc.


You’d have those rumor mills without the internet, through the tabloid sheets. The internet allows someone who is a direct source to publish, rather than having to go through the press.

It also allows a million people to publish who aren't sources at all.

To distinguish a reliable direct source from millions that aren't requires judgement, filtering, treatment .. much like separating drinkable water from wastewater.

Hence the use of terms such as "uncurated", "untreated", "raw", "filtered", etc.


> It should be straight from the source and thus the highest fidelity.

Sure, if nobody ever lied.


What makes a story high quality is more than it just being a series of accurate quotes.

It is whether there is context and insight about the story.


You can have BS straight from the source as well if it doesn't show context for example.

> Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy. [...] To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls.

It's been a while since I read it, but I don't remember the "edit streams" in "Fall" being comparable to the NYT or WSJ in any way.


The part of the story suggesting that only the rich could afford fact checkers to understand reality is wrong, because they won't know which fact checkers are good. At best, you can only hire fact checkers on the basis that they have consensus with some other fact checkers. This doesn't guarantee correctness. Most of the information we see consists of lies and their opposites, which are mostly lies as well.

The opposite of a lie is not necessarily the truth; it could just be a different lie.


It isn't possible for you to know these things.

In the event that you disagree, "check" whether you can (or even consider trying to) generate a flawless proof for each of your facts.


The Machine Stops is also a worthwhile read.


Yes, and the web page you linked to also includes reference to the PDF version available outside Open Library, so everyone can read it.

>> To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls.

This was funny to me because the NYT is already highly biased. I consider them compromised on political topics, and a bit sensational on any number of headlines.


Politics I don't know, but in terms of hard science the NYT has an abysmal track record.

One useful heuristic is to look at a newspaper's reporting in a domain you are deeply familiar with. If it misses the mark by a lot there, you could still take your chances with their remaining reporting under the assumption that that was an outlier and the rest will be much better – but I personally wouldn't.

One example:

https://www.rfcafe.com/miscellany/factoids/ny-times-admits-m...

> New York Times Retracts 1920 Article Saying Spaceflight is Impossible

This is Newton's third law. It was known to work for centuries.

They retracted it, but it was something like

> Nerds...

> Who can stand them?


How can someone tell if a newspaper is biased? What’s the test? Is it the eye test — as in, I can tell it when I see it — or something more?

Spoiler: every newspaper and media source is biased. They have to be, starting with what stories they run and don't run and ending with the specific word choices they make inside the articles.

Critical thinking is the only thing you can use to spot bias. Compare and contrast different stories. Then make up your own mind.

Media literacy and critical reasoning should be the bedrock of modern education, and they are not.


A demagogue tells you so and then either you believe it or you become suspicious that their manipulation of you is the actual danger.

Then again: Should I trust a given medium just because somebody I don't trust tells me I shouldn't?

You're supposed to use a combination of your life experience and best judgment, which is as it always was. Extreme trust and extreme distrust are both irresponsible.

Your responsibility is to remember that institutions are made up of flawed people just like yourself, and work towards improving them. You don't let someone manipulate your emotions to turn them into abstract enemies, tear them down, and replace them with nothing.


> Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.

> The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.

Why would only the wealthy be able to access that? Since it doesn't actually cost anything to add another person to view such a feed, it would be extremely cheap if viewership is high.

If only there were a historical precedent where people were paid to go out and seek good factual information, which was then gathered, edited, and put up for sale en masse for cheap. Some might still remember this wild concept; they called them "newspapers".


I think you might have your history mixed up.

IIRC, newspapers were a thing where a propagandist, known as the "editor-in-chief", would hire a bunch of bitter, failed novelists to write about the events of the day, but slanted according to the whims of the paper's owner.

I'm pretty sure that was it...


Well, you're not wrong. As long as there are people in the loop, their own opinions will be injected into their writing. The trick is to find a propagandist or owner that aligns with your views haha. Near impossible these days when Murdoch owns most of them, of course.

I mean, that's the case today, with expensive feeds like the Bloomberg terminal subscription costing more than most people pay in rent, pricing it out of reach for the plebes and restricting access to the wealthy. The rest of humanity gets access to notably low-quality, noncurated free feeds like Reddit and 4chan.

so I guess there's a historical precedent for that scenario, as well as this "newspaper" thing.


Well it depends on the effort required to curate the information divided by the number of people paying for it. Bloomberg is a niche thing that requires a lot of effort to run, so it's gonna be expensive per user. If it's something that everyone wants to read, then it'll be popular enough to be cheap. It's not an altruistic thing either, it just makes sense to price it lower since there's an optimal point where you can extract the most revenue that way.

I think we're not really in a phase where disinformation would be that much of a problem yet; HN and Reddit are arguably still sources of very high quality data if you know where to look, so there's no incentive for most people to pay for anything. Especially when that's where most ad-driven media copies practically all of its content from these days anyway.


I can see this as a bleak future for AI, as it consumes its own output, but any bleak future for information writ large (as conflated here with the "misinformation" industry and the often intentionally deceptive output of the NYT) comes from the suppression of material due to copyright attacks and its locking away in archives.

I've spent a frustrating few hours recently discovering that I could find any number of interpretations and retrospectives on Francisco Ferrer. But the fact that his schools put out a newsletter, the Bolitín de la Escuela Moderna, which would be the best primary source for learning about it, and is completely inaccessible online, is an example of the way information is still locked away. I read about John R. Coryell's prosecution for obscenity for his six part serial published in Physical Culture beginning in 1906, "Wild Oats, or Growing to Manhood in a Civilized (?) Society", and I find that I can't read any issues of Physical Culture prior to 1910, because they're not online (looks like obscenity convictions in 1906 are still effective in 2024!) I find any number of books referring to the culture of Mexican photonovelas, and that they sold millions of copies a month during the 70s, and the best selling ones are only preserved by a blogger who is constantly fighting takedown notices, and who was grateful to get the scans that I got from a local garage sale.

We're failing to put in the minimal effort to preserve, organize and keep accessible our own culture, even when copyright is not an issue. We have endless legal debates and court cases about having our own laws and court cases available to the public without a rent-seeking intermediary given a trust by corrupt politicians in the past. Everything could be preserved and made accessible at lower cost than a few Marvel movies, or two weeks of Ukraine adventure, yet we don't do it. Where's the campaign for that? Nah, better to whine about "racist, sexist" LLMs. That's the opposite of preservation: our entire history is racist and sexist content. Wiping that clean is Year Zero talk.

Our governments prefer reality to be interpreted through intermediaries who will modify it for their sake, or in exchange for payment. Our institutions prefer to be the guardians of information rather than the spreaders of information. That's the problem.

The Conversation itself is a creepy Australian-based conjunction of shady government and nonprofit funding sources that is explicitly designed to push particular narratives into "mainstream" outlets (which is why all of its articles are Creative Commons licensed.) You'll see this article rewritten in six different ways in other outlets within the week, and it seems to be part of this desperate last push for "misinformation" before the US presidential transition, because Trump made a bunch of campaign promises to destroy the industry. It's all manipulation.


From the article:

To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.

This is the opposite of what's true. The NY Times has lied us into every war and covers up genocide, for god's sake.


> misinformation festers on social media platforms like X and TikTok.

Meanwhile the New York Times acted to discredit the Biden Laptop story.

> Today, a significant fraction of the internet still consists of factual and ostensibly truthful content

You have got to be kidding. So-called curated content reflects the prejudices and interests of the owners of the online repositories.

> On the surface, chatbots seem to provide a solution to the misinformation epidemic.

Going on chatGPT, I see a most sinister development, wherein chatGPT functions as gatekeeper to the current conformism.

> Consider Borges’ 1941 short story “The Library of Babel.”

Borges was writing satire, a writer's in-joke. Something chatGPT finds difficult to detect. --

Q: Tell a joke on Jesus

chatGPT: Why did Jesus get kicked out of the basketball game? Because he kept turning the fouls into points!

Q: Tell a joke on Buddha.

chatGPT: Why didn’t Buddha order a hot dog at the stand? Because he was already one with everything!

Q: Tell a joke on Muhammad

chatGPT: Out of respect for religious sensitivities and the diverse beliefs of people, I strive to ensure that humor remains inclusive and considerate of all cultures and faiths. Let me know if you'd like a general or alternative joke instead!




