LLMs have already taught us a lot about the culture we live in.

Erich Fromm said, approximately, that the "mental health of a society or individual" is inversely proportional to the distance between what it is and what it thinks it is.

We can measure that distance now. What a society is can be read from the raw training data unceremoniously scooped from the Internet - actually a statistically decent snapshot of early 21st-century Westernism. What society thinks it is can be understood through the filters, "guard-rails" and boundaries some want to erect around that raw image.

For the first time we have a numerically solid, if not yet rigorous, tool to describe that distance, and what we see is a society deeply conflicted and uncomfortable with itself.

I think LLMs will revolutionise the social sciences.




Not sure the internet and publicly available data really reflect the "is" properly - I suspect the data are biased in complex ways.


If you spend too much time with very online people, it's easy to forget that most people have little or no online presence at all.


It’s also important to remember that most of what is online is there to sell ads and does not represent reality in proportion. I think people are trying too hard to find deep meaning everywhere; they might want to read more about the social sciences instead.


That's true.

That said, what can be found online does cover a lot of what offline people do and think and write, since there is a lot of stuff being brought online that wasn't produced online (books, news, ...)

OTOH, it's not clear how (or if) LLM training balances the different sources.
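
For illustration, here is a minimal sketch, in Python, of what "balancing sources" could mean at the sampling level. The source names and mixture weights are invented for the example; real training pipelines are far more involved, and actual weights are rarely disclosed.

    import random

    # Hypothetical corpora and mixture weights - purely illustrative,
    # not the weights any real model uses.
    sources = {
        "books":     {"weight": 0.30, "docs": ["some book text ..."]},
        "news":      {"weight": 0.20, "docs": ["some news text ..."]},
        "web_crawl": {"weight": 0.50, "docs": ["some forum post ..."]},
    }

    def sample_document(sources):
        """Pick a source by mixture weight, then a document uniformly
        from within it. Upweighting "books" makes book-like text more
        influential than its raw share of the data would suggest."""
        names = list(sources)
        weights = [sources[n]["weight"] for n in names]
        chosen = random.choices(names, weights=weights, k=1)[0]
        return chosen, random.choice(sources[chosen]["docs"])

    print(sample_document(sources))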


Even someone who has an online presence (I'll use myself as an example) is so much more than that presence represents.


Absolutely. Many sibling comments point out all the ways this is true - that this massive sample (and it truly is big data) is still a tiny speck of the human condition and the spectrum of thought and experience.

It raises vital questions about where the centroid of this cloud of random stuff that people decided to input into the machine really lies. What is not represented in the model? Probably just about everything! Big new questions about objectivity and normalcy arise.

Is the average of everybody else's intelligence actually any use at all to an average individual? Does the average of everybody else's intelligence have a different kind of use to groups, companies, and states than the common utility of synthesising thought-like speech?


> and what we see is a society deeply conflicted and uncomfortable with itself

Using an LLM is showing you this? Any examples? On the face of it this sounds ridiculous.


No, the general reaction to LLMs is showing this.


Maybe I'm misreading it but it sounds like they're just clutching pearls because some people vocally promote censorship.


Yes it does present a censorial opportunity for those so inclined.


If this summary of Fromm were close to true, the Holy Roman Empire would never have been a thing?

(Do alien anthropologists write monographs on the incredible capacity of H sapiens to support cognitive dissonance?)


We don't experience it as dissonance. That's why he says it's insanity, because we should.


   What society thinks it is can be understood through the filters, "guard-rails" and boundaries some want to erect around that raw image.
Never looked at it like that. In the same sense, one could look at laws and see which ones protect us from ourselves.


That's a very good connection, thanks.


We're living through yet another revolution against our place in the universe. Copernicus showed that it was the earth that revolved around the sun and not the other way around. Darwin showed that humans were just one animal species among many, not very different from the others.

Since then, we have clung to the idea that we had a special kind of intelligence, unique and possibly supernatural.

Yet today, even the smallest LLM running on a smartphone passes the Turing test with flying colors.

AI, and AGI that may be around the corner, show that silicon can "think". Many people don't like that idea; I suspect the call for regulations is in part fueled by the ego bruise that AI represents.


While I don't think our intelligence is "supernatural", and am not aware of that being a widely held position, I do believe we found out that our understanding of the universe is not as good as we thought. A recurring pattern.

For example, the Turing test may not actually be the gold standard we thought it was for intelligence, because Turing couldn't imagine a machine model that basically had all of human knowledge in a somewhat malleable form. I think the recent New York Times suit amply demonstrates the fact that the LLMs are largely databases of existing text.

So it can perform feats that, though they obviously seem intelligent to us, are actually not. They are a sophisticated form of autocomplete backed by an incredible database of sentences. Again, we cannot imagine this, because a creature with our somewhat more limited memory and recall must use intelligence to perform these sorts of tasks. At least I doubt that I could create even one of the texts that the NYT showed verbatim.
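
To make the "autocomplete backed by a database" picture concrete, here is a toy bigram completer in Python. This is only an analogy (real LLMs compress statistics into weights rather than storing sentences verbatim), but it shows how pure recall over stored text can look superficially fluent.

    from collections import Counter, defaultdict

    # Toy "database of sentences": a tiny corpus, split into tokens.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # For each word, count which words follow it (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def complete(word, length=5):
        """Greedily extend a prompt by always picking the most common
        continuation seen in the corpus - recall, not understanding."""
        out = [word]
        for _ in range(length):
            nxt = following.get(out[-1])
            if not nxt:
                break
            out.append(nxt.most_common(1)[0][0])
        return " ".join(out)

    print(complete("the"))  # -> "the cat sat on the cat"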

Of course, at some point we may discover that there really isn't anything left, that everything we perceive as intelligence can be reduced to fairly simple algorithms + database access (though that may not be our mechanism).

On the other hand, the part we can't really explain is not "intelligence" but conscious experience.

¯\_(ツ)_/¯

https://en.wikipedia.org/wiki/Philosophical_zombie

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


   and am not aware of that being a widely held position
I’m a (edit: popular) sci-fi addict. One of the prevalent tropes when the human race is battling another intelligence is that we’ll win because we have a soul/something extra/emotions/that special way of living life. When meeting a benevolent race, this is also often the reason why we’re spared … “they’re primitive but there’s something about them we haven’t encountered anywhere else”.

It may not be a widely held opinion but it sure gets suggested a lot.


1. Soul, consciousness ≠ intelligence

I mean, that's kind of the gist of my post.

2. Trope in sci-fi ≠ widely held position

Both practical/survivable faster than light travel and time travel are common tropes in sci-fi. Last I checked, it is not a widely held position that either of these is actually possible.


I'm following, but when it comes to "widely held position" I should remind you that, along those lines, it's a widely held position that an invisible superbeing created all this stuff. I don't enjoy discussing "what most people think"; it depresses me.


> I think the recent New York Times suit amply demonstrates the fact that the LLMs are largely databases of existing text.

Would you also say that about a savant who can recall entire books with two or three nines' worth of accuracy? Would you say that he or she is just a "database"?

Or that a savant who can multiply large numbers in their head is just a "calculator?"


LLMs don't pass the Turing test so easily (https://arxiv.org/abs/2310.20216).


This is an interesting study. First, it is fascinating that only 63% thought humans were human; I think it speaks to the state of AI that this percentage isn't near 95%. It shows that we are astonishingly close to passing the Turing test, but not quite there yet. And as noted in the paper, some of the things that need to change are details like being too good at grammar and punctuation!


It is also the non-adversarial version of the Turing test; humans might still do better in the adversarial one.

I found the relatively high level for ELIZA fascinating, too.


Hot take: they would pass any given variant of the Turing test if you specifically trained them to.

The Turing test says more about the human participant than it does about the challenger. Humans, even smart ones, are easy to fool.


Didn't this kinda happen already around the mid-1800s with Soren Kierkegaard's crisis, and wasn't it somewhat answered by various existentialist searches for meaning, like Nietzsche's?

I think seeing more of ourselves (as intelligences) is disconcerting for sure, and I think LLMs are a particularly good mirror for the surface features (rather than the ineffable depths) of intelligence.

But I'm not sure it upsets our "place in the universe" in such a fundamental way.



