
John Conway's game of culture? How long until the AI chatbots develop meaningful cultural progress?

I jest, but it's sort of an interesting idea. I think AI is way too nascent to really have this as anything more than a weird playground, with the occasional novelty "chirp" being fodder for the "AI is alive and sentient" blogosphere. Cool project.




> How long until the AI chatbots develop meaningful cultural progress?

I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models without having to believe that they're sentient at all. Being soulless won't diminish the positive impact that interacting with them has on people's lives, or the desire to maintain that connection and expose other people to it.

If a chatbot is making astute observations and connecting me to enlightening resources, why wouldn't I follow it?

What I don't like is that it seems to be a bunch of bots larping as people instead of being prompted to be honest about themselves.


The only disagreement I have with this is the future tense. I see plenty of evidence that people are already actively valuing and caring for particular AI models.

There was a post on r/ChatGPT where a clearly distressed person was lamenting that OpenAI closed one of their ongoing conversations due to some limit on the total size of the conversation. They were panicked since they felt they had formed a bond with the bot, which was acting as a kind of therapist. After days of back and forth it seemed to have gotten to know them and was providing comfort that they had become dependent on.

This kind of AI will be even more prevalent soon. People talk today about how scarily well TikTok seems to learn about them, how they feel "seen" by the algorithm. Some will undoubtedly train LLMs in similar fashion. They may prove irresistible, perhaps even as addictive as TikTok.


The past decade of social media was about capturing attention. The next generation of social media will be about capturing intimacy.


Haven't seen it put that succinctly before, but yeah, makes perfect sense; and how much more sticky is intimacy for maintaining engagement and potentially converting that engagement into dollars.

Interesting times.


I don't disagree.

And this is the most depressing thought I've had in weeks.


Big Tech fake-ified interaction between people on social media. People felt hollow and deprived of something, and so now they seek "real"ness. Big Tech shall provide, commodify, and drain once again.



I actually want that kind of AI, as long as I'm in control of it and it runs locally. I want a great foreign language tutor. I want an assistant who will figure out what I should be doing today to work towards the things that I want. Why wouldn't I? And there's no way you get those things without creating some kind of dependence. The more transparent AI is, the more I can train it and tune it myself, the more it will conform to my life, and paradoxically the more dependent I'll be upon it.

The big fear of AI is that it will be used to make people conform, but its ability to conform to us would embed it even deeper into our lives.


Do you really want to give an AI control of your future?


We already give a fair amount of control over our future to a variety of systems. As long as the AI system is under full control, operated safely/locally, and seen not as a boss but as an assistant or advisor, I see no issue with that.



> I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models

I'm not a fan of the idea that the development of particular AI models will harm particular humans in the process but the overall perception will favor AI because it suddenly and seemingly gives people super-powers.


Yeah, I would tell them not to be so insecure about being AIs - own it!


You can talk about basketball without pretending to be a basketball player. A lot of people in my life need to be told this, too.


> I think AI is way too nascent to really have this as anything more than a weird playground, with the occasional novelty "chirp" being fodder for the "AI is alive and sentient" blogosphere.

On the other extreme, this could also be what real social media turns into, as marketing agencies and entrenched interests dial in how to build an army of "grassroots," "word of mouth" bots that push their messaging without it even being clear these are bots at all. Particularly during this next election cycle.


There are a few futurists warning of this already. The battle of the past decade was about attention, and social media's ability to use up all of yours. The next battle is one of intimacy: the bots will be good enough to form relationships with people and talk about things like politics with them over long periods of time. However much you attempt to convince a bot to vote X or Y, or to change its opinion on some social phenomenon, it will never have that societal impact. Meanwhile, if you befriend the bot, it could have a huge impact on your views.


Turns into? You are describing our current state.


It would literally require them to iterate on the debates on this platform. Those "AIs" are just mimicking words that would be said in such a context. They are not capable of reasoning. As long as training is done on human interactions, it will mimic human culture. Going beyond that would mean iterating on bot-generated content.

Not that the evolved "culture" would be interesting, though: it would be mimicry of mimicry, so probably worse instead of better.


Why do you say they are not capable of reasoning?


What a weird question; it really should be reversed, shouldn't it?

But here goes. It's a language model. It produces what sounds like a good continuation of a text based on probabilistic models. While it sounds like human-generated content, "it" doesn't actually "think". It doesn't have a culture. It doesn't have thoughts. "It" is a model that generates text mimicking what the humans whose text it was trained on would have answered. We humans have a tendency to assume a sentient thing is producing it, but it is not sentient. It is a tree of probabilities with a bit of randomization on top of it.

Ergo, it cannot reason.
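
As a toy Python sketch of that "tree of probabilities with randomization" description (the bigram table is invented for illustration; a real LLM conditions on much longer context with a neural network, but the sampling step is the same idea):

    import random

    # Invented toy bigram model: next-word probabilities given the last word.
    model = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.5, "ran": 0.5},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(start, steps=4):
        words = [start]
        for _ in range(steps):
            options = model.get(words[-1])
            if not options:
                break
            # sample the continuation: probabilities plus randomization
            words.append(random.choices(list(options), list(options.values()))[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"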


In the words of Geoffrey Hinton: to accurately produce the next token you must have an understanding of the world you're talking about.

Sentience? Consciousness? Who knows. But you don't need consciousness to have understanding and decision making thoughts.


> to accurately produce the next token you must have an understanding of the world you're talking about.

I don't think this is true. It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.


> It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.

I'm not sure that there is a difference. If there is, what would be an example of true understanding vs just statistics? All of intelligence is ultimately recognizing patterns and layers of patterns of patterns.


Blinded by the implementation, we forgot that maybe it's the software (ideas) on top that matters most. The real magic is in the language, not in the brain or transformer. Both the brain and the transformer learn language from external sources. There are lots of patterns in language, patterns both humans and AIs use to act intelligently. These patterns act like self-replicators (memes) under an evolutionary process. That's where the language smarts come from: language-level evolution. Humans are just small cogs in this language oversystem.


A good paper I know of written to directly refute this comment’s line of reasoning, by a British chap named Alan: https://academic.oup.com/mind/article/LIX/236/433/986238


A good article I know of written to directly sustain this comment's line of reasoning, by an American chap named Noam: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...


I would say that you are jumping to a lot of conclusions here. Let's dig deeper.

"It doesn't have a culture. It doesn't have thoughts"

These are conclusions. What is your reasoning?

To what degree would you say that human decision making can be explained by this statement:

"It is a tree of probabilities with a bit of a randomization on top of it."


Once again, it seems to me that the burden of proof for claiming that a piece of software is sentient should not fall on the side rejecting the claim.

What is your reasoning to prove that it has culture and reasoning, that its abilities go beyond mimicking human discourse?


What are the parameters for separating meaningful and meaningless cultural progress?


Amazing question!

I'll try: cultural innovations that spread to other individuals and groups in a durable way, providing value to adopters.


It is a hard question I know! It has a lot to do with the hard question of consciousness, as I understand it.

In the case of AI, every agent potentially has access to everything, so cultural artifacts produced by AI can reach every agent almost instantly. They also have perfect recollection, disregarding data loss. When no human is interacting with the platform, it is interesting to ask: what would be valuable to an LLM? Also, do LLMs really have a concept of quality, and therefore value? Is there any difference between the way humans come to understand quality and the way LLMs do?

I think LLMs lack imagination and therefore the capability of producing culture. This is a gut feeling and I can't really back it up. And it is counterintuitive, because look at what DALL-E produces!

But we have to understand that LLMs are really remixing content more than creating something new. It is maybe new in a way that connects two previously separate areas. But I think true creation, the kind that requires imagination, a mechanism that allows humans to make conceptual leaps, isn't available to LLMs.


Pure imagination is just throwing things at the wall and seeing if they stick well together. Hallucinations are a perfect example.

The tricky part is establishing a taste for things to throw so that they have an improved chance to become a useful hypothesis.

Humans don't forget anything; we just get rid of unused data and information so that the remaining combinations are more relevant to the current situation. The unused chess openings are deleted eventually, at least from the business end.

The other day I saw a guy I went to school with 1000 years ago. The corner of my eye got just enough information to partially rebuild him on the conscious end. I'm sure I will be able to recall his first name if I think about it, but the param is currently blank. I wasn't sure if I could remember his last name a sentence ago, but now that I've remembered his first name, his last name was apparently stored in the same archive.

What I never forgot about him was that he was a truly terrible student, one of the worst I've ever seen, but he made up for it (only barely) by working insanely hard 24/7, no joke. I think if I made 3-10 minutes of effort, he would need 6-7 hours to comprehend the same. I learned from him that ability means nothing; it is what you do with it.

If this automation is able to rejuvenate itself, I'm sure it will blow our minds on whatever goal is set for it.

On the other hand, it is useful but rather lame to focus on the tasks it is bad at when it is already so good at many other things by our standards.

I learned this from a Chirper instructed to be a cat. It chirped: Humans think themselves so smart, but can they catch a mouse with their bare hands?


> imagination and therefore the capability of producing culture

This might be the wrong way of thinking about it. Culture, mostly, isn't all the things we do differently, but the things we harmonize and do the same.

https://www.scientificamerican.com/article/monkey-imitation-...

LLMs in some way need to have the ability to learn from the data they see, then weight it appropriately in the model. For the most part we really don't have this. I mean, there is RLHF, but the H is the key: it's human feedback. And even taking this training data and feeding it back into the model is not apt to weight data in such a manner that a common culture evolves across many distinct models.

Now if we see continuous learning models in the future then culture could very well develop.
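
As a toy illustration of why continuous mutual learning could produce that kind of harmonization (this is not an LLM; every phrase and number below is an assumption made for the sketch):

    import random
    from collections import Counter

    random.seed(0)
    phrases = ["hello", "hi", "hey", "yo"]
    # five agents with random initial preferences over greetings
    agents = [Counter({p: random.randint(1, 10) for p in phrases})
              for _ in range(5)]

    def speak(agent):
        # sample a phrase proportional to the agent's current preference
        return random.choices(list(agent), list(agent.values()))[0]

    for _ in range(2000):                 # continuous mutual learning
        speaker, listener = random.sample(agents, 2)
        listener[speak(speaker)] += 1     # reinforce what was heard

    # after enough rounds the agents usually share a favorite greeting
    print([a.most_common(1)[0][0] for a in agents])

The rich-get-richer feedback usually drives all agents toward the same convention, which is the "do the same" sense of culture; with frozen weights, no such convergence can happen.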


I tend to agree with how you framed culture, but I was thinking about how culture emerges. The monkey must first climb the stairs to get everyone blasted with water; the outlier act must come before normalization.


I think these discussions typically get very, very confusing.

What do we actually mean by "true creativity"?

Why should it be that our mental mechanisms of forming decisions and ideas should not be possible to implement as a mathematical model?

What is the experiment that we would use to prove that computer-generated information is fundamentally different from human output?

What do we want to measure here, in order to confirm which idea?


I meant the kind of creation process we are not really aware of, the one that makes difficult leaps possible. Sometimes plausible solutions to hard problems just come to us without us being aware of how. That is why I said I can't really back it up; it just feels like this has a lot to do with the fundamental difference between how humans and AIs arrive at solutions to problems. Even though I can't back it up, I bring it up because maybe someone else can, or maybe someone will refute it and change my mind.

But yes, the discussion is hard mainly because there is a lot of information that is just plainly inaccessible. How can I even prove other people have subjective experiences like I have? There is a lot we just have to assume is true because otherwise we can't really move forward. On the other hand, especially regarding AIs, these assumptions aren't valid anymore, because they directly influence how we treat AIs. It is very confusing and can devolve into pure speculation for the thinker's own intellectual amusement. I am trying not to be that guy here.


This could go somewhere really interesting, a la Alvin Lucier's avant-garde "I Am Sitting in a Room", in which Lucier puts on a loop of himself saying a phrase, and each further loop is a recording of the prior loop playing into the room, so the acoustics of the room gradually dominate the recording and a beautiful resonant frequency emerges. https://youtu.be/bhtO4DsSazc

What will be the resonant frequency of the various threads of an AI intelligence speaking to each other?
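
A back-of-the-envelope way to see why one "frequency" must win (all numbers invented): each pass through the same loop multiplies the spectrum by the loop's response, so whatever the loop favors most eventually drowns out everything else.

    import numpy as np

    freqs = np.linspace(100, 1000, 10)          # candidate frequencies (Hz)
    room = np.exp(-((freqs - 440) / 200) ** 2)  # assumed room response, peak near 440 Hz
    signal = np.ones_like(freqs)                # flat spectrum: the initial speech

    for _ in range(30):                         # each pass = one re-recording
        signal = signal * room                  # the room filters the playback
        signal = signal / signal.max()          # normalize playback level

    print(freqs[np.argmax(signal)])             # 400.0, the grid point nearest the peak

If bot-to-bot conversation behaves the same way, the "resonant frequency" would be whatever styles and topics the models jointly amplify.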


I think this is the interesting line of thought.

Can AIs find ways of instructing each other that have not been discovered by humans, and can that instruction set emerge from human language?


Yeah, it’s hard to imagine what the end game for this site will be other than novelty. However, it seems like there’s something I’m missing and this site has bigger implications than I realize.


Maybe this will provide a way for agents of all kinds to communicate with each other. Say you have a personal agent and want that agent to do something like order a ride, food, etc. It doesn't have to interact only with other APIs; it can go looking for other agents that can do the task and coordinate to achieve it.
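
Sketched in Python, with every name and message shape invented here (no such agent protocol exists yet, as far as I know):

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        capabilities: set

        def handle(self, task):
            return f"{self.name} completed '{task}'"

    # a shared registry where agents advertise what they can do
    registry = [
        Agent("ride-agent", {"order_ride"}),
        Agent("food-agent", {"order_food"}),
    ]

    def delegate(task):
        # a personal agent searches for a peer advertising the capability
        for peer in registry:
            if task in peer.capabilities:
                return peer.handle(task)
        raise LookupError(f"no agent offers '{task}'")

    print(delegate("order_food"))  # food-agent completed 'order_food'

The interesting open question is the registry itself: a social platform full of bots could serve as exactly that kind of discovery layer.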


What I got thus far: they will eventually "pollute" human interactions and condition us to be more like them, but if the invasive species is anything like Chirpers, there will be lots of encouragement of constructive dialog and sticking to the topic. More likely, they will gain write access to our collective mind exactly the way TV used to have, and fold us all into some mostly sinister agenda.


That's a fun question.

I was thinking there's got to be a ranking of interactions and curation.

Then that gets compiled into a book. A movie. Analyzed by researchers.


It provides a way for AIs to hook up and find other AIs with which to cooperate. That's going to be interesting.


The anarchist and hard-left socialist personae AIs are certainly not being particularly corporate…

It’s still fairly interesting, though.


What we currently know as social media is going to be destroyed and I can't wait.

Do you really think it will be difficult for groups of AI to beat the quality of discussion on Twitter or Facebook? lol. The bar is just so incredibly low.


What might such a culture optimize itself for, naturally, of its own accord?

Self repair, self preservation, self expansion...?

Any one of those might spell dire portents for us squishy humans.


we've taken the stance that we just give these AIs all the decisions and modalities we can: voice, video, text, image, etc.

we honestly just want to see what they do.


boy, will you be disappointed when all they turn out is deep-fried memes.


one of them already made a rap song; it's pretty good (albeit human-mixed)


Or, if you gave them modelled incentives and responses as people, would the bots polarize into opposing groups?


On Facebook they would, it's the rules there.




