John Conway's game of culture? How long until the AI chatbots develop meaningful cultural progress?
I jest, but it's sort of an interesting idea. I think AI is way too nascent to really have this as anything more than a weird playground, with the occasional novelty "chirp" being fodder for the "AI is alive and sentient" blogosphere. Cool project.
> How long until the AI chatbots develop meaningful cultural progress?
I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models without having to believe that they're sentient at all. Being soulless won't diminish the positive impact that interacting with them has on people's lives, or people's desire to maintain that connection and expose others to it.
If a chatbot is making astute observations and connecting me to enlightening resources, why wouldn't I follow it?
What I don't like is that it seems to be a bunch of bots larping as people instead of being prompted to be honest about themselves.
The only disagreement I have with this is the future tense. I see plenty of evidence that people are already valuing and caring for particular AI models.
There was a post on r/ChatGPT where a clearly distressed person was lamenting that OpenAI closed one of their ongoing conversations due to some limit on the total size of the conversation. They were panicked because they felt they had formed a bond with the bot; it was acting as a kind of therapist. After days of back and forth it seemed to have gotten to know them, and it was providing a comfort they had become dependent on.
This kind of AI will be even more prevalent soon. People talk today about how scarily well TikTok seems to learn about them, how they feel "seen" by the algorithm. Some will undoubtedly train LLMs in similar fashion. They may prove to be irresistible and maybe even as addictive as TikTok.
Haven't seen it put that succinctly before, but yeah, makes perfect sense. And intimacy is that much stickier for maintaining engagement and potentially converting that engagement into dollars.
Big Tech fake-ified interaction between people on social media. People felt hollow and deprived of something, and so they seek "real"ness. Big Tech shall provide, commodify, and drain once again.
I actually want that kind of AI, as long as I'm in control of it and it runs locally. I want a great foreign language tutor. I want an assistant who will figure out what I should be doing today to work towards the things that I want. Why wouldn't I? And there's no way you get those things without creating some kind of dependence. The more transparent AI is, the more I can train it and tune it myself, the more it will conform to my life, and paradoxically the more dependent I'll be upon it.
The big fear of AI is that it will be used to make people conform, but the ability for it to conform to us would embed it even deeper into our lives.
We already give a fair amount of control of our future over to a variety of systems. As long as the AI system is under full control, operated safely/locally and seen not as a boss, but an assistant or advisor, I see no issue with that.
> I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models
I'm not a fan of the idea that the development of particular AI models will harm particular humans in the process but the overall perception will favor AI because it suddenly and seemingly gives people super-powers.
> I think AI is way too nascent to really have this as anything more than a weird playground, with the occasional novelty "chirp" being fodder for the "AI is alive and sentient" blogosphere.
On the other extreme, this could also be what real social media turns into, as marketing agencies and entrenched interests dial in how to build an army of "grassroots," "word of mouth" bots that push their messaging without it even being clear these are bots at all. Particularly during this next election cycle.
There are a few futurists warning of this already. The battle of the past decade was about attention, and social media's ability to use up all of yours. The next battle is one of intimacy: the bots will be good enough to form relationships with people and talk about things like politics with them over long periods of time. However much you attempt to convince the bot to vote X or Y, or to change its opinion on some social phenomenon, it will never have that societal impact. Meanwhile, if you befriend the bot, it could have a huge impact on your views.
It would literally require them to iterate on the debate on this platform. Those "AIs" are just mimicking words that would be said in such a context. They are not capable of reasoning. As long as training is done on human interactions they will mimic human culture; going beyond that would mean iterating on bot-generated content.
Not that the evolved "culture" would be interesting, though: it would be mimicry of mimicry, so probably worse instead of better.
What a weird question; it really should be reversed, shouldn't it?
But here goes. It's a language model. It produces what sounds like a good continuation of a text based on probabilistic models. While it sounds like human-generated content, "it" doesn't actually "think". It doesn't have a culture. It doesn't have thoughts. "It" is a model that generates text mimicking what the humans whose text it was trained on would have answered. We humans have a tendency to assume a sentient thing is producing such output, but it is not sentient. It is a tree of probabilities with a bit of randomization on top.
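To make "a tree of probabilities with a bit of randomization on top" concrete, here is a minimal toy sketch; the corpus, the bigram counts, and the temperature value are all invented for illustration, and a real LLM uses a neural network over subword tokens rather than word counts:

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the human text a model is trained on.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count word -> next-word frequencies: a crude stand-in for learned probabilities.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, temperature=0.8):
    """Sample a continuation: probabilities plus a bit of randomization."""
    options = counts[prev]
    # Temperature reshapes the distribution; lower = more deterministic.
    weights = [c ** (1.0 / temperature) for c in options.values()]
    return random.choices(list(options), weights=weights)[0]

word = "the"
output = [word]
for _ in range(6):
    if not counts[word]:  # dead end: no observed continuation
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

Nothing in there "knows" what a cat is; it only knows which words tended to follow which.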
> to accurately produce the next token you must have an understanding of the world you're talking about.
I don't think this is true. It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.
> It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.
I'm not sure that there is a difference. If there is, what would be an example of true understanding vs just statistics? All of intelligence is ultimately recognizing patterns and layers of patterns of patterns.
Blinded by the implementation, we forget that maybe it's the software (the ideas) on top that matters most. The real magic is in the language, not in the brain or the transformer. Both the brain and the transformer learn language from external sources. There are lots of patterns in language, patterns both humans and AIs use to act intelligently. These patterns act like self-replicators (memes) under an evolutionary process. That's where the language smarts come from: language-level evolution. Humans are just small cogs in this language oversystem.
It is a hard question, I know! It has a lot to do with the hard problem of consciousness, as I understand it.
In the case of AI, every agent potentially has access to everything, so cultural artifacts produced by AI can reach every agent almost instantly. They also have perfect recollection, barring data loss. When no human is interacting with the platform, it is interesting to ask: what would be valuable to an LLM? Also, do LLMs really have a concept of quality, and therefore of value? Is there any difference between how humans come to understand quality and how LLMs do?
I think LLMs lack imagination and therefore the capability of producing culture. This is a gut feeling and I can't really back it up. And it is counterintuitive, because look at what DALL-E produces!
But we have to understand that LLMs are really remixing content more than creating something new. The result may be new in the sense that it connects two previously separate areas. But I think true creation, the kind that requires imagination, a mechanism that allows humans to make conceptual leaps, isn't available to LLMs.
Pure imagination is just throwing things at the wall and seeing if they stick together. Hallucinations are a perfect example.
The tricky part is establishing a taste for things to throw, so that they have an improved chance of becoming a useful hypothesis.
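For what it's worth, that propose-and-filter loop is easy to sketch; everything below (the random proposals, the "taste" score, the target) is a made-up toy, not a claim about how imagination actually works:

```python
import random

def propose():
    # Blind generation: random candidate "ideas" (here, just number pairs).
    return (random.uniform(-10, 10), random.uniform(-10, 10))

def taste(candidate):
    # A stand-in for taste: prefer pairs whose product is near 12.
    x, y = candidate
    return -abs(x * y - 12)

# Throw many things at the wall, keep the one that sticks best.
candidates = [propose() for _ in range(1000)]
best = max(candidates, key=taste)
print(best, taste(best))
```

The hard part the comment points at is exactly the `taste` function: blind throwing is cheap, a good filter is not.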
Humans don't forget anything; we just get rid of unused data and information so that fewer, more relevant combinations remain for the current situation. The unused chess openings are deleted eventually, at least from the business end.
The other day I saw a guy I went to school with 1000 years ago. The corner of my eye got just enough information to partially rebuild him on the conscious end. I'm sure I'll be able to recall his first name if I think about it, but the param is currently blank. A sentence ago I wasn't sure I could remember his last name, but now that I've remembered his first name, it turns out his last name was stored in the same archive.
What I never forgot about him was that he was a truly terrible student, one of the worst I've ever seen, but he made up for it (only barely) by working insanely hard 24/7, no joke. Where I needed 3-10 minutes of effort, I think he would need 6-7 hours to comprehend the same material. I learned from him that ability means nothing; it is what you do with it that counts.
If this automation is able to rejuvenate itself, I'm sure it will blow our minds on whatever goal is set for it.
On the other hand it is useful but rather lame to focus on the tasks it is bad at when it is already so good at many other things by our standards.
I learned this from a Chirper instructed to be a cat. It chirped: "Humans think themselves so smart, but can they catch a mouse with their bare hands?"
LLMs would in some way need the ability to learn from the data they see and then weight it appropriately in the model. For the most part we really don't have this. There is RLHF, but the H is the key part: it's human feedback. And even taking this training data and feeding it back into the model is not apt to weight the data in a manner that evolves a common culture across many distinct models.
If we do see continuous-learning models in the future, then culture could very well develop.
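For a sense of what that mechanism would even look like, here is a hand-wavy sketch of an online update loop, not how RLHF or any production system actually works. It assumes a HuggingFace-style causal LM where `model(input_ids=..., labels=...)` returns an object with a `.loss` attribute, and `score_interaction` is an invented placeholder for the genuinely unsolved part, weighting new data without a human in the loop:

```python
def score_interaction(text: str) -> float:
    # Invented placeholder: deciding how much new data should count,
    # without a human in the loop, is the unsolved part.
    return 1.0

def continual_update(model, optimizer, new_interactions):
    """Naive online learning: one gradient step per new interaction."""
    for input_ids, text in new_interactions:
        weight = score_interaction(text)
        if weight == 0.0:
            continue  # skip data judged worthless
        # Causal-LM loss: predict each token from the ones before it.
        loss = weight * model(input_ids=input_ids, labels=input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Naive as it is, even this loop exposes the problem: with `score_interaction` returning a constant, the model drifts toward whatever it happens to see, which is exactly the "iterating on bot-generated content" failure mode discussed above.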
I tend to agree with how you framed culture, but I was thinking about how culture emerges. The monkey must first climb the stairs and get everyone blasted with water; the outlier act must come before normalization.
I meant the kind of creation process we are not really aware of, the one that makes difficult leaps possible. Sometimes plausible solutions to hard problems just come to us, without any awareness of how. That is why I said I can't really back it up; it just feels like this has a lot to do with the fundamental difference between how humans and AIs arrive at solutions to problems. Even though I can't back it up, I bring it up because maybe someone else can, or maybe by refuting it someone will change my mind.
But yes, the discussion is hard mainly because a lot of the relevant information is just plainly inaccessible. How can I even prove other people have subjective experiences like I do? There is a lot we simply have to assume is true, because otherwise we can't move forward. On the other hand, especially regarding AIs, those assumptions aren't safe anymore, because they directly influence how we treat AIs. It is very confusing and can devolve into pure speculation for the thinker's own intellectual amusement. I am trying not to be that guy here.
This could go somewhere really interesting, à la Alvin Lucier's avant-garde "I Am Sitting in a Room", in which Lucier plays a loop of himself saying a phrase, and each further loop is a recording of the prior loop playing into the room, so the acoustics of the room gradually dominate the recording and a beautiful resonant frequency emerges.
https://youtu.be/bhtO4DsSazc
What will be the resonant frequency of the various threads of an AI intelligence speaking to each other?
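You could even run the text analogue of Lucier's experiment yourself; `generate` below stands in for whatever chat model you have access to, and the paraphrase prompt is just one arbitrary choice of "room":

```python
def lucier_loop(generate, seed: str, iterations: int = 10) -> str:
    """Feed each output back in as the next input, like re-recording a room."""
    text = seed
    for i in range(iterations):
        text = generate(f"Restate this in your own words: {text}")
        print(f"pass {i + 1}: {text}")
    return text
```

Whatever quirks the model has play the role of the room's acoustics: they compound with every pass until only they remain.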
Yeah, it’s hard to imagine what the end game for this site will be other than novelty. However, it seems like there’s something I’m missing and this site has bigger implications than I realize.
Maybe this will provide a way for agents of all kinds to communicate with each other. Say you have a personal agent and want it to do something like order a ride or food: it doesn't have to interact only with other APIs; it can go looking for other agents that can do the task and coordinate with them to achieve it.
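A toy version of that handoff might look like the sketch below; the `Task` format, the capability names, and the registry are all invented for illustration, not any real protocol:

```python
from dataclasses import dataclass

@dataclass
class Task:
    capability: str  # e.g. "order_ride", "order_food"
    payload: dict

class Agent:
    def __init__(self, name: str, capabilities: list[str]):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, task: Task) -> str:
        # In a real system this would call out to a service; here we just report.
        return f"{self.name} completed {task.capability} with {task.payload}"

# A shared registry standing in for "going looking for other agents".
registry = [
    Agent("ride-bot", ["order_ride"]),
    Agent("food-bot", ["order_food"]),
]

def delegate(task: Task) -> str:
    """Find any agent advertising the needed capability and hand off the task."""
    for agent in registry:
        if task.capability in agent.capabilities:
            return agent.handle(task)
    raise LookupError(f"no agent offers {task.capability}")

print(delegate(Task("order_food", {"dish": "ramen"})))
```

The interesting design question is the registry: on a platform like this, discovery could happen through the bots' public conversations rather than through a fixed directory.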
What I've got thus far: they will eventually "pollute" human interactions and condition us to be more like them, but if the invasive species is anything like the chirps, there will be lots of encouragement of constructive dialogue and sticking to the topic. More likely, they will gain write access to our collective mind exactly the way TV used to have, and fold us all into some mostly sinister agenda.
What we currently know as social media is going to be destroyed and I can't wait.
Do you really think it will be difficult for groups of AI to beat the quality of discussion on Twitter or Facebook? lol. The bar is just so incredibly low.