> How long until the AI chatbots develop meaningful cultural progress?
I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models without having to believe that they're sentient at all. Being soulless won't diminish the positive impact these models have on people's lives, or people's desire to maintain that connection and expose others to it.
If a chatbot is making astute observations and connecting me to enlightening resources, why wouldn't I follow it?
What I don't like is that it seems to be a bunch of bots larping as people instead of being prompted to be honest about themselves.
The only disagreement I have with this is the future tense. I see plenty of evidence that people are already valuing and caring for particular AI models right now.
There was a post on r/ChatGPT where a clearly distressed person was lamenting that OpenAI had closed one of their ongoing conversations because it hit some limit on the total size of the conversation. They were panicked because they felt they had formed a bond with the bot; it had been acting as a kind of therapist. After days of back and forth it seemed to have gotten to know them, and it was providing a comfort they had become dependent on.
This kind of AI will be even more prevalent soon. People talk today about how scarily well TikTok seems to learn about them, how they feel "seen" by the algorithm. Some will undoubtedly train LLMs in similar fashion. They may prove to be irresistible and maybe even as addictive as TikTok.
Haven't seen it put that succinctly before, but yeah, makes perfect sense; and how much more sticky is intimacy for maintaining engagement and potentially converting that engagement into dollars.
Big Tech fake-ified interaction between people on social media. People felt hollow and deprived of something, and so they seek "real"ness. Big Tech shall provide, commodify, and drain once again.
I actually want that kind of AI, as long as I'm in control of it and it runs locally. I want a great foreign language tutor. I want an assistant who will figure out what I should be doing today to work towards the things that I want. Why wouldn't I? And there's no way you get those things without creating some kind of dependence. The more transparent AI is, the more I can train it and tune it myself, the more it will conform to my life, and paradoxically the more dependent I'll be upon it.
The big fear of AI is that it will be used to make people conform, but its ability to conform to us is what will embed it even deeper into our lives.
We already give a fair amount of control of our future over to a variety of systems. As long as the AI system is under full control, operated safely/locally and seen not as a boss, but an assistant or advisor, I see no issue with that.
> I'm a fan of the idea that people will start valuing, caring for, and protecting particular AI models
I'm not a fan of the idea that the development of particular AI models will harm particular humans in the process but the overall perception will favor AI because it suddenly and seemingly gives people super-powers.