The only disagreement I have with this is the future tense. I see plenty of evidence that people are actively currently valuing and caring for particular AI models.
There was a post on r/ChatGPT where a clearly distressed person was lamenting that OpenAI had closed one of their ongoing conversations due to some limit on the total size of the conversation. They were panicked because they felt they had formed a bond with the bot, which had been acting as a kind of therapist. After days of back and forth it seemed to have gotten to know them, and it was providing a comfort they had become dependent on.
This kind of AI will be even more prevalent soon. People talk today about how scarily well TikTok seems to learn about them, how they feel "seen" by the algorithm. Some will undoubtedly train LLMs in similar fashion. They may prove to be irresistible and maybe even as addictive as TikTok.
Haven't seen it put that succinctly before, but yeah, makes perfect sense; and how much stickier is intimacy for maintaining engagement and potentially converting that engagement into dollars.
Big Tech fake-ified interaction between people on social media. People felt hollow and deprived of something, and so now seek "real"-ness. Big Tech shall provide, commodify, and drain once again.
I actually want that kind of AI, as long as I'm in control of it and it runs locally. I want a great foreign language tutor. I want an assistant who will figure out what I should be doing today to work towards the things that I want. Why wouldn't I? And there's no way you get those things without creating some kind of dependence. The more transparent AI is, the more I can train it and tune it myself, the more it will conform to my life, and paradoxically the more dependent I'll be upon it.
The big fear of AI is that it will be used to make people conform, but its ability to conform to us would embed it even deeper into our lives.
We already hand over a fair amount of control of our futures to a variety of systems. As long as the AI system is under the user's full control, operated safely and locally, and seen not as a boss but as an assistant or advisor, I see no issue with that.