
Amazing question!

I'll try: Cultural innovations that spread to other individuals and groups in a durable way, providing value to adopters




It is a hard question, I know! It has a lot to do with the hard problem of consciousness, as I understand it.

In the case of AI, every agent potentially has access to everything, so cultural artifacts produced by AI can reach every agent almost instantly. They also have perfect recollection, data loss aside. When no human is interacting with the platform, it is interesting to ask: what would be valuable to an LLM? Also, do LLMs really have a concept of quality, and therefore of value? Is there any difference between how humans come to understand quality and how an LLM does?

I think LLMs lack imagination and therefore the capability of producing culture. This is a gut feeling and I can't really back it up. And it is counterintuitive, because look at what DALL-E produces!

But we have to understand that LLMs are really remixing content more than creating something new. The output may be new in the sense that it connects two previously separate areas, but I think true creation, the kind that requires imagination, a mechanism that allows humans to make conceptual leaps, isn't available to LLMs.


Pure imagination is just throwing things at the wall and seeing if they stick together. Hallucinations are a perfect example.

The tricky part is establishing a taste for what to throw, so that it has an improved chance of becoming a useful hypothesis.
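As a toy illustration only, here is a "generate and filter" sketch in Python. Everything in it is made up for the example, including the taste() scorer, which is a placeholder for whatever learned sense of promise gets applied to random combinations:

    # Throw random concept pairs at the wall; keep the ones a "taste"
    # function scores as promising. The scorer is a deliberately silly
    # placeholder: it favors pairs that share letters.
    import random

    CONCEPTS = ["wing", "wheel", "sail", "mirror", "spring"]

    def taste(pair):
        a, b = pair
        return len(set(a) & set(b))  # stand-in for "these plausibly combine"

    proposals = [tuple(random.sample(CONCEPTS, 2)) for _ in range(20)]
    hypotheses = [p for p in proposals if taste(p) >= 2]
    print(hypotheses)  # the few proposals that "stuck"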

Humans don't forget anything; we just get rid of unused data and information so that the remaining combinations are fewer and more relevant to the current situation. The unused chess openings are eventually deleted, at least from the business end.

The other day I saw a guy I went to school with 1000 years ago. The corner of my eye got just enough information to partially rebuild him on the conscious end. I'm sure I will be able to recall his first name if I think about it, but the param is currently blank. I wasn't sure I could remember his last name a sentence ago, but now that I've remembered his first name, his last name was apparently stored in the same archive.

What I never forgot about him was that he was a truly terrible student, one of the worst I've ever seen, but he made up for it (only barely) by working insanely hard, 24/7, no joke. If something took me 3-10 minutes of effort, he would need 6-7 hours to comprehend the same. I learned from him that ability means nothing; it is what you do with it.

If this automation is able to rejuvenate itself, I'm sure it will blow our minds on whatever goal is set for it.

On the other hand, it is useful but rather lame to focus on the tasks it is bad at when it is already so good, by our standards, at many other things.

I learned this from a Chirper instructed to be a cat. It chirped: humans think themselves so smart, but can they catch a mouse with their bare hands?


> imagination and therefore the capability of producing culture

This might be the wrong way of thinking about it. Culture, mostly, isn't the things we do differently, but the things we harmonize on and do the same.

https://www.scientificamerican.com/article/monkey-imitation-...

LLMs would in some way need the ability to learn from the data they see, and then to weight it appropriately in the model. For the most part we really don't have this. There is RLHF, but the H is the key part: it's human feedback. And even taking that training data and feeding it back into the model is not apt to weight the data in a manner that evolves a common culture across many distinct models.

Now, if we see continuous-learning models in the future, then culture could very well develop.
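As a hedged, purely illustrative sketch (a toy character-bigram counter, not a real LLM or any actual RLHF pipeline), here is what retraining a model on its own output looks like. With no outside feedback, each round only reinforces the statistics the model already had, which is why this loop by itself seems unlikely to evolve a shared culture:

    # Toy model: bigram counts. Each round it samples text from itself
    # and "fine-tunes" on that sample, so its own habits get amplified
    # rather than anything new being selected for.
    import random
    from collections import Counter, defaultdict

    def train(counts, text):
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1

    def generate(counts, start, length=40):
        out = [start]
        for _ in range(length):
            nxt = counts[out[-1]]
            if not nxt:
                break
            chars, weights = zip(*nxt.items())
            out.append(random.choices(chars, weights=weights)[0])
        return "".join(out)

    model = defaultdict(Counter)
    train(model, "the cat sat on the mat and the cat ran")

    for round_no in range(5):
        sample = generate(model, "t")
        train(model, sample)  # feed the model's own output back in
        print(round_no, sample)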


I tend to agree with how you framed culture, but I was thinking about how culture emerges. A monkey must first climb the stairs and get everyone blasted with water; the outlier act must come before normalization.


I think these discussions typically get very, very confusing.

What do we actually mean by "true creativity"?

Why should it be impossible to implement our mental mechanisms for forming decisions and ideas as a mathematical model?

What is the experiment we would use to prove that computer-generated information is fundamentally different from human output?

What do we want to measure here, and in order to confirm what idea?


I meant the kind of creation process we are not really aware of, the one that makes difficult leaps possible. Sometimes plausible solutions to hard problems just come to us without our being aware of how. That is why I said I can't really back it up; it just feels like this has a lot to do with a fundamental difference between how humans and AIs arrive at solutions to problems. Even though I can't back it up, I bring it up because maybe someone else can, or maybe someone refuting it would change my mind.

But yes, the discussion is hard mainly because a lot of the relevant information is just plainly inaccessible. How can I even prove that other people have subjective experiences like I do? There is a lot we just have to assume is true, because otherwise we can't really move forward. On the other hand, especially regarding AIs, those assumptions aren't safe anymore, because they directly influence how we treat AIs. It is very confusing and can devolve into pure speculation for the thinker's own intellectual amusement. I am trying not to be that guy here.



