
I'm just curious: can there be intelligence without some externally imposed utility function? For us humans it usually consists of access to (in decreasing order of importance) oxygen, water, nutrition, shelter, closure, etc. If an AI were free of all these constraints, meaning it had no predetermined utility function to maximize, then what would it be supposed to do? It would have no motivation, no reason to learn or do anything. And if it had no goals it was trying to reach, how could we measure its performance, or rather its intelligence?



Interesting point. However, I'd argue there is a fine line between the conventional definition of a utility function and an organism's ability to survive in its environment. The latter is truly open-ended and not really externally imposed: different animals and people go through life "optimising" completely different functions (closure?). A quote from a Kozma paper comes to mind: "Intelligence is characterized by the flexible and creative pursuit of endogenously defined goals". I think that summarises the open-ended nature of the task quite well. As I understand it, goals, rewards, risks, and hazards are all in the game and shape the agent's decision-making, its "policy". But the way they are formalized for each individual and each situation is probably subject to constant redefinition itself.
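
To make "endogenously defined goals" a bit more concrete, here's a minimal Python sketch (every name in it is hypothetical, not from any real RL library) of a toy agent whose reward is computed against a goal it periodically rederives from its own history, rather than against a fixed, externally supplied utility function:

    import random

    class EndogenousAgent:
        """Toy agent on a 1-D line of states. Its goal is not a fixed
        external reward signal but a target it periodically redefines
        from its own visit history ("constant redefinition")."""

        def __init__(self, n_states=10, redefine_every=50):
            self.n_states = n_states
            self.redefine_every = redefine_every
            self.visits = [0] * n_states                  # experience so far
            self.goal_state = random.randrange(n_states)  # current endogenous goal
            self.steps = 0

        def reward(self, state):
            # Reward is computed against the agent's *own* goal,
            # not handed down by the environment.
            return 1.0 if state == self.goal_state else 0.0

        def redefine_goal(self):
            # Pick the least-visited state: the new goal emerges
            # from the agent's own history, not from outside.
            self.goal_state = self.visits.index(min(self.visits))

        def act(self, state):
            self.steps += 1
            self.visits[state] += 1
            if self.steps % self.redefine_every == 0:
                self.redefine_goal()
            # Trivial "policy": step toward the current goal.
            if state == self.goal_state:
                return 0
            return 1 if state < self.goal_state else -1

    agent = EndogenousAgent()
    state, total = 0, 0.0
    for _ in range(200):
        state = max(0, min(agent.n_states - 1, state + agent.act(state)))
        total += agent.reward(state)
    print(f"endogenous reward accumulated: {total}")

The only point of the sketch is that the "utility function" here is a moving target produced by the agent's own dynamics; from the outside there is nothing fixed to read off as "the" objective, which is roughly the open-endedness the Kozma quote gestures at.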



