> AI agents are rational agents. They make rational decisions
this is wrong, it's almost impossible to build a fully-rational (in the game-theoretic sense) agent for almost any real-life use case, except for some textbook toy problems.
There are many levels of intelligence/cognition for agents.
Here's an incomplete hierarchy off the top of my head (a real classification would deserve a whole blog post or a paper); a toy sketch of a few of these levels follows the list:
- Dumb/NPC/Zero-Intelligence
- Random/Probabilistic
- Rule-based / Reflexive
- Low-Rationality
- Boundedly-Rational
- Behavioral (i.e. replicating the recorded behavior of a real-life entity/phenomenon)
- Learning (e.g. using AI/ML or simple statistics)
- Adaptive (similar to learning agents, but may take different (better) actions in the same situation)
- [Fully-]Rational / Game-Theoretic
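To make the gap between some of these rungs concrete, here's a minimal Python sketch. Everything in it is hypothetical: the class names and the act(observation)/learn(...) interface are my own invention for illustration, not any standard agent API.

```python
import random

class ZeroIntelligenceAgent:
    """Dumb/NPC: ignores the observation entirely."""
    def act(self, observation):
        return "noop"

class RandomAgent:
    """Random/Probabilistic: samples an action from a fixed set."""
    def __init__(self, actions):
        self.actions = actions
    def act(self, observation):
        return random.choice(self.actions)

class ReflexAgent:
    """Rule-based/Reflexive: a condition -> action lookup; no state, no search."""
    def __init__(self, rules, default="noop"):
        self.rules = rules  # dict mapping observation -> action
        self.default = default
    def act(self, observation):
        return self.rules.get(observation, self.default)

class LearningAgent:
    """Learning: tracks per-(observation, action) reward averages and
    mostly picks the best action seen so far (a crude epsilon-greedy bandit)."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = {}   # (observation, action) -> running mean reward
        self.count = {}   # (observation, action) -> number of updates
    def act(self, observation):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        return max(self.actions,
                   key=lambda a: self.value.get((observation, a), 0.0))  # exploit
    def learn(self, observation, action, reward):
        key = (observation, action)
        old = self.value.get(key, 0.0)
        self.count[key] = self.count.get(key, 0) + 1
        self.value[key] = old + (reward - old) / self.count[key]
```

A fully-rational / game-theoretic agent would additionally need the complete payoff structure and the other players' strategies up front, which is exactly what real deployments almost never have.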
"A rational actor - a perfectly informed individual with infinite computing capacity who maximizes a fixed (non-evolving) exogenous utility function"[1] bears little relation to a human being.[2]
--
[1] Aaron, 1994
[2] Joshua M. Epstein & Robert L. Axtell, Growing Artificial Societies (1996)
AI agents are software systems that use AI to pursue goals and complete tasks on behalf of users. They show reasoning, planning, and memory and have a level of autonomy to make decisions, learn, and adapt (Google)
Agents “can be defined in several ways” including both “fully autonomous systems that operate independently over extended periods” and “prescriptive implementations that follow predefined workflows” (Anthropic)
Agents are “automated systems that can independently accomplish tasks on behalf of users” and “LLMs equipped with instructions and tools” (OpenAI)
Agents are “a type of system that can understand and respond to customer inquiries without human intervention” in different categories, ranging from “simple reflex agents” to “utility-based agents” (Salesforce)
A few days ago in TechCrunch: "No one knows what the hell an AI agent is"
When people use phrases like "rational decisions" it is generally a statement of intent. To interpret it in a manner which is so obviously incorrect seems rather pointless.
> AI agents are rational agents. They make rational decisions based on their perceptions and data to produce optimal performance and results. An AI agent senses its environment with physical or software interfaces (AWS)
My reply was to this definition, where AWS used the adjective "rational" with the noun "agents", so it's obvious they're not talking about human agents.
> When people use phrases like "rational decisions" it is generally a statement of intent.
I frequently hear on the news something like "the terrorists are rational". They're completely missing the point that an agent might be rational (i.e. optimizing) with respect to one variable, while they project it onto a completely different variable. I.e. for non-textbook toy problems, agents usually have lots of variables they care about, so when you talk about rationality you should specify that specific variable, and not generalize that because they're somewhat rational at one thing, they will be rational at all other things. A toy illustration is below.
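Here's that point as a tiny sketch (made-up numbers and variable names, nothing more):

```python
# Two actions, each scored on two variables the agent could care about.
# All numbers and names here are invented for illustration.
actions = {
    "x": {"a": 10, "b": 1},
    "y": {"a": 2,  "b": 9},
}

def rational_choice(variable):
    # Perfectly rational *with respect to one variable*: pick the
    # action that maximizes utility for that variable alone.
    return max(actions, key=lambda act: actions[act][variable])

chosen = rational_choice("a")   # the agent optimizes variable "a"
print(chosen)                   # -> "x"

# An observer who assumes the agent cares about variable "b" sees it
# collect utility 1 when 9 was available and calls it irrational,
# when it is simply optimizing a different variable.
print(actions[chosen]["b"], actions[rational_choice("b")]["b"])  # -> 1 9
```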
> it's obvious they're not talking about human agents.
I don't follow your meaning?
> you should specify that specific variable, and not generalize that because they're somewhat rational at one thing, they will be rational at all other things.
You are being overly pedantic, although you might not realize it. The meaning in that example is the differentiation between someone with a nuanced goal who can be reasoned with versus someone who has lost all reason and only desires to lash out at others. The latter is a distinct possibility when it comes to violent acts.