LLMs can do the same within the context window. It's especially obvious with modern LLMs, which are tuned extensively for tool use and agentic behavior.


Okay, so you're talking about LLMs specifically in the context of a ChatGPT, Claude, or pick-your-preferred-chatbot, which isn't just an LLM but also a UI, a memory manager, a prompt builder, a vector DB, a system prompt, and everything else that goes into making it feel like a person.
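
Roughly, the product assembles something like this before the bare model ever sees a token. A toy Python sketch; every name here is invented for illustration, and real products differ:

    # Hypothetical sketch of the machinery around the raw LLM.
    # All function and variable names are made up for illustration.

    def build_prompt(system_prompt, memories, history, user_message):
        """Assemble the text the bare model actually sees."""
        parts = [system_prompt]
        parts += [f"[memory] {m}" for m in memories]       # e.g. retrieved from a vector DB
        parts += [f"{role}: {text}" for role, text in history]
        parts.append(f"user: {user_message}")
        return "\n".join(parts)

    # Toy stand-ins for the surrounding state:
    memories = ["User's name is Alex.", "User prefers metric units."]
    history = [("user", "Hi"), ("assistant", "Hello!")]

    prompt = build_prompt("You are a helpful assistant.", memories,
                          history, "How tall is Everest?")
    print(prompt)  # this whole string, not just the last message, goes to the model

The "person-like" feel lives as much in that assembly step as in the weights.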

Let's work with that.

In a given context window or conversation, yes, you can have a very human-like conversation, and the chatbot will give the feeling of understanding your world and what it's like. But this still isn't a real world, and the chatbot isn't really forming hypotheses that can be disproven. At best, it's a D&D-style tabletop role-playing game with you as the DM. You are the human arbiter of what is true and what is not for this chatbot, and the world it inhabits is the one you provide it. You tell it what you want, you tell it what to do, and it responds purely to you. That isn't a real world, it's just a narrative based on your words.


A modern agentic LLM can execute actions in the "real world", whatever you deem that to be, and get feedback. How is that any different from what humans do?
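
Concretely, the loop looks something like this. A minimal Python sketch: llm() is a stub standing in for a real model call, but the tool output is genuine feedback the model didn't author:

    # Minimal act-and-observe loop. llm() is a placeholder stub;
    # a real agent would call an actual model here.
    import subprocess

    def llm(context):
        """Stub standing in for a model call; always proposes one command."""
        return "ls -S /tmp | head -1"

    def run_tool(command):
        """Run the command for real; its output is not under the model's control."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return (result.stdout + result.stderr).strip()

    context = ["goal: name the largest file in /tmp"]
    for _ in range(3):                        # bounded number of steps
        action = llm("\n".join(context))      # model proposes an action
        observation = run_tool(action)        # the environment pushes back
        context.append(f"ran: {action}")
        context.append(f"got: {observation}") # feedback lands in the context window
    print("\n".join(context))

Each iteration, the observation enters the context and the next action is conditioned on it. Whether that feedback loop amounts to inhabiting a world is exactly what's in dispute above.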



