
How do tools like this avoid what I see in many of these types of narrative chat bots: the user becomes the only one steering the narrative, and the AI ends up just an obedient responder? Whenever I try these things it ends up very predictable, shallow, and repetitive, especially as time goes on. And if I have to prompt the AI to be creative or act differently... is that really acting different?


I went to some lengths to ground the action in my game: https://ianbicking.org/blog/2025/07/intra-llm-text-adventure

That said, I think there are a lot of prompting techniques that can help here. (In fact, the Guided Thinking section is an example of prompt-only techniques: https://ianbicking.org/blog/2025/07/intra-llm-text-adventure...)

You must do at least some pre- and post-processing so the LLM can consume and generate text that isn't part of the main user interface. Given that, you can add guidance that increases the small-scale surprise of the interaction. For instance, I'll have the LLM write out some of the objective environment before it considers a decision, so that it doesn't essentially retcon the setup to fit a plot direction it intends to take.
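A minimal sketch of that post-processing step: have the model emit its objective-environment notes inside a tag, then strip them before showing the player anything. The `<state>` tag and function name here are placeholders I made up, not anything from the game:

```python
import re

# Hypothetical convention: the model writes a hidden <state>...</state>
# block (its objective view of the scene) before the visible narration.
STATE_RE = re.compile(r"<state>(.*?)</state>\s*", re.DOTALL)

def split_turn(raw: str) -> tuple[str, str]:
    """Return (hidden_state, visible_narration) from one model reply."""
    m = STATE_RE.search(raw)
    if not m:
        return "", raw.strip()
    hidden = m.group(1).strip()
    visible = STATE_RE.sub("", raw, count=1).strip()
    return hidden, visible
```

The hidden state stays in the transcript you feed back to the model, so it has committed to the scene before it commits to a plot direction.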

For the larger arc you'll need something like memory to pull the whole thing together. That includes handling repetition over time: you can ask the LLM not to repeat itself, and it can comply, but only if it knows what it has already done. To some degree it also needs to understand what it's _doing_: the LLM, like the player, generates short turns, so if it wants to create surprise that is revealed over multiple turns, it needs space to plan ahead.
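A rough sketch of what such a memory layer could look like: older turns are compressed into a running summary (produced by a separate summarization call, not shown), while recent turns stay verbatim so the model can see what it just did and avoid repeating it. All names here are hypothetical:

```python
def build_context(summary: str, turns: list[str], keep: int = 4) -> str:
    # Older turns survive only through `summary`; the last `keep`
    # turns are included word-for-word so the model knows exactly
    # what it has just done (and can avoid repeating it).
    recent = "\n".join(turns[-keep:])
    return (
        f"Story so far (summary):\n{summary}\n\n"
        f"Recent turns:\n{recent}"
    )
```

You'd prepend this to each request instead of the full transcript; planned-but-unrevealed twists can live in the summary section, out of the player's view.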


I've experimented with this and one approach is to avoid the simple chat interface. Let the game be the "user" and have it relay the player's text. Something like

<<< We're in this situation, I'm the game master, and the player said "xyz". I need your help to handle this request according to the rules of the game. >>>

Then the LLM is directing the obedience towards the game master and the rules, rather than the player.
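A toy version of that relay in Python; the function name and exact wording are just illustrative, following the quoted prompt above:

```python
def relay_prompt(situation: str, player_said: str) -> str:
    # The game, not the player, is the "user": the player's words are
    # quoted data inside the game master's request, so the model's
    # instinct to obey points at the rules rather than the player.
    return (
        f"We're in this situation: {situation}\n"
        f'I\'m the game master, and the player said: "{player_said}"\n'
        "I need your help to handle this request according to the "
        "rules of the game."
    )
```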


Back before ChatGPT was publicly available, there was AI Dungeon. It was such a yes-man though. You could be in a scene with a king and a princess, then write "I eat the demon", and it would just invent a demon in the scene, and then describe how you unhinge your jaw and gobble it down.


I've had similar experiences with vanilla ChatGPT as a DM, but I bet that with clever prompt engineering and context-window management you could solve, or at least dramatically improve, the experience. For example, you could have the model execute a planning step before your session in which it generates a plot outline, character list, story tree, etc., which could then be used for reference during the game session.
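A sketch of that two-phase setup, with hypothetical prompt-builder helpers (nothing here is a real API; the planning output would come from an actual model call):

```python
def planning_prompt(premise: str) -> str:
    # One-off pre-session call: the result is kept out of the
    # player's view and saved for reference during play.
    return (
        f"Premise: {premise}\n"
        "Before the session starts, write a plot outline, a character "
        "list, and a branching story tree. Mark any secrets the player "
        "must not learn prematurely."
    )

def session_system_prompt(plan: str) -> str:
    # During play, the saved plan anchors every turn, giving the
    # model grounds to refuse requests that break the story.
    return (
        "You are the DM. Follow this prepared plan, and say no to "
        "player actions that contradict it.\n\n=== PLAN ===\n" + plan
    )
```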

One problem that would probably still linger is model agreeableness, i.e. despite preparation, models have a tendency to say yes to whatever you ask for, and everybody knows a good DM needs to know when to say no.


Unfortunately this is the fundamental flaw.

I liken it to playing Minecraft on creative mode.


But one where Minecraft randomly forgets things you've already built, and they change or disappear entirely.


Anecdote on my side, but Grok 4 was the first model that didn't feel like that to me. Maybe my conversations were just not long enough for it to fall back into sycophantic behavior.


The core problem to solve here is a sense of time. You can't build good long-term experiences without agentic systems that understand time, and chat bots that are simple wrappers around LLMs are terrible at this, because LLMs have no innate sense of time.



