I wanted to learn LangChain and LangGraph properly — not through dry tutorials, but by building something fun. So I built a text-based Pokémon RPG where an LLM narrates your adventure, generates wild encounters, and drives the story, while Python handles the actual game mechanics.
The full source code is a single main.py file. In this post, I’ll walk through the key concepts and point to exactly where they show up in the code.
I also have a YouTube video about this
The Big Idea: LLM for Creativity, Code for Logic
The most important design decision was the split of responsibilities. The LLM handles things it’s good at — narration, personality, generating Pokémon names and descriptions. Python handles things that need to be deterministic — damage formulas, catch rates, HP tracking. LangGraph ties them together into a state machine that is the game loop.
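For example, the battle math the LLM never touches can live in a few lines of deterministic Python. Here's a sketch of a simplified Gen-1-style damage formula — the constants are illustrative, not the actual numbers from main.py:

```python
def damage(level: int, attack: int, defense: int, power: int = 40) -> int:
    """Simplified, deterministic damage formula -- no LLM involved."""
    base = (2 * level / 5 + 2) * power * attack / defense
    return max(1, int(base / 50 + 2))  # always deal at least 1 damage

# Same inputs, same output, every time -- something an LLM can't promise
print(damage(10, 30, 20))  # level 10, 30 attack vs 20 defense
```

The LLM narrates *that* the tackle connected; this function decides *how hard*.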
1. Connecting to the LLM
LangChain abstracts LLM providers behind a unified interface. Whether you use OpenAI, Anthropic, or a self-hosted Ollama server, the API is the same. I’m running Qwen 3.5 on a remote Ollama instance:
```python
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://127.0.0.1:11434",
    num_predict=4096,  # ChatOllama's token-limit parameter (not max_tokens)
    temperature=0.7,
)
```
This single object gets reused everywhere — for narration, Pokémon generation, and Professor Oak’s dialogue. Swap the model or URL, and the entire game runs on a different LLM with zero code changes.
2. Prompt Templates: Giving the LLM a Role
Raw strings work, but templates are reusable. The narrator chain uses a SystemMessage to set the persona, a MessagesPlaceholder for conversation history, and variables for dynamic context:
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

narrator = (
    ChatPromptTemplate.from_messages([
        ("system", """You are the narrator of a Pokémon text adventure.
Player: {player_name} | Location: {location} | Badges: {badge_count}
Team: {team_str} ..."""),
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ])
    | llm
)
```
The | pipe is LCEL (LangChain Expression Language) — it composes the template and the LLM into a single callable chain. One .invoke() fills the template, sends it to the model, and returns the response.
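Under the hood, the pipe is just operator overloading: each LCEL component implements `__or__`, so `a | b` returns a new runnable that feeds `a`'s output into `b`. A toy version (not the real LangChain classes) makes the mechanics visible:

```python
class Runnable:
    """Toy stand-in for LangChain's runnable protocol."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # a | b -> a new Runnable that pipes a's output into b
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Fake "template" and "model" to show composition without an LLM
template = Runnable(lambda vars: f"Narrate: {vars['input']}")
fake_llm = Runnable(lambda prompt: prompt.upper())

chain = template | fake_llm
print(chain.invoke({"input": "a Pidgey appears"}))
```

The real classes add streaming, batching, and async, but the composition idea is exactly this.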
3. Structured Output: Pokémon as Data, Not Prose
This was the moment it clicked for me. Instead of parsing free text with regex, you define a Pydantic model and LangChain forces the LLM to return valid, typed data:
```python
from pydantic import BaseModel, Field

class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)
    attack: int = Field(ge=10, le=60)
    defense: int = Field(ge=10, le=50)

encounter_generator = llm.with_structured_output(WildPokemonSchema)
```
Now, when I call encounter_generator.invoke("Generate a wild Pokémon for Viridian Forest"), I get back an actual WildPokemonSchema object with guaranteed fields and value ranges — not a blob of text I have to hope is parseable.
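For contrast, here's roughly what the manual path looks like without structured output — hand-rolled JSON parsing plus range checks. This is a sketch (the `raw` string stands in for an LLM reply), and it's exactly the boilerplate `with_structured_output()` makes unnecessary:

```python
import json

def parse_wild_pokemon(raw: str) -> dict:
    """Manually validate what with_structured_output guarantees for free."""
    data = json.loads(raw)  # raises if the model returned prose, not JSON
    ranges = {"level": (2, 50), "hp": (20, 120),
              "attack": (10, 60), "defense": (10, 50)}
    for field, (lo, hi) in ranges.items():
        if not lo <= data[field] <= hi:
            raise ValueError(f"{field}={data[field]} outside [{lo}, {hi}]")
    return data

raw = ('{"name": "Pidgey", "type": "Flying", "level": 4,'
       ' "hp": 32, "attack": 18, "defense": 14}')
pokemon = parse_wild_pokemon(raw)
```

And even this version only works when the model happens to emit clean JSON — the structured-output path constrains generation so you never hit the failure case.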
4. LangGraph: The Game Is a State Machine
This is where things get interesting. A Pokémon game isn’t a linear prompt → response flow. It’s a loop with branches: explore → maybe encounter → fight or catch or run → check outcome → loop back. That’s a state machine, and that’s exactly what LangGraph gives you.
First, you define the state — everything the game needs to track:
```python
from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

class GameState(TypedDict):
    messages: Annotated[list, add_messages]
    player_name: str
    location: str
    pokemon_team: list[dict]
    wild_pokemon: dict | None
    badge_count: int
    game_phase: str
    turn_count: int
```
The Annotated[list, add_messages] part is a reducer — it tells LangGraph to append new messages to the list instead of replacing it. This is how conversation history accumulates automatically.
Then you write nodes — plain functions that receive the state and return partial updates:
```python
def explore_node(state: GameState) -> dict:
    # ... call the narrator LLM, return new messages
    return {"messages": [...], "game_phase": "exploration"}

def battle_node(state: GameState) -> dict:
    # ... handle fight/catch/run logic
    return {"messages": [...], "wild_pokemon": updated, "game_phase": "battle"}
```
You only return the keys that changed. LangGraph handles merging.
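Conceptually, the merge step works something like this — a pure-Python sketch of the behavior, not LangGraph's actual implementation:

```python
def merge_state(state: dict, update: dict, reducers: dict) -> dict:
    """Apply a node's partial update: reduce annotated keys, replace the rest."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged.get(key, []), value)
        else:
            merged[key] = value
    return merged

# add_messages-style append reducer for the "messages" key
reducers = {"messages": lambda old, new: old + new}

state = {"messages": ["You enter the forest."], "game_phase": "exploration"}
update = {"messages": ["A wild Pidgey appears!"], "game_phase": "battle"}
state = merge_state(state, update, reducers)
# messages accumulated; game_phase was replaced
```

Keys with a reducer accumulate; everything else is a plain overwrite — which is why a node can return just the two keys it touched.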
5. Conditional Edges: Branching Paths
The real power of the graph is dynamic routing. After exploring, should the player encounter a wild Pokémon or keep walking? After a battle turn, did they win, lose, or is the fight still going?
```python
def route_after_battle(state: GameState) -> str:
    phase = state.get("game_phase", "")
    if phase == "exploration":
        return "explore"      # won the fight
    if phase == "game_over":
        return "game_over"    # your Pokémon fainted
    return "battle"           # fight continues

graph.add_conditional_edges(
    "battle",
    route_after_battle,
    {"explore": "explore", "game_over": "game_over", "battle": "battle"},
)
```
The routing function reads the state and returns a string key. The mapping dict sends the graph to the right node. No if/else spaghetti — the graph structure is the game logic.
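Mechanically, a conditional edge is nothing more than a function call plus a dict lookup. A sketch (not LangGraph internals):

```python
def next_node(state: dict, route_fn, mapping: dict) -> str:
    """What a conditional edge boils down to: route, then look up."""
    key = route_fn(state)   # routing function reads the state
    return mapping[key]     # mapping dict picks the destination node

def route_after_battle(state):
    phase = state.get("game_phase", "")
    if phase == "exploration":
        return "explore"
    if phase == "game_over":
        return "game_over"
    return "battle"

mapping = {"explore": "explore", "game_over": "game_over", "battle": "battle"}
dest = next_node({"game_phase": "exploration"}, route_after_battle, mapping)
```

The payoff is that every possible transition is declared up front in the mapping, so the graph can be visualized and validated before a single prompt is sent.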
6. interrupt(): Waiting for the Player
The most game-changing feature (pun intended). interrupt() pauses the entire graph and surfaces a prompt to the player. When they respond, execution resumes exactly where it left off:
```python
# Inside battle_node:
action = interrupt(
    f"⚔️ BATTLE — Turn {state.get('turn_count', 0) + 1}\n"
    f"  {p['name']}: {p['hp']}/{p['max_hp']} HP\n"
    f"  Wild {w['name']}: {w['hp']}/{w['max_hp']} HP\n"
    f"  Your moves: [{moves_str}]\n"
    f"  Or: [catch] / [run]"
)
# 'action' now contains whatever the player typed
```
For this to work, you need a checkpointer — it saves the graph’s state between pauses:
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
game = graph.compile(checkpointer=checkpointer)

# Each session gets a thread_id (like a save file)
config = {"configurable": {"thread_id": f"game-{name}"}}
```
The game loop then checks for interrupts and resumes with the player’s input:
```python
from langgraph.types import Command

snapshot = game.get_state(config)
if snapshot.tasks and snapshot.tasks[0].interrupts:
    prompt = snapshot.tasks[0].interrupts[0].value
    print(prompt)  # show the battle prompt to the player
    player_input = input("> ")
    result = game.invoke(Command(resume=player_input), config)
```
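If the pause/resume dance feels unfamiliar, a plain Python generator gives the same flavor: `yield` plays the role of `interrupt()`, and `send()` plays the role of `Command(resume=...)`. This is a toy model of the mechanic, not how LangGraph implements it:

```python
def battle():
    """Toy battle loop: pauses at each yield, resumes with the player's input."""
    hp = 30
    while hp > 0:
        action = yield f"Wild Pidgey: {hp} HP. [tackle] / [run]"  # "interrupt"
        if action == "run":
            return
        hp -= 12  # deterministic damage, handled by code

game = battle()
prompt = next(game)           # run until the first "interrupt"
prompt = game.send("tackle")  # resume exactly where we paused
```

What LangGraph adds on top is that the paused state is checkpointed, so the "generator" can survive process restarts and be resumed from a save file.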
The Final Graph
Here’s the complete game flow:
┌──────────┐
│ START │
└────┬─────┘
│
┌────▼─────┐
│ intro │ ← Professor Oak
└────┬─────┘
│
┌────▼─────┐ ◄──────────────────────────┐
│ explore │ ← waits for player input │
└────┬─────┘ │
│ │
┌──────┴──────┐ │
▼ ▼ │
┌────────┐ ┌──────────────┐ │
│ heal │ │encounter_chk │ │
└───┬────┘ └──────┬───────┘ │
│ ┌───┴────┐ │
│ none encounter │
│ │ │ │
│ │ ┌──────▼──────┐ │
│ │ │ battle │◄──┐ │
│ │ │ (interrupt)│ │ ongoing │
│ │ └──────┬──────┘ │ │
│ │ ┌────┼────┐ │ │
│ │ win loss loop─┘ │
│ │ │ │ │
└──────────┴───┴────┼────────────────────────┘
│
┌──────▼──────┐
│ game_over │ → END
└─────────────┘
Key Takeaways
Split responsibilities wisely. LLMs are great at generating creative text and structured data. They’re terrible at math and consistent state tracking. Let each do what it’s good at.
Structured output is underrated. .with_structured_output() turned the LLM from a chatbot into a game asset generator. No parsing, no praying — just typed Python objects.
LangGraph thinks in graphs, not chains. Once I stopped thinking “prompt → response” and started thinking “state → node → conditional edge → next state,” the game architecture fell into place naturally.
interrupt() makes real interactivity possible. Without it, you’re stuck building hacky input loops around the LLM. With it, the graph itself manages the pause/resume cycle.
The full game is a single main.py — about 300 lines of Python. Clone it, point it at any Ollama-compatible server, and start catching Pokémon.
