Building a Simple Chatbot with LangGraph: A Beginner’s Guide

Want to create an AI chatbot that feels like a real conversation partner, remembering what you said and responding thoughtfully? LangGraph, a powerful library from the LangChain team, makes this possible with its stateful, graph-based workflows. In this beginner-friendly guide, we’ll walk you through building a simple chatbot using LangGraph that processes user inputs, generates responses, and maintains conversation history. With a conversational tone, clear code examples, and practical steps, you’ll have your own chatbot up and running, even if you’re new to coding!


What is a Simple Chatbot in LangGraph?

A simple chatbot in LangGraph is an AI application that:

  • Takes user messages as input.
  • Generates responses using a language model (like those from OpenAI).
  • Stores conversation history to provide context-aware replies.
  • Uses a graph-based workflow to manage the conversation flow.

LangGraph’s nodes (tasks), edges (connections), and state (shared data) make it easy to structure the chatbot’s logic, ensuring it can handle back-and-forth interactions smoothly.
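
To see those three pieces in miniature before we build the real thing, here is a minimal sketch of a one-node graph (the state and node names here are illustrative, not part of LangGraph’s API):

from typing import TypedDict
from langgraph.graph import StateGraph, END

class HelloState(TypedDict):
    text: str

def shout(state: HelloState) -> dict:
    # A node is just a function that reads the state and returns an update
    return {"text": state["text"].upper()}

tiny = StateGraph(HelloState)   # the state schema
tiny.add_node("shout", shout)   # a node
tiny.set_entry_point("shout")   # edge from the start to our node
tiny.add_edge("shout", END)     # edge from our node to the end

print(tiny.compile().invoke({"text": "hello, langgraph"}))
# {'text': 'HELLO, LANGGRAPH'}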

This example is perfect for learning LangGraph’s basics and can be extended for more complex use cases, like customer support or research bots. To get started with LangGraph, see Introduction to LangGraph.


What You’ll Build

Our chatbot will:

  1. Accept a user’s message (e.g., “Tell me about stars”).
  2. Store the message in conversation history.
  3. Generate a response using an AI model, considering the history for context.
  4. Add the response to the history and return it.
  5. Continue the conversation, maintaining context across messages.

We’ll use LangGraph to structure the workflow and LangChain’s tools for memory and AI integration.


Prerequisites

Before we start, ensure you have:

  • Python 3.8+: Installed and running. Check with python --version.
  • LangGraph and LangChain: Installed via pip.
  • OpenAI API Key: For the language model (or use a free model from Hugging Face).
  • Virtual Environment: To manage dependencies.

Install the required packages:

pip install langgraph langchain langchain-openai python-dotenv

Set up your OpenAI API key in a .env file:

echo "OPENAI_API_KEY=your-api-key-here" > .env

For setup details, see Install and Setup and Security and API Keys.
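
If you want to confirm the key is actually being picked up before moving on, a quick sanity check (assuming the .env file sits in your working directory) looks like this:

from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the current directory
if os.getenv("OPENAI_API_KEY"):
    print("API key loaded")
else:
    print("API key missing - check your .env file")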


Building the Simple Chatbot

Let’s create a LangGraph workflow for the chatbot. We’ll define the state, nodes, edges, and graph, then run it to chat with the AI.

Step 1: Define the State

The state is a shared data structure that holds the conversation’s context. We’ll track the user’s input, the AI’s response, and the conversation history.

from typing import TypedDict
from langchain_core.messages import HumanMessage, AIMessage

class State(TypedDict):
    user_input: str             # Current user message
    response: str               # AI’s response
    conversation_history: list  # List of HumanMessage and AIMessage objects

The conversation_history will store messages to provide context for responses. Learn more at State Management.

Step 2: Create Nodes

We’ll use two nodes:

  • process_input: Adds the user’s message to the conversation history.
  • generate_response: Uses an AI model to generate a response based on the history.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
import logging

# Setup logging for debugging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Node 1: Process user input
def process_input(state: State) -> State:
    logger.info(f"Processing input: {state['user_input']}")
    if not state["user_input"]:
        logger.error("Empty user input")
        raise ValueError("User input is required")
    state["conversation_history"].append(HumanMessage(content=state["user_input"]))
    logger.debug(f"Updated history: {state['conversation_history']}")
    return state

# Node 2: Generate response
def generate_response(state: State) -> State:
    logger.info("Generating response")
    try:
        llm = ChatOpenAI(model="gpt-3.5-turbo")
        # Create a prompt with conversation history
        template = PromptTemplate(
            input_variables=["history"],
            template="You are a friendly chatbot. Respond to the conversation:\n{history}"
        )
        history_str = "\n".join([f"{msg.type}: {msg.content}" for msg in state["conversation_history"]])
        chain = template | llm
        response = chain.invoke({"history": history_str}).content
        state["response"] = response
        state["conversation_history"].append(AIMessage(content=response))
        logger.debug(f"Response: {response}")
    except Exception as e:
        logger.error(f"Response error: {str(e)}")
        state["response"] = f"Error: {str(e)}"
    return state

  • process_input: Validates the input and adds it to conversation_history.
  • generate_response: Uses the history to generate a context-aware response and adds it to the history.

For more on AI integration, see OpenAI Integration.
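
If you want more control over the model’s behavior, ChatOpenAI accepts optional parameters; as a hedged example, you might lower the temperature for more predictable replies (the values below are illustrative, not recommendations):

llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.3,  # lower = more deterministic replies
    max_tokens=256,   # cap the length of each response
)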

Step 3: Define Edges

The workflow is simple: process the input, then generate a response, and end. We’ll use direct edges:

  • From process_input to generate_response.
  • From generate_response to the end.

Step 4: Build the Workflow

The graph ties nodes and edges together:

from langgraph.graph import StateGraph, END

# Build the graph
graph = StateGraph(State)
graph.add_node("process_input", process_input)
graph.add_node("generate_response", generate_response)
graph.add_edge("process_input", "generate_response")
graph.add_edge("generate_response", END)
graph.set_entry_point("process_input")

# Compile the graph
app = graph.compile()
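
If you want to sanity-check the wiring before running anything, the compiled app can render its own structure. As a hedged aside, this relies on the optional grandalf package (pip install grandalf):

# Optional: print an ASCII diagram of the graph's nodes and edges
app.get_graph().print_ascii()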

Step 5: Run the Chatbot

Test the chatbot with a single message:

from dotenv import load_dotenv
import os

load_dotenv()

# Run the workflow
try:
    result = app.invoke({
        "user_input": "Tell me about stars",
        "response": "",
        "conversation_history": []
    })
    print("Chatbot:", result["response"])
except Exception as e:
    logger.error(f"Workflow error: {str(e)}")

Example Output:

Chatbot: Stars are massive, glowing balls of gas, mostly hydrogen and helium, that shine through nuclear fusion in their cores. They vary in size, temperature, and brightness, forming constellations in the night sky. Some, like our Sun, are stable for billions of years, while others end in spectacular supernovae. Fascinating, right?

Step 6: Simulate a Conversation

To make the chatbot interactive, create a loop to handle multiple messages:

# Initialize state
state = {
    "user_input": "",
    "response": "",
    "conversation_history": []
}

# Interactive loop
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    state["user_input"] = user_input
    result = app.invoke(state)
    print("Chatbot:", result["response"])
    state = result  # Update state with new history

Example Interaction:

You: Tell me about stars
Chatbot: Stars are massive, glowing balls of gas, mostly hydrogen and helium, that shine through nuclear fusion. They vary in size and brightness. Want to know about a specific type of star?
You: What’s a supernova?
Chatbot: A supernova is a massive explosion that occurs when a star reaches the end of its life cycle, either by running out of fuel or gaining mass from a companion star. It can outshine entire galaxies briefly and leave behind a neutron star or black hole. Cool, huh?
You: exit

What’s Happening?

  • The state persists the conversation_history, allowing the chatbot to reference past messages.
  • Nodes process the input and generate context-aware responses.
  • Edges create a simple linear flow, but the state’s history adds conversational depth.
  • The workflow is robust, with logging and error handling for reliability.

For advanced conversational flows, see Memory Integration.
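
As one concrete direction, LangGraph ships checkpointers that persist state between invocations for you, keyed by a thread_id, so you don’t have to carry the state dict through a loop yourself. A minimal sketch (the thread_id value is arbitrary):

from langgraph.checkpoint.memory import MemorySaver

# Compile with an in-memory checkpointer instead of the plain compile() above
app = graph.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "demo-chat"}}

app.invoke({"user_input": "Tell me about stars",
            "response": "", "conversation_history": []}, config)
# On the same thread_id the saved history is restored automatically,
# so later turns only need the new input
result = app.invoke({"user_input": "What's a supernova?"}, config)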


Debugging Common Issues

If the chatbot misbehaves, try these debugging tips:

  • Empty Response: Check if OPENAI_API_KEY is set correctly. See Security and API Keys.
  • No History: Log conversation_history in process_input to ensure messages are added. Check Graph Debugging.
  • AI Errors: Verify the prompt in generate_response and handle API errors with try-except blocks.
  • Workflow Stops: Ensure edges are correctly defined in the graph.
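
When none of those reveal the problem, stepping through the graph helps. LangGraph’s stream interface yields each node’s output as it runs, so you can see exactly where things go wrong (a minimal sketch using the app compiled above):

# Watch the graph execute step by step; "updates" yields each node's output
for step in app.stream({"user_input": "Tell me about stars",
                        "response": "",
                        "conversation_history": []},
                       stream_mode="updates"):
    print(step)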

Enhancing the Chatbot

You can extend this simple chatbot with LangChain features.

For example, add a node to fetch real-time data with Web Research Chain to answer questions about current events.

To make it production-ready, deploy it as an API with Deploying Graphs.
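
As a hedged sketch of what that could look like with FastAPI (not installed above, and per-user sessions are ignored for brevity):

from fastapi import FastAPI
from pydantic import BaseModel

api = FastAPI()

class ChatRequest(BaseModel):
    message: str

@api.post("/chat")
def chat(req: ChatRequest):
    # Stateless for simplicity: each request starts a fresh conversation
    result = app.invoke({"user_input": req.message,
                         "response": "",
                         "conversation_history": []})
    return {"response": result["response"]}

# Run with: uvicorn main:api --reload  (assuming this file is main.py)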


Best Practices for Building Chatbots

  • Keep Nodes Simple: Each node should handle one task (e.g., input processing, response generation). See Workflow Design.
  • Validate State: Check for empty inputs or missing history to avoid errors. Check State Management.
  • Log for Debugging: Use logging to trace state and node issues. See Graph Debugging.
  • Limit History: Trim conversation_history to avoid token limits in the AI model, as sketched after this list. Check Token Limit Handling.
  • Test Conversations: Try diverse inputs to ensure context-awareness. See Best Practices.
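
As one hedged way to implement the history cap above (the window of 10 messages is arbitrary; tune it for your model’s context limit):

MAX_MESSAGES = 10  # arbitrary window size

def trim_history(state: State) -> State:
    # Keep only the most recent messages before generating a response
    state["conversation_history"] = state["conversation_history"][-MAX_MESSAGES:]
    return state

You could add this as a node between process_input and generate_response, or call it at the top of generate_response.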

Conclusion

Building a simple chatbot with LangGraph is a fantastic way to dive into stateful, graph-based AI workflows. By structuring the conversation with nodes, edges, and a persistent state, you’ve created an AI that listens, remembers, and responds thoughtfully. This example is just the start—LangGraph’s flexibility lets you add tools, dynamic flows, or agent logic to create even smarter applications.

To begin, follow Install and Setup and try this chatbot. For more, explore Core Concepts or advanced projects like Customer Support Example. For inspiration, check real-world applications at Best LangGraph Uses. With LangGraph, your chatbot is ready to chat and charm!
