Looping and Branching in LangGraph: Creating Dynamic AI Workflows

Imagine an AI that can retry tasks until they’re perfect, or choose different paths based on what’s happening—like a smart assistant that adapts on the fly. That’s the power of looping and branching in LangGraph, a versatile library from the LangChain team. These techniques allow LangGraph’s graph-based workflows to handle complex, dynamic scenarios, making it ideal for applications like customer support bots that persist until an issue is resolved or research agents that pivot based on results. In this beginner-friendly guide, we’ll explore how looping and branching work in LangGraph, how to implement them, and how they enhance your AI workflows. With clear examples and a conversational tone, you’ll be ready to build adaptive AI, even if you’re new to coding!


What Are Looping and Branching in LangGraph?

In LangGraph, workflows are structured as a graph, where tasks (nodes) are connected by paths (edges). Looping and branching are techniques that make these workflows dynamic:

  • Looping: Repeating a task or set of tasks until a condition is met, like retrying a poem generation until it’s long enough.
  • Branching: Choosing different paths in the workflow based on conditions, like deciding whether to end a conversation or fetch more data.

These are enabled by conditional edges, which use the workflow’s state (a shared data structure) to decide the next step. This flexibility is what makes LangGraph perfect for complex, adaptive applications.

Key points:

  • Looping: Cycles back to previous nodes for retries or iterations.
  • Branching: Directs the workflow to different nodes based on logic.
  • State-Driven: Decisions rely on the state’s data, like user inputs or task outcomes.

To get started with LangGraph, see Introduction to LangGraph.


How Looping and Branching Work

Looping and branching are implemented using conditional edges in LangGraph’s graph. Here’s how they fit into the workflow:

  1. Nodes: Perform tasks (e.g., generating text, checking results) and update the state.
  2. State: Holds data (like task outputs or flags) that informs decisions.
  3. Edges:
    • Direct Edges: Fixed connections between nodes for linear flow.
    • Conditional Edges: Dynamic connections that choose the next node based on a condition checked against the state.

  4. Decision Function: A function that evaluates the state and returns the next node’s name, enabling looping or branching.

Looping occurs when a conditional edge points back to an earlier node, creating a cycle. Branching happens when a conditional edge selects from multiple possible nodes, creating divergent paths.
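Before wiring up a real graph, it helps to see this dispatch pattern in miniature. The sketch below is a toy model of the idea, not LangGraph’s actual internals: each node updates the state, and a per-node decision function returns the name of the next node, with "end" stopping the run.

```python
# Toy model of graph execution: nodes transform state, decision
# functions pick the next node. "end" terminates the run.
def run(nodes, edges, entry, state):
    current = entry
    while current != "end":
        state = nodes[current](state)       # run the node
        current = edges[current](state)     # pick the next node from state
    return state

# Two toy nodes: one does work, one checks whether we're finished.
nodes = {
    "work": lambda s: {**s, "count": s["count"] + 1},
    "check": lambda s: {**s, "done": s["count"] >= 3},
}
edges = {
    "work": lambda s: "check",                          # direct edge
    "check": lambda s: "end" if s["done"] else "work",  # conditional edge (loop)
}

final = run(nodes, edges, "work", {"count": 0, "done": False})
print(final["count"])  # 3 — the loop ran three times before ending
```

The `check -> work` edge is a loop; pointing `check` at one of several different nodes instead would be a branch. LangGraph’s `add_conditional_edges` gives you exactly this mapping, plus state management and compilation.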

For a deeper look at nodes and edges, check Nodes and Edges.


Implementing Looping: A Poem-Writing Bot

Let’s build a poem-writing bot that loops to retry generating a poem until it meets a quality threshold, demonstrating looping in action.

The Goal

The bot:

  1. Takes a topic (e.g., “stars”).
  2. Generates a poem using an AI model.
  3. Checks if the poem is long enough (>50 characters).
  4. Loops back to retry if the poem is too short, up to three attempts.

Step 1: Define the State

The state tracks the topic, poem, quality, and attempt count:

from typing import TypedDict

class State(TypedDict):
    topic: str         # e.g., "stars"
    poem: str          # Generated poem
    is_good: bool      # True if poem meets criteria
    attempt_count: int # Number of attempts

Step 2: Create Nodes

Nodes handle poem generation and quality checking:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate

# Node 1: Generate a poem
def write_poem(state):
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    template = PromptTemplate(input_variables=["topic"], template="Write a short poem about {topic}.")
    chain = template | llm
    poem = chain.invoke({"topic": state["topic"]}).content
    state["poem"] = poem
    state["is_good"] = False
    state["attempt_count"] += 1
    return state

# Node 2: Check poem quality
def check_poem(state):
    state["is_good"] = len(state["poem"]) > 50
    return state

Step 3: Implement Looping with Conditional Edges

A decision function checks the state to decide whether to loop or end:

def decide_next(state):
    if state["is_good"] or state["attempt_count"] >= 3:
        return "end"
    return "write_poem"
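A decision function is just a plain function of the state, so you can sanity-check it with hand-built dictionaries before running the graph, no LLM calls needed (the function is repeated here so the snippet runs on its own):

```python
# The poem bot's decision function, tested in isolation.
def decide_next(state):
    if state["is_good"] or state["attempt_count"] >= 3:
        return "end"
    return "write_poem"

print(decide_next({"is_good": False, "attempt_count": 1}))  # write_poem: retry
print(decide_next({"is_good": True, "attempt_count": 1}))   # end: poem passed
print(decide_next({"is_good": False, "attempt_count": 3}))  # end: out of attempts
```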

Step 4: Build the Workflow

The graph connects nodes with edges, including a conditional edge for looping:

from langgraph.graph import StateGraph, END

# Build the graph
graph = StateGraph(State)
graph.add_node("write_poem", write_poem)
graph.add_node("check_poem", check_poem)
graph.add_edge("write_poem", "check_poem")
graph.add_conditional_edges("check_poem", decide_next, {
    "end": END,
    "write_poem": "write_poem"
})
graph.set_entry_point("write_poem")

# Run
app = graph.compile()
result = app.invoke({
    "topic": "stars",
    "poem": "",
    "is_good": False,
    "attempt_count": 0
})
print(result["poem"])

What’s Happening?

  • State: Tracks the topic, poem, quality, and attempts.
  • Nodes: write_poem generates a poem; check_poem evaluates it.
  • Edges: A direct edge from write_poem to check_poem; a conditional edge loops back to write_poem if is_good is False and attempts are under three.
  • Looping: The workflow retries poem generation until the poem is long enough or three attempts are made.
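To watch the retry loop without an OpenAI key, you can stub the generation step and drive the same loop logic by hand. The stub poems below are made up so the first two attempts fail the 50-character check and the third passes:

```python
# Dry run of the poem loop with a stubbed generator (no API needed).
fake_poems = [
    "Short.",
    "Still short.",
    "Twinkling stars above the silent hills tonight, burning bright.",
]

def write_poem_stub(state):
    state["poem"] = fake_poems[state["attempt_count"]]
    state["attempt_count"] += 1
    return state

def check_poem(state):
    state["is_good"] = len(state["poem"]) > 50
    return state

state = {"topic": "stars", "poem": "", "is_good": False, "attempt_count": 0}
while True:  # same loop the conditional edge creates
    state = write_poem_stub(state)
    state = check_poem(state)
    if state["is_good"] or state["attempt_count"] >= 3:
        break

print(state["attempt_count"])  # 3 — two failures, then a pass
print(state["is_good"])        # True
```

Swapping `write_poem_stub` for the real LLM node gives you back the original workflow; only the generation step changes.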

Try a similar project with Simple Chatbot Example.


Implementing Branching: A Customer Support Bot

Now, let’s build a customer support bot that uses branching to choose different paths based on the user’s issue, demonstrating branching in action.

The Goal

The bot:

  1. Asks for the user’s problem.
  2. Classifies the issue as “simple” (e.g., printer offline) or “complex” (e.g., hardware failure).
  3. For simple issues, suggests a quick fix; for complex issues, queries a database for detailed solutions.
  4. Checks if the solution worked, looping back if needed, up to three attempts.

Step 1: Define the State

The state tracks the issue, issue type, solution, resolution, and attempts:

class State(TypedDict):
    issue: str                  # e.g., "Printer won't print"
    issue_type: str             # "simple" or "complex"
    solution: str               # Suggested fix
    is_resolved: bool           # True if fixed
    conversation_history: list   # List of messages
    attempt_count: int          # Number of attempts

Step 2: Create Nodes

Nodes handle input, classification, solutions, and resolution:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, AIMessage

# Mock database tool
@tool
def query_printer_database(issue: str) -> str:
    return "HP DeskJet 2755: Check firmware update"

# Node 1: Process issue
def process_issue(state: State) -> State:
    state["conversation_history"].append(HumanMessage(content=state["issue"]))
    state["attempt_count"] = 0
    return state

# Node 2: Classify issue
def classify_issue(state: State) -> State:
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    template = PromptTemplate(
        input_variables=["issue"],
        template="Classify the issue as 'simple' or 'complex': {issue}"
    )
    chain = template | llm
    issue_type = chain.invoke({"issue": state["issue"]}).content.lower()
    state["issue_type"] = "simple" if "simple" in issue_type else "complex"
    return state

# Node 3: Suggest simple fix
def suggest_simple_fix(state: State) -> State:
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    template = PromptTemplate(
        input_variables=["issue", "history"],
        template="For issue: {issue}\nHistory: {history}\nSuggest a simple fix."
    )
    history_str = "\n".join([f"{msg.type}: {msg.content}" for msg in state["conversation_history"]])
    chain = template | llm
    solution = chain.invoke({"issue": state["issue"], "history": history_str}).content
    state["solution"] = solution
    state["conversation_history"].append(AIMessage(content=solution))
    state["attempt_count"] += 1
    return state

# Node 4: Suggest complex fix
def suggest_complex_fix(state: State) -> State:
    # Tools created with @tool are called via .invoke(), not directly
    db_result = query_printer_database.invoke(state["issue"])
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    template = PromptTemplate(
        input_variables=["issue", "db_result", "history"],
        template="For issue: {issue}\nDatabase: {db_result}\nHistory: {history}\nSuggest a detailed fix."
    )
    history_str = "\n".join([f"{msg.type}: {msg.content}" for msg in state["conversation_history"]])
    chain = template | llm
    solution = chain.invoke({
        "issue": state["issue"],
        "db_result": db_result,
        "history": history_str
    }).content
    state["solution"] = solution
    state["conversation_history"].append(AIMessage(content=solution))
    state["attempt_count"] += 1
    return state

# Node 5: Check resolution
def check_resolution(state: State) -> State:
    # Mock check for the demo: treat any solution mentioning "ink" as resolved.
    # A real bot would ask the user whether the fix worked.
    state["is_resolved"] = "ink" in state["solution"].lower()
    if not state["is_resolved"]:
        state["conversation_history"].append(HumanMessage(content="That didn't work"))
    return state

Step 3: Implement Branching with Conditional Edges

Decision functions enable branching and looping:

def decide_fix_type(state):
    return "suggest_simple_fix" if state["issue_type"] == "simple" else "suggest_complex_fix"

def decide_next(state):
    if state["is_resolved"] or state["attempt_count"] >= 3:
        return "end"
    return "classify_issue"  # Loop back to reclassify or try another fix
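As with the poem bot, both functions can be exercised with hand-built states before the graph ever runs (repeated here so the snippet is standalone):

```python
# Checking the branch decision with minimal fake states (no LLM call).
def decide_fix_type(state):
    return "suggest_simple_fix" if state["issue_type"] == "simple" else "suggest_complex_fix"

print(decide_fix_type({"issue_type": "simple"}))   # suggest_simple_fix
print(decide_fix_type({"issue_type": "complex"}))  # suggest_complex_fix
```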

Step 4: Build the Workflow

The graph connects nodes with direct and conditional edges:

# Build the graph
graph = StateGraph(State)
graph.add_node("process_issue", process_issue)
graph.add_node("classify_issue", classify_issue)
graph.add_node("suggest_simple_fix", suggest_simple_fix)
graph.add_node("suggest_complex_fix", suggest_complex_fix)
graph.add_node("check_resolution", check_resolution)
graph.add_edge("process_issue", "classify_issue")
graph.add_conditional_edges("classify_issue", decide_fix_type, {
    "suggest_simple_fix": "suggest_simple_fix",
    "suggest_complex_fix": "suggest_complex_fix"
})
graph.add_edge("suggest_simple_fix", "check_resolution")
graph.add_edge("suggest_complex_fix", "check_resolution")
graph.add_conditional_edges("check_resolution", decide_next, {
    "end": END,
    "classify_issue": "classify_issue"
})
graph.set_entry_point("process_issue")

# Run
app = graph.compile()
result = app.invoke({
    "issue": "Printer won't print",
    "issue_type": "",
    "solution": "",
    "is_resolved": False,
    "conversation_history": [],
    "attempt_count": 0
})
print(result["solution"])

What’s Happening?

  • State: Tracks issue, type, solution, resolution, history, and attempts.
  • Nodes: Handle input, classification, simple/complex fixes, and resolution checks.
  • Edges:
    • Direct: From process_issue to classify_issue, and from fix nodes to check_resolution.
    • Conditional (Branching): classify_issue branches to suggest_simple_fix or suggest_complex_fix.
    • Conditional (Looping): check_resolution loops back to classify_issue if unresolved.
  • Branching: The workflow chooses between simple or complex fixes based on issue_type.
  • Looping: It retries by reclassifying and suggesting new fixes if needed.

Build a similar bot with Customer Support Example.


Best Practices for Looping and Branching

To create robust, dynamic workflows, follow these tips:

  • Limit Loops: Use counters (like attempt_count) to prevent infinite loops. See Best Practices.
  • Simplify Conditions: Keep decision logic clear to avoid confusion. Check Graph Debugging.
  • Validate State: Ensure nodes check state values to handle edge cases. Explore State Management.
  • Test All Paths: Run scenarios to cover all branches and loops. See Workflow Design.
  • Use Memory: Store context for better decisions with Memory Integration.
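The “limit loops” tip can be captured as a small reusable pattern: keep the retry cap in one place and build decision functions that fail safe. This is an illustrative pattern, not a LangGraph API; `make_bounded_decider` and `MAX_ATTEMPTS` are names invented for this sketch.

```python
MAX_ATTEMPTS = 3  # single place to tune the retry budget

def make_bounded_decider(success_key, retry_node, counter_key="attempt_count"):
    """Build a decision function that ends on success or when the
    retry budget is spent, so a loop can never run forever."""
    def decide(state):
        if state.get(success_key) or state.get(counter_key, 0) >= MAX_ATTEMPTS:
            return "end"
        return retry_node
    return decide

# Same behavior as the poem bot's decide_next, built from the factory:
decide = make_bounded_decider("is_good", "write_poem")
print(decide({"is_good": False, "attempt_count": 0}))  # write_poem
print(decide({"is_good": False, "attempt_count": 3}))  # end
```

Using `state.get` with a default also covers the edge case where a counter was never initialized, which is one of the state-validation pitfalls mentioned above.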

Enhancing Looping and Branching with LangChain Features

LangGraph’s looping and branching pair naturally with LangChain’s building blocks, such as prompt templates for classification, tool integrations for lookups, and memory for context-aware decisions.

For example, add a node to fetch real-time data with Web Research Chain.


Conclusion

Looping and branching in LangGraph unlock the ability to create AI workflows that are dynamic, adaptive, and intelligent. By using conditional edges to loop back for retries or branch to different paths, you can build applications that handle complex tasks with ease, from persistent support bots to flexible research agents. With clear state management and thoughtful design, your workflows can think and pivot like never before.

To start, follow Install and Setup and try Simple Chatbot Example. For more, explore Core Concepts or real-world applications at Best LangGraph Uses. With looping and branching in LangGraph, your AI is ready to tackle any challenge with smarts and flexibility!
