Building a LangChain Streamlit App for Conversational AI: A Comprehensive Guide

Streamlit is a powerful Python framework for creating interactive web applications with minimal effort, making it an excellent choice for deploying conversational AI systems. By combining LangChain and OpenAI, you can build a user-friendly chatbot interface that leverages large language models (LLMs) and conversational memory.

Introduction to LangChain and Streamlit

LangChain is a versatile framework for developing LLM-powered applications, offering tools for conversational memory, chains, and integrations. Streamlit complements this by enabling rapid development of interactive web interfaces, ideal for chatbots or question-answering systems. Together with OpenAI’s API (e.g., gpt-3.5-turbo), they create a robust platform for conversational AI.

This tutorial assumes basic Python knowledge, with references to LangChain’s getting started guide, Streamlit’s documentation, and OpenAI’s API documentation.

Prerequisites for Building the Streamlit App

Ensure you have Python installed, an OpenAI API key, and the required packages:

pip install langchain openai streamlit langchain-openai

Step 1: Setting Up the Development Environment

Configure your environment by importing libraries and setting the OpenAI API key.

import os
import streamlit as st
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = st.secrets.get("OPENAI_API_KEY", "your-openai-api-key")

For security, store the API key in Streamlit’s secrets management (a .streamlit/secrets.toml file):

OPENAI_API_KEY = "your-openai-api-key"

Replace "your-openai-api-key" with your actual key. Environment variables and secrets enhance security, as explained in LangChain’s security and API keys guide and Streamlit’s secrets management. The imported modules are core to the app, detailed in LangChain’s core components overview.

Step 2: Initializing the Language Model

Initialize the OpenAI LLM using ChatOpenAI for chat-based interactions.

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0.7,
    max_tokens=512,
    top_p=0.9,
    frequency_penalty=0.2,
    presence_penalty=0.1
)

Key Parameters for ChatOpenAI

  • model_name: OpenAI model (e.g., gpt-3.5-turbo, gpt-4). gpt-3.5-turbo is cost-effective; gpt-4 offers advanced reasoning. See OpenAI’s model documentation.
  • temperature (0.0–2.0): Controls randomness. At 0.7, responses balance creativity and coherence. Lower (e.g., 0.3) for precision; higher (e.g., 1.2) for diversity.
  • max_tokens: Maximum response length (e.g., 512). Adjust for detail; higher values increase costs. See LangChain’s token limit handling.
  • top_p (0.0–1.0): Nucleus sampling. At 0.9, focuses on high-probability tokens.
  • frequency_penalty (–2.0–2.0): Discourages repetition. At 0.2, promotes variety.
  • presence_penalty (–2.0–2.0): Encourages new topics. At 0.1, mildly promotes novelty.

For more, see LangChain’s OpenAI integration guide. Alternatives include Anthropic or HuggingFace.
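
As a quick sanity check before wiring the model into a chain (assuming your API key is set), you can invoke it directly:

# One-off call to verify the model and key are working
reply = llm.invoke("In one sentence, what is LangChain?")
print(reply.content)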

Step 3: Implementing Conversational Memory

Use ConversationBufferMemory to maintain conversation context; it stores the full exchange history and injects it into each prompt.

memory = ConversationBufferMemory(
    memory_key="history",
    return_messages=True
)

Key Parameters for ConversationBufferMemory

  • memory_key: Variable name for history (default: "history"). Ensures chain access to context.
  • return_messages: If True, returns history as message objects; if False, as a string. True suits chat models.

ConversationBufferMemory retains the entire history. To cap context at the last k exchanges (balancing context and cost), swap in ConversationBufferWindowMemory, which accepts a k parameter (e.g., k=5).

For advanced memory, see LangChain’s memory integration guide or conversational flows.
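
To see what the memory actually stores, you can exercise it directly in a standalone sketch (the sample exchange is illustrative):

# Write one exchange into memory and read it back
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
print(memory.load_memory_variables({}))
# With return_messages=True, "history" holds HumanMessage/AIMessage objects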

Step 4: Building the Conversation Chain

Create a ConversationChain to integrate the LLM and memory.

# Cache the chain in session state so its memory survives Streamlit reruns
if "conversation" not in st.session_state:
    st.session_state.conversation = ConversationChain(
        llm=llm,
        memory=memory,
        verbose=True,
        prompt=None,
        output_key="response"
    )
conversation = st.session_state.conversation

Streamlit re-executes the whole script on every interaction; caching the chain in st.session_state prevents the memory from being recreated, and emptied, on each turn.

Key Parameters for ConversationChain

  • llm: The initialized LLM.
  • memory: The memory component.
  • verbose: If True, logs prompts for debugging.
  • prompt: Optional custom prompt. If None, uses LangChain’s default.
  • output_key: Output key (default: "response"). Useful for chain integration.

See LangChain’s introduction to chains or sequential chains.
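
In a plain test script (with the chain built directly, outside Streamlit), each predict call threads the accumulated history into the prompt:

# Each call appends the exchange to memory, so later turns see earlier ones
print(conversation.predict(input="My name is Sam."))
print(conversation.predict(input="What is my name?"))  # should recall "Sam"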

Step 5: Creating the Streamlit Interface

Build the Streamlit app to handle user input and display responses.

# Streamlit app setup
st.title("Conversational AI with LangChain")
st.write("Chat with an AI powered by LangChain and OpenAI!")

# Initialize session state for chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# User input
if prompt := st.chat_input("What would you like to ask?"):
    # Add user message to session state
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Generate response
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = conversation.predict(input=prompt)
            st.markdown(response)
            st.session_state.messages.append({"role": "assistant", "content": response})

This code creates a chat interface with a title, chat history display, and input field. st.session_state persists the conversation in the UI, while st.chat_message styles messages. The spinner indicates processing. For more UI options, see Streamlit’s chat elements.
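
A small optional addition: a sidebar button that resets both the visible transcript and the chain’s memory (the label and placement are illustrative):

# Optional: clear the displayed history and the LLM's conversational memory
if st.sidebar.button("Clear conversation"):
    st.session_state.messages = []
    st.session_state.conversation.memory.clear()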

Step 6: Testing the Streamlit App

Run the app locally:

streamlit run app.py

Visit http://localhost:8501 to interact with the chatbot. Example interaction:

User: Hi, recommend some sci-fi books.
Assistant: I suggest Dune by Frank Herbert for its epic world-building and The Martian by Andy Weir for a grounded take. Want more suggestions?
User: Tell me about Dune.
Assistant: Dune follows Paul Atreides on the desert planet Arrakis, exploring politics, religion, and ecology. Interested in its themes or adaptations?

The memory ensures context retention. For patterns, see LangChain’s conversational flows or simple chatbot example.
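
To confirm that the chain itself, not just the UI, is retaining context, you can temporarily dump the raw memory buffer to the terminal during development:

# Development aid: print what the chain's memory currently holds
print(st.session_state.conversation.memory.buffer)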

Step 7: Customizing the Streamlit App

Enhance the app with custom prompts, data integration, or tools.

7.1 Custom Prompt Engineering

Modify the prompt for a specific tone.

from langchain.prompts import PromptTemplate

custom_prompt = PromptTemplate(
    input_variables=["history", "input"],
    template="You are a friendly, knowledgeable assistant. Respond in a conversational tone based on the history:\n\n{history}\n\nUser: {input}\n\nAssistant: ",
    validate_template=True
)

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    prompt=custom_prompt,
    verbose=True
)

PromptTemplate Parameters:

  • input_variables: Variables in the template (e.g., ["history", "input"]).
  • template: Defines tone and structure.
  • validate_template: If True, validates variables.

See LangChain’s prompt templates guide.
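
To preview exactly what the model will see, render the template by hand (the sample values are illustrative):

# Fill the template with sample values to inspect the final prompt
preview = custom_prompt.format(
    history="User: Hi\nAssistant: Hello!",
    input="Recommend a sci-fi book."
)
print(preview)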

7.2 Integrating External Data

Add a knowledge base using RetrievalQA and FAISS.

from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

# Load and split documents
loader = TextLoader("knowledge_base.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
docs = text_splitter.split_documents(documents)

# Create vector store
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
vectorstore = FAISS.from_documents(docs, embeddings)

# Create RetrievalQA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3})
)

# Update Streamlit to use QA chain
if prompt := st.chat_input("Ask a question:"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        response = qa_chain({"query": prompt})["result"]
        st.markdown(response)
        st.session_state.messages.append({"role": "assistant", "content": response})

RetrievalQA Parameters:

  • llm: The LLM.
  • chain_type: Processing method (e.g., "stuff").
  • retriever: Retrieval mechanism.

See LangChain’s vector stores.
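
Rebuilding embeddings on every rerun is slow and costs API calls. One common pattern, sketched here with an assumed index directory name, is to persist the FAISS index to disk and reload it on later runs:

# Persist the index once, then reload it instead of re-embedding
# ("faiss_index" is an assumed directory; recent LangChain releases also
# require allow_dangerous_deserialization=True in load_local)
vectorstore.save_local("faiss_index")
vectorstore = FAISS.load_local("faiss_index", embeddings)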

7.3 Tool Integration

Give the model access to external tools, such as web search via SerpAPI.

from langchain.agents import initialize_agent, Tool
from langchain.utilities import SerpAPIWrapper

# Requires the google-search-results package and SERPAPI_API_KEY in the environment
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Fetch current information."
    )
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="zero-shot-react-description",
    verbose=True,
    max_iterations=3,
    early_stopping_method="force"
)

# In the Streamlit input handler, replace conversation.predict with:
response = agent.run(prompt)

initialize_agent Parameters:

  • tools: List of tools.
  • llm: The LLM.
  • agent: Agent type.
  • max_iterations: Caps the number of reasoning/tool steps (e.g., 3).
  • early_stopping_method: How the agent halts at the iteration cap; "force" stops and returns a default response.

See LangChain’s tools guide.
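
Tools are not limited to third-party services; any Python callable can be wrapped. A minimal illustrative example (the function and tool name are hypothetical; append it before calling initialize_agent):

# A hypothetical custom tool wrapping a plain Python function
def word_count(text: str) -> str:
    return f"{len(text.split())} words"

tools.append(
    Tool(
        name="WordCounter",
        func=word_count,
        description="Counts the words in a piece of text."
    )
)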

Step 8: Deploying the Streamlit App

Deploy to Streamlit Community Cloud:

  1. Push your code to a GitHub repository.
  2. Sign in to Streamlit Community Cloud with GitHub.
  3. Create a new app, selecting your repository and app.py.
  4. Configure secrets (e.g., OPENAI_API_KEY) in the app settings.
  5. Deploy the app.
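
Streamlit Community Cloud installs dependencies from a requirements.txt at the repository root; a minimal one for this app might look like this (versions unpinned for brevity):

streamlit
langchain
langchain-openai
openai
faiss-cpu              # only if you use the FAISS retrieval step
google-search-results  # only if you use SerpAPI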

For alternatives, use Heroku or AWS. See LangChain’s Flask API tutorial for deployment insights.

Step 9: Evaluating and Testing the App

Evaluate responses using LangChain’s evaluation metrics.

from langchain.evaluation import load_evaluator

# The "qa" evaluator grades a prediction against a reference answer
evaluator = load_evaluator("qa")
result = evaluator.evaluate_strings(
    prediction="Dune is a sci-fi novel.",
    input="What is Dune?",
    reference="Dune is a science fiction novel by Frank Herbert."
)
print(result)

load_evaluator Parameters:

  • evaluator_type: Metric type (e.g., "qa" for grading against a reference answer).
  • Criteria-based checks (e.g., relevance) use the separate "criteria" evaluator rather than a criteria argument to "qa".

Test the UI with diverse inputs. Use LangSmith for debugging, per LangChain’s LangSmith intro.
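
For rubric-style checks without a reference answer, a minimal sketch using the "criteria" evaluator:

# Rubric-style evaluation; no reference answer needed
criteria_evaluator = load_evaluator("criteria", criteria="relevance")
result = criteria_evaluator.evaluate_strings(
    prediction="Dune is a sci-fi novel by Frank Herbert.",
    input="What is Dune?"
)
print(result)  # includes a score and the grader's reasoning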

Advanced Features and Next Steps

Enhance the app further with streaming responses, summary- or vector-backed memory, additional agent tools, or retrieval over larger document collections.

See LangChain’s startup examples or GitHub repos.

Conclusion

Building a LangChain Streamlit app combines conversational AI with an intuitive web interface. This guide covered setup, implementation, customization, deployment, and evaluation, along with the key parameters at each step, so you can create engaging chatbots. Leverage LangChain’s chains, memory, and integrations together with Streamlit’s UI capabilities.

Explore agents, tools, or evaluation metrics. Debug with LangSmith. Happy coding!