MongoDB Atlas Integration in LangChain: Complete Working Process with API Key Setup and Configuration

The integration of MongoDB Atlas with LangChain, a leading framework for building applications with large language models (LLMs), enables developers to leverage MongoDB Atlas’s fully-managed cloud database for vector search, full-text search, and retrieval-augmented generation (RAG). This blog provides a comprehensive guide to the complete working process of MongoDB Atlas integration in LangChain as of May 15, 2025, including steps to obtain an API key, configure the environment, and integrate the database, along with core concepts, techniques, practical applications, advanced strategies, and a unique section on optimizing MongoDB Atlas usage. For a foundational understanding of LangChain, refer to our Introduction to LangChain Fundamentals.

What is MongoDB Atlas Integration in LangChain?

MongoDB Atlas integration in LangChain involves connecting MongoDB Atlas, a cloud-native document database, to LangChain’s ecosystem. This allows developers to store vector embeddings and text data in MongoDB documents, perform vector search, full-text search, or hybrid search, and implement RAG for tasks like semantic search, question-answering, and chatbot development. The integration is facilitated through LangChain’s MongoDBAtlasVectorSearch class, along with components like MongoDBAtlasFullTextSearchRetriever, MongoDBAtlasHybridSearchRetriever, and MongoDBGraphStore, which interface with MongoDB Atlas’s API. It is enhanced by LangChain’s PromptTemplate, chains (e.g., LLMChain), memory modules, and embeddings (e.g., OpenAIEmbeddings). It supports a wide range of applications, from AI-powered chatbots to enterprise knowledge bases. For an overview of chains, see Introduction to Chains.

Key characteristics of MongoDB Atlas integration include:

  • Hybrid Search Capabilities: Combines vector-based semantic search with BM25 full-text search for enhanced relevance.
  • Scalable Cloud Database: Leverages MongoDB Atlas’s fully-managed infrastructure on AWS, Azure, or GCP for high availability.
  • Contextual Intelligence: Enhances LLMs with external knowledge via efficient document retrieval and GraphRAG.
  • Graph Support: Enables GraphRAG by storing entities and relationships using MongoDBGraphStore.

MongoDB Atlas integration is ideal for applications requiring scalable, high-performance search and RAG, such as intelligent chatbots, semantic search engines, or knowledge-augmented AI systems, where MongoDB’s vector search and document model augment LLM capabilities.

Why MongoDB Atlas Integration Matters

LLMs often lack access to specific, up-to-date, or proprietary knowledge, limiting their ability to provide accurate responses. MongoDB Atlas addresses this by enabling efficient storage and retrieval of vector embeddings and text data, supporting semantic search, full-text search, and GraphRAG workflows. LangChain’s integration with MongoDB Atlas matters because it:

  • Simplifies Development: Provides a unified interface for MongoDB Atlas’s vector and text search capabilities, reducing complexity.
  • Enhances Relevance: Combines semantic and keyword search for precise, context-aware retrieval.
  • Optimizes Performance: Manages search queries and API calls to minimize latency and costs (see Token Limit Handling).
  • Scales Seamlessly: Leverages MongoDB Atlas’s cloud-native architecture for large-scale, production-ready applications.

Building on the vector search capabilities of the Elasticsearch Integration, MongoDB Atlas integration adds native vector search, GraphRAG, and a document-oriented model, making it a natural fit for LangChain’s data-aware workflows.

Steps to Get a MongoDB Atlas API Key

To integrate MongoDB Atlas with LangChain, you need a MongoDB Atlas cluster and an API key for programmatic access. Follow these steps to obtain an API key and set up a cluster:

  1. Create a MongoDB Atlas Account:
    • Visit MongoDB Atlas’s website or the MongoDB Atlas Console.
    • Sign up with an email address, Google, or another supported method, or log in if you already have an account.
    • Verify your email and complete any required account setup steps.
  2. Set Up a MongoDB Atlas Cluster:
    • In the Atlas Console, create a new project or select an existing one.
    • Click “Build a Cluster” or “Create a Cluster”:
      • Choose a tier (e.g., M0 Free Tier for testing, or a paid tier like M10 for production).
      • Select a cloud provider (e.g., AWS, Azure, GCP) and region (e.g., US-East-1).
      • Name the cluster (e.g., “LangChainAtlas”).
      • Configure settings (e.g., MongoDB version 6.0.11, 7.0.2, or later for vector search support).
    • Click “Create Cluster” to deploy. The cluster may take a few minutes to provision.
    • Ensure your IP address is added to the project’s IP Access List under “Security” > “Network Access” to allow connections.
  3. Generate an API Key:
    • In the Atlas Console, navigate to “Organization” > “Access Manager” > “API Keys.”
    • Click “Create API Key” or a similar option.
    • Name the key (e.g., “LangChainIntegration”) and assign appropriate permissions (e.g., “Organization Member” or “Project Data Access Read/Write”).
    • Copy the Public Key and Private Key immediately, as the private key will not be displayed again.
    • Note the Organization ID or Project ID from the console, as they may be required for API authentication.
  4. Secure the API Key:
    • Store the public key, private key, and connection string securely in a password manager or encrypted file.
    • Avoid hardcoding credentials in your code or sharing them publicly (e.g., in Git repositories).
    • Use environment variables (see configuration below) to access credentials in your application.
  5. Verify Cluster and API Access:
    • Confirm your cluster is running MongoDB version 6.0.11, 7.0.2, or later (required for vector search) under “Database Deployments.”
    • Obtain the cluster’s Connection String (SRV format) from “Database Deployments” > “Connect” > “Drivers”:
      • Example: mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority
    • Test the connection and API key with a simple MongoDB client call:
      from pymongo import MongoClient
      client = MongoClient("mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/")
      print(client.list_database_names())
    • Ensure no authentication or connection errors occur.

Note for Local MongoDB: If using a local MongoDB instance instead of Atlas, install MongoDB Community Edition and configure it to run on localhost:27017 (default). No API key is required, but vector search is only supported in Atlas. See MongoDB’s installation guide for details.

Configuration for MongoDB Atlas Integration

Proper configuration ensures secure and efficient use of MongoDB Atlas with LangChain. Follow these steps for MongoDB Atlas (adapt for local MongoDB as noted):

  1. Install Required Libraries:
    • Install LangChain, MongoDB Atlas, and embedding dependencies using pip:
    • pip install langchain langchain-mongodb pymongo langchain-openai python-dotenv
    • Ensure you have Python 3.8+ installed. The langchain-openai package is used for embeddings in this example, but you can use other embeddings (e.g., HuggingFaceEmbeddings).
  2. Set Up Environment Variables:
    • Store the MongoDB Atlas connection string, API key (if using API authentication), and embedding API key in environment variables.
    • On Linux/Mac, add to your shell configuration (e.g., ~/.bashrc or ~/.zshrc):
    • export MONGODB_ATLAS_URI="mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority"
      export OPENAI_API_KEY="your-openai-api-key"
    • On Windows, set the variables via Command Prompt or PowerShell:
    • set MONGODB_ATLAS_URI=mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority
      set OPENAI_API_KEY=your-openai-api-key
    • Alternatively, use a .env file with the python-dotenv library:
    • pip install python-dotenv
    • Create a .env file in your project root:

      MONGODB_ATLAS_URI=mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority
      OPENAI_API_KEY=your-openai-api-key

    • Load the .env file in your Python script:

      from dotenv import load_dotenv
      load_dotenv()

    • For local MongoDB, set the URI to mongodb://localhost:27017 and omit the API key unless authentication is enabled.
  3. Configure LangChain with MongoDB Atlas:
    • Initialize a MongoDB Atlas client and connect it to LangChain’s MongoDBAtlasVectorSearch vector store:

      from pymongo import MongoClient
      from langchain_mongodb import MongoDBAtlasVectorSearch
      from langchain_openai import OpenAIEmbeddings
      import os

      # Initialize MongoDB client
      client = MongoClient(os.getenv("MONGODB_ATLAS_URI"))
      collection = client["langchain_db"]["test_collection"]

      # Initialize embeddings and vector store
      embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
      vector_store = MongoDBAtlasVectorSearch(
          collection=collection,
          embedding=embeddings,
          index_name="vector_index"
      )

    • For local MongoDB, use the local URI:

      client = MongoClient("mongodb://localhost:27017")
      collection = client["langchain_db"]["test_collection"]
  4. Create a Vector Search Index:
    • In the MongoDB Atlas UI, navigate to “Atlas Search” > “Create Search Index” for the langchain_db.test_collection namespace.
    • Select “Atlas Vector Search - JSON Editor” and paste the following index definition:
      {
        "fields": [
          {
            "type": "vector",
            "path": "embedding",
            "numDimensions": 1536,
            "similarity": "cosine"
          }
        ]
      }
    • Name the index vector_index and create it. Ensure the numDimensions matches your embedding model (e.g., 1536 for OpenAI’s text-embedding-3-small). Wait for the index to build (typically ~1 minute). A programmatic alternative is sketched after this list.
  5. Verify Configuration:
    • Test the setup with a simple vector store operation:
      from langchain_core.documents import Document

      doc = Document(page_content="Test document", metadata={"source": "test"})
      vector_store.add_documents([doc])
      results = vector_store.similarity_search("Test", k=1)
      print(results[0].page_content)
    • Ensure no authentication or connection errors occur and the document is retrieved correctly.
  6. Secure Configuration:
    • Avoid exposing the connection string or API key in source code or version control.
    • Use secure storage solutions (e.g., AWS Secrets Manager, Azure Key Vault) for production environments.
    • Rotate API keys periodically via the MongoDB Atlas Console.
    • For local MongoDB, secure the instance with authentication and network restrictions (e.g., firewall rules).
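
If you prefer to script index creation rather than use the Atlas UI, recent versions of langchain-mongodb expose a create_vector_search_index helper on the vector store. The sketch below assumes the client, collection, and embeddings from step 3 of this list; confirm the helper is available in your installed version before relying on it:

from pymongo import MongoClient
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings
import os

client = MongoClient(os.getenv("MONGODB_ATLAS_URI"))
collection = client["langchain_db"]["test_collection"]

vector_store = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
    index_name="vector_index"
)

# Creates the Atlas Vector Search index on the collection; dimensions must
# match the embedding model (1536 for text-embedding-3-small)
vector_store.create_vector_search_index(dimensions=1536)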

Complete Working Process of MongoDB Atlas Integration

The working process of MongoDB Atlas integration in LangChain enables efficient vector search, full-text search, hybrid search, and RAG by combining MongoDB Atlas’s database capabilities with LangChain’s LLM workflows. Below is a detailed breakdown of the workflow, incorporating setup and configuration:

  1. Set Up MongoDB Atlas and Embeddings:
    • Create a MongoDB Atlas cluster, generate an API key or connection string, and store them securely as environment variables (MONGODB_ATLAS_URI, OPENAI_API_KEY).
    • Configure an embedding model (e.g., OpenAI or Hugging Face) and create a vector search index on the target collection.
  2. Configure Environment:
    • Install required libraries and set up environment variables or .env file for credentials.
    • Verify the setup with a test vector store operation.
  3. Initialize LangChain Components:
    • LLM: Initialize an LLM (e.g., ChatOpenAI) for text generation.
    • Embeddings: Initialize an embedding model (e.g., OpenAIEmbeddings) for vector creation.
    • Vector Store: Initialize MongoDBAtlasVectorSearch with a MongoDB collection and embeddings.
    • Prompts: Define a PromptTemplate to structure inputs.
    • Chains: Set up chains (e.g., ConversationalRetrievalChain) for RAG workflows.
    • Memory: Use ConversationBufferMemory for conversational context (optional).
  4. Input Processing:
    • Capture the user’s query (e.g., “What is AI in healthcare?”) via a text interface, API, or application frontend.
    • Preprocess the input (e.g., clean, translate for multilingual support) to ensure compatibility.
  5. Document Embedding and Storage:
    • Load and split documents (e.g., PDFs, text files) into chunks using LangChain’s document loaders and text splitters.
    • Embed the chunks using the embedding model and upsert them into MongoDB Atlas’s collection with metadata (e.g., source, timestamp); a sketch follows this list.
  6. Vector Search:
    • Embed the user’s query using the same embedding model.
    • Perform a similarity search, full-text search, or hybrid search in MongoDB Atlas’s collection to retrieve the most relevant documents, optionally applying metadata filters.
  7. LLM Processing:
    • Combine the retrieved documents with the query in a prompt and send it to the LLM via a LangChain chain (e.g., ConversationalRetrievalChain).
    • The LLM generates a context-aware response based on the query and retrieved documents.
  8. Output Parsing and Post-Processing:
    • Extract the LLM’s response, optionally using output parsers (e.g., StructuredOutputParser) for structured formats like JSON.
    • Post-process the response (e.g., format, translate) to meet application requirements.
  9. Memory Management:
    • Store the query and response in a memory module to maintain conversational context.
    • Summarize history for long conversations to manage token limits.
  10. Error Handling and Optimization:
    • Implement retry logic and fallbacks for API failures or rate limits.
    • Cache responses, batch upserts, or optimize search queries to reduce API usage and computational overhead.
  11. Response Delivery:
    • Deliver the processed response to the user via the application interface, API, or frontend.
    • Use feedback (e.g., via LangSmith) to refine prompts, retrieval, or collection configurations.
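
To make step 5 concrete, here is a minimal sketch of loading and splitting a file before upserting it, assuming the vector_store from the configuration section and the langchain-community and langchain-text-splitters packages; the file path and chunk sizes are illustrative starting points:

from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load a local text file (hypothetical path) and split it into overlapping chunks
loader = TextLoader("healthcare_notes.txt")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# Embed each chunk and upsert it into the Atlas collection with its metadata
vector_store.add_documents(chunks)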

Practical Example of the Complete Working Process

Below is an example demonstrating the complete working process, including MongoDB Atlas setup, configuration, and integration for a conversational Q&A chatbot with RAG using LangChain:

# Step 1: Obtain and Secure API Key
# - Connection string obtained from MongoDB Atlas Console and stored in .env file
# - .env file content:
#   MONGODB_ATLAS_URI=mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/?retryWrites=true&w=majority
#   OPENAI_API_KEY=your-openai-api-key

# Step 2: Configure Environment
from dotenv import load_dotenv
load_dotenv()  # Load environment variables from .env

from pymongo import MongoClient
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain_core.documents import Document
import os
import time

# Step 3: Initialize LangChain Components
# Initialize MongoDB client and collection
client = MongoClient(os.getenv("MONGODB_ATLAS_URI"))
collection = client["langchain_db"]["test_collection"]

# Initialize embeddings, LLM, and vector store
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

vector_store = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=embeddings,
    index_name="vector_index"
)

# Step 4: Document Embedding and Storage
# Simulate document loading and embedding
documents = [
    Document(page_content="AI improves healthcare diagnostics through advanced algorithms.", metadata={"source": "healthcare"}),
    Document(page_content="AI enhances personalized care with data-driven insights.", metadata={"source": "healthcare"}),
    Document(page_content="Blockchain secures transactions with decentralized ledgers.", metadata={"source": "finance"})
]
vector_store.add_documents(documents)

# Cache for responses
cache = {}

# Step 5-10: Optimized Chatbot with Error Handling
def optimized_mongodb_atlas_chatbot(query, max_retries=3):
    # Serialize the message buffer so the cache key reflects recent history
    cache_key = f"query:{query}:history:{str(memory.buffer)[:50]}"
    if cache_key in cache:
        print("Using cached result")
        return cache[cache_key]

    for attempt in range(max_retries):
        try:
            # Step 6: Prompt Engineering
            # The combine-docs prompt must expose {context} so the retrieved documents are injected
            prompt_template = PromptTemplate(
                input_variables=["context", "question"],
                template="Context: {context}\nQuestion: {question}\nAnswer in 50 words based on the context:"
            )

            # Step 7: Vector Search and LLM Processing
            chain = ConversationalRetrievalChain.from_llm(
                llm=llm,
                retriever=vector_store.as_retriever(
                    # pre_filter uses MongoDB query syntax; "source" must be indexed as a filter field
                    search_kwargs={"k": 2, "pre_filter": {"source": {"$eq": "healthcare"}}}
                ),
                memory=memory,
                combine_docs_chain_kwargs={"prompt": prompt_template},
                verbose=True
            )

            # Step 8: Execute Chain
            result = chain.invoke({"question": query})["answer"]

            # Step 9: Memory Management (the chain persists the exchange to memory itself,
            # so a manual save_context call here would duplicate it)

            # Step 10: Cache result
            cache[cache_key] = result
            return result
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt == max_retries - 1:
                return "Fallback: Unable to process query."
            time.sleep(2 ** attempt)  # Exponential backoff

# Step 11: Response Delivery
query = "How does AI benefit healthcare?"
result = optimized_mongodb_atlas_chatbot(query)  # Simulated: "AI improves diagnostics and personalizes care."
print(f"Result: {result}\nMemory: {memory.buffer}")
# Output:
# Result: AI improves diagnostics and personalizes care.
# Memory: [HumanMessage(content='How does AI benefit healthcare?'), AIMessage(content='AI improves diagnostics and personalizes care.')]

Workflow Breakdown in the Example:

  • API Key: Stored the connection string and OpenAI API key in a .env file, loaded using python-dotenv.
  • Configuration: Installed required libraries, initialized MongoDB client, and set up MongoDBAtlasVectorSearch, ChatOpenAI, OpenAIEmbeddings, and memory.
  • Input: Processed the query “How does AI benefit healthcare?”.
  • Document Embedding: Embedded and upserted documents into MongoDB Atlas with metadata.
  • Vector Search: Performed similarity search with a metadata filter for relevant documents.
  • LLM Call: Invoked the LLM via ConversationalRetrievalChain for RAG.
  • Output: Parsed the response and logged it to memory.
  • Memory: Stored the query and response in ConversationBufferMemory.
  • Optimization: Cached results and implemented retry logic for stability.
  • Delivery: Returned the response to the user.

This example leverages the langchain-mongodb package (version 0.2.0, released April 2025) for seamless integration, as per recent LangChain documentation.

Practical Applications of MongoDB Atlas Integration

MongoDB Atlas integration enhances LangChain applications by enabling efficient vector search, full-text search, hybrid search, and GraphRAG. Below are practical use cases, supported by LangChain’s documentation and community resources:

1. Knowledge-Augmented Chatbots

Build chatbots that retrieve context from document sets for accurate, domain-specific responses. Try our tutorial on Building a Chatbot with OpenAI.

Implementation Tip: Use ConversationalRetrievalChain with MongoDBAtlasVectorSearch and LangChain Memory for contextual conversations.

2. Hybrid Search Engines

Create search systems combining semantic and full-text search for documents or products. Try our tutorial on Multi-PDF QA.

Implementation Tip: Use MongoDBAtlasHybridSearchRetriever with metadata filters for precise results.
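
As a minimal sketch of that tip, assuming the vector_store from earlier (the field name and values are illustrative): MongoDBAtlasVectorSearch accepts a pre_filter argument in MongoDB query syntax, and the filtered field must be declared with type "filter" in the vector search index definition.

# Pre-filtering narrows the candidate set before the vector search runs;
# "source" must be indexed as a "filter" field in the vector search index
results = vector_store.similarity_search(
    "laptop warranty coverage",
    k=3,
    pre_filter={"source": {"$eq": "products"}}
)
print([doc.metadata.get("source") for doc in results])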

3. GraphRAG Applications

Develop applications leveraging entity-relationship graphs for complex queries. See MongoDB’s GraphRAG guide for details.

Implementation Tip: Use MongoDBGraphStore with $graphLookup for entity-based retrieval.

4. Multilingual Q&A Systems

Support multilingual document retrieval with MongoDB Atlas’s vector search. See Multi-Language Prompts.

Implementation Tip: Use multilingual embedding models (e.g., intfloat/multilingual-e5-large) with MongoDBAtlasVectorSearch.
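
A hedged sketch of that tip, assuming the client from the configuration section and the langchain-huggingface and sentence-transformers packages (the collection and index names are illustrative): intfloat/multilingual-e5-large produces 1024-dimensional vectors, so the Atlas index must be defined with numDimensions set to 1024 rather than 1536.

from langchain_huggingface import HuggingFaceEmbeddings
from langchain_mongodb import MongoDBAtlasVectorSearch

# A multilingual model places queries and documents in different languages in one vector space
multilingual_embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-large")
multilingual_store = MongoDBAtlasVectorSearch(
    collection=client["langchain_db"]["multilingual_collection"],
    embedding=multilingual_embeddings,
    index_name="multilingual_vector_index"  # built with numDimensions: 1024
)
results = multilingual_store.similarity_search("¿Cómo mejora la IA la atención médica?", k=2)
print([doc.page_content for doc in results])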

5. Enterprise RAG Pipelines

Build RAG pipelines for large-scale knowledge bases with analytics. See Code Execution Chain for related workflows.

Implementation Tip: Use MongoDBAtlasSemanticCache for optimized performance in production environments.

Advanced Strategies for MongoDB Atlas Integration

To optimize MongoDB Atlas integration in LangChain, consider these advanced strategies, inspired by LangChain and MongoDB documentation:

1. Hybrid Search with Vector and Full-Text

Combine vector-based semantic search with BM25 full-text search for improved relevance using MongoDBAtlasHybridSearchRetriever.

Example:

from langchain_mongodb.retrievers import MongoDBAtlasHybridSearchRetriever

# search_index_name names an Atlas Search (full-text) index on the same collection;
# lower penalty values give a signal more influence in the fused ranking
retriever = MongoDBAtlasHybridSearchRetriever(
    vectorstore=vector_store,
    search_index_name="text_index",
    top_k=2,
    vector_penalty=50,
    fulltext_penalty=50
)
results = retriever.invoke("AI healthcare")
print([doc.page_content for doc in results])

This fuses the vector and full-text rankings with MongoDB Atlas’s Reciprocal Rank Fusion (RRF) algorithm; in langchain-mongodb the relative influence of each signal is tuned through the penalty parameters rather than explicit weights.
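
For intuition, standard RRF scores each document by summing the reciprocal of its rank across the individual result lists: score(d) = Σ 1 / (penalty + rank_i(d)), where rank_i(d) is the document’s position in the i-th list. The penalty constant (often around 60) damps the influence of lower-ranked results, so lowering the penalty for one signal effectively weights that signal more heavily in the fused ranking.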

2. GraphRAG with Entity Relationships

Implement GraphRAG using MongoDBGraphStore to store and query entity relationships.

Example:

from langchain_mongodb import MongoDBGraphStore
from langchain_openai import ChatOpenAI
from langchain_core.documents import Document
import os

# The graph store uses the LLM to extract entities and relationships from documents;
# exact constructor arguments and return types may vary by langchain-mongodb version
llm = ChatOpenAI(model="gpt-4")
graph_store = MongoDBGraphStore(
    connection_string=os.getenv("MONGODB_ATLAS_URI"),
    database_name="langchain_db",
    collection_name="graph_collection",
    entity_extraction_model=llm
)

# Extract and store entities/relationships from a sample document
graph_store.add_documents([
    Document(page_content="Artificial Intelligence is applied to Healthcare for diagnostics and personalized care.")
])

# Query the graph for related entities
response = graph_store.chat_response("Which technologies are applied to Healthcare?")
print(response)

This uses $graphLookup for GraphRAG, as supported by MongoDB Atlas’s graph capabilities.

3. Performance Optimization with Semantic Caching

Cache LLM responses using MongoDBAtlasSemanticCache to reduce redundant API calls, leveraging LangSmith for monitoring.

Example:

from langchain_mongodb import MongoDBAtlasSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
import os

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
set_llm_cache(MongoDBAtlasSemanticCache(
    connection_string=os.getenv("MONGODB_ATLAS_URI"),
    collection_name="cache_collection",
    database_name="langchain_db",
    embedding=embeddings
))
# Cached LLM call
llm = ChatOpenAI(model="gpt-4")
response = llm.invoke("What is AI?")
print(response.content)

This caches responses based on semantic similarity, optimizing performance.

Optimizing MongoDB Atlas Usage

Optimizing MongoDB Atlas usage is critical for cost efficiency, performance, and reliability, given the cloud-based pricing and rate limits. Key strategies include:

  • Caching Responses: Use MongoDBAtlasSemanticCache to store frequent query results, as shown in the caching example.
  • Batching Upserts: Use MongoDBAtlasVectorSearch.add_documents with optimized batch sizes (e.g., 100-500 documents) to minimize API calls, as recommended by MongoDB; a sketch follows this list.
  • Query Optimization: Apply metadata filters and pre-filtering to reduce search scope and improve latency, as shown in the hybrid search example.
  • Hybrid Search: Leverage MongoDBAtlasHybridSearchRetriever to balance precision and recall, reducing unnecessary queries.
  • Rate Limit Handling: Implement retry logic with exponential backoff to manage rate limit errors, as shown in the example.
  • Resource Management: Optimize cluster tier (e.g., M10 for production) and index settings (e.g., cosine similarity) to balance cost and performance.
  • Monitoring with LangSmith: Track API usage, latency, and errors to refine collection configurations, leveraging LangSmith’s observability features.
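
A minimal batching sketch for the upsert pattern mentioned above, assuming the vector_store and documents list from the earlier example (the batch size of 200 is an arbitrary point in the 100-500 range):

# Upsert in fixed-size batches so no single call carries the whole corpus
BATCH_SIZE = 200
for start in range(0, len(documents), BATCH_SIZE):
    vector_store.add_documents(documents[start:start + BATCH_SIZE])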

These strategies ensure cost-effective, scalable, and robust LangChain applications using MongoDB Atlas, as highlighted in recent tutorials and community resources.

Conclusion

MongoDB Atlas integration in LangChain, with a clear process for setting up a cluster, configuring the environment, and implementing the workflow, empowers developers to build scalable, search-augmented NLP applications. The complete working process—from setup to response delivery with hybrid search and RAG—ensures context-aware, high-quality outputs. The focus on optimizing MongoDB Atlas usage, through caching, batching, query optimization, and error handling, guarantees reliable performance as of May 15, 2025. Whether for chatbots, hybrid search engines, or GraphRAG pipelines, MongoDB Atlas integration is a powerful component of LangChain’s ecosystem, as evidenced by its widespread adoption in community tutorials and documentation.

To get started, follow the setup and configuration steps, experiment with the examples, and explore LangChain’s documentation. For practical applications, check out our LangChain Tutorials or dive into LangSmith Integration for observability. For further details, see MongoDB’s LangChain integration guide and the LangChain MongoDB documentation. With MongoDB Atlas integration, you’re equipped to build cutting-edge, search-powered AI applications.