You can integrate MongoDB with LangGraph to build AI agents and advanced RAG applications. This page provides an overview of the MongoDB LangGraph integration and how you can use MongoDB for agent state persistence, memory, and retrieval in your LangGraph workflows.
To build a sample AI agent that uses all of the components on this page, see the tutorial.
Note
For the JavaScript integration, see LangGraph JS/TS.
Background
LangGraph is a specialized framework within the LangChain ecosystem designed for building AI agents and complex multi-agent workflows. Graphs are the core components of LangGraph, representing the workflow of your agent. The MongoDB LangGraph integration enables the following capabilities:
MongoDB LangGraph Checkpointer: You can persist the state of your LangGraph agents in MongoDB, providing short-term memory.
MongoDB LangGraph Store: You can store and retrieve important memories for your LangGraph agents in a MongoDB collection, providing long-term memory.
Retrieval Tools: You can use the MongoDB LangChain integration to quickly create retrieval tools for your LangGraph workflows.
Integrating your LangGraph applications with MongoDB allows you to consolidate both retrieval capabilities and agent memory in a single database, simplifying your architecture and reducing operational complexity.
MongoDB LangGraph Checkpointer (Short-Term Memory)
The MongoDB LangGraph Checkpointer allows you to persist your agent's state in MongoDB to implement short-term memory. This feature enables human-in-the-loop, memory, time travel, and fault-tolerance for your LangGraph agents.
To install the package for this component:
pip install langgraph-checkpoint-mongodb
Usage
```python
from langgraph.checkpoint.mongodb import MongoDBSaver
from pymongo import MongoClient

# Connect to your MongoDB cluster
client = MongoClient("<connection-string>")

# Initialize the MongoDB checkpointer
checkpointer = MongoDBSaver(client)

# Instantiate the graph with the checkpointer
app = graph.compile(checkpointer=checkpointer)
```
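With a checkpointer attached, each invocation of the compiled graph is scoped to a thread. The following sketch shows the config you pass at invocation time; the `thread_id` value is illustrative:

```python
# Each conversation thread gets its own persisted state; the checkpointer
# loads and saves state keyed by this thread_id on every invocation.
config = {"configurable": {"thread_id": "session-1"}}

# Invoking the compiled graph with the config (assumes `app` from above):
# app.invoke({"messages": [("user", "Hello")]}, config=config)
```

Reusing the same `thread_id` resumes the stored conversation state; a new `thread_id` starts a fresh one.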
MongoDB LangGraph Store (Long-Term Memory)
The MongoDB LangGraph Store allows you to store and retrieve memories in a MongoDB collection, which enables long-term memory for your LangGraph agents. You can build agents that remember past interactions and use that information to inform future decisions.
To install the package for this component:
pip install langgraph-store-mongodb
Usage
```python
from langgraph.store.mongodb import MongoDBStore, create_vector_index_config
from langchain_voyageai import VoyageAIEmbeddings

# Optional vector search index configuration for the memory collection
index_config = create_vector_index_config(
    embed = VoyageAIEmbeddings(),
    dims = <dimensions>,
    fields = ["<field-name>"],
    filters = ["<filter-field-name>", ...],  # Optional list of fields that can be filtered during search
    # Other fields...
)

# Store memories in a MongoDB collection
with MongoDBStore.from_conn_string(
    conn_string = MONGODB_URI,
    db_name = "<database-name>",
    collection_name = "<collection-name>",
    index_config = index_config  # If specified, automatically embeds and indexes the field value
) as store:
    store.put(
        namespace=("user", "memories"),  # Namespace for the memories
        key=f"memory_{hash(content)}",   # Unique identifier for each memory
        value={"content": content}       # Document data that contains the memory content
    )

# Retrieve memories from the MongoDB collection
with MongoDBStore.from_conn_string(
    conn_string = MONGODB_URI,
    db_name = "<database-name>",
    collection_name = "<collection-name>",
    index_config = index_config  # If specified, uses vector search to retrieve memories; otherwise, uses metadata filtering
) as store:
    results = store.search(
        ("user", "memories"),
        query="<query-text>",
        limit=3
    )
    for result in results:
        print(result.value)

# To delete memories, use store.delete(namespace, key)
# To batch operations, use store.batch(ops)
```
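Note that the usage example above keys each memory with Python's built-in `hash()`, whose output for strings varies between processes. If you need keys that are stable across restarts, a deterministic alternative using only the standard library might look like this (the `memory_key` helper is illustrative, not part of the integration):

```python
import hashlib

def memory_key(content: str) -> str:
    # Derive a stable key from the memory content, so the same content
    # always maps to the same document across processes and restarts.
    return "memory_" + hashlib.sha256(content.encode("utf-8")).hexdigest()

key1 = memory_key("User prefers dark mode")
key2 = memory_key("User prefers dark mode")
# key1 == key2 on every run, unlike f"memory_{hash(content)}"
```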
| Parameter | Necessity | Description |
| --- | --- | --- |
| `conn_string` | Required | The connection string for your MongoDB cluster or local Atlas deployment. |
| `db_name` | Optional | The name of the database to use. It will be created if it doesn't exist. Defaults to `"checkpointing_db"`. |
| `collection_name` | Optional | The name of the collection to use. It will be created if it doesn't exist. Defaults to `"persistent-store"`. |
| `ttl_config` | Optional | TTL (Time To Live) configuration for the store. This configures automatic expiry of documents in the store. |
| `index_config` | Optional | Vector search index configuration, as shown in the usage example above. |
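The TTL configuration mentioned above can be expressed as a plain dictionary. The keys shown here (`default_ttl`, `refresh_on_read`) follow LangGraph's `TTLConfig` and are an assumption; check the package reference for your installed version:

```python
# Hypothetical sketch of a TTL configuration for the store.
# Assumption: keys follow LangGraph's TTLConfig TypedDict
# (default_ttl in minutes, refresh_on_read to extend expiry on reads).
ttl_config = {
    "default_ttl": 60,        # expire memories after 60 minutes
    "refresh_on_read": True,  # reset the clock whenever a memory is read
}
```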
Methods
| Method | Description |
| --- | --- |
| `put()` | Stores a single item in the store with the specified namespace, key, and value. |
| `search()` | Searches for items within a given namespace. If an `index_config` is specified, the `query` parameter uses vector search; otherwise, results are filtered by metadata. |
| `get()` | Retrieves a single item from the store. Optionally, you can refresh the item's TTL upon access. |
| `delete()` | Deletes a single item from the store, identified by its namespace and key. |
| `list_namespaces()` | Lists unique namespaces in the store. Allows filtering by a path prefix, suffix, and document depth. |
| `batch()` | Executes a sequence of operations in a single call. |
| `create_vector_index_config()` | Prepares a list of filter fields for MongoDB Vector Search indexing. |
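To illustrate how namespace filtering works in methods like `list_namespaces()`, here is a small, hypothetical sketch of tuple-prefix matching in plain Python (not the library's implementation):

```python
# Namespaces are tuples; a prefix matches when the namespace begins
# with the same elements. This mirrors the prefix filtering that
# list_namespaces() exposes.
def matches_prefix(namespace: tuple, prefix: tuple) -> bool:
    return namespace[:len(prefix)] == prefix

namespaces = [("user", "memories"), ("user", "settings"), ("org", "docs")]
user_namespaces = [ns for ns in namespaces if matches_prefix(ns, ("user",))]
# user_namespaces == [("user", "memories"), ("user", "settings")]
```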
Retrieval Tools
You can seamlessly use LangChain retrievers as tools in your LangGraph workflow to retrieve relevant data from MongoDB.
The MongoDB LangChain integration natively supports full-text search, vector search, hybrid search, and parent-document retrieval. For a complete list of retrieval methods, see MongoDB LangChain Retrievers.
Usage
To create a basic retrieval tool with MongoDB Vector Search and LangChain:
```python
from langchain.tools.retriever import create_retriever_tool
from langchain_mongodb.vectorstores import MongoDBAtlasVectorSearch
from langchain_voyageai import VoyageAIEmbeddings

# Instantiate the vector store
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    connection_string = "<connection-string>",        # MongoDB cluster URI
    namespace = "<database-name>.<collection-name>",  # Database and collection name
    embedding = VoyageAIEmbeddings(),                 # Embedding model to use
    index_name = "vector_index",                      # Name of the vector search index
    # Other optional parameters...
)

# Create a retrieval tool
retriever = vector_store.as_retriever()
retriever_tool = create_retriever_tool(
    retriever,
    "vector_search_retriever",                         # Tool name
    "Retrieve relevant documents from the collection"  # Tool description
)
```

To add the tool as a node in LangGraph:
Convert the tool into a node.
Add the node to the graph.
```python
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define the graph with a message-based state schema
workflow = StateGraph(MessagesState)

# Convert the retriever tool into a node
retriever_node = ToolNode([retriever_tool])

# Add the tool as a node in the graph
workflow.add_node("vector_search_retriever", retriever_node)

graph = workflow.compile()
```