You can integrate MongoDB Vector Search with Haystack to build custom applications with LLMs and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using MongoDB Vector Search with Haystack to perform semantic search on your data and build a RAG implementation. Specifically, you perform the following actions:
Set up the environment.
Create a MongoDB Vector Search index.
Store custom data in MongoDB.
Implement RAG by using MongoDB Vector Search to answer questions on your data.
Work with a runnable version of this tutorial as a Python notebook.
Background
Haystack is a framework for building custom applications with LLMs, embedding models, and vector search. By integrating MongoDB Vector Search with Haystack, you can use MongoDB as a vector database and use MongoDB Vector Search to implement RAG by retrieving semantically similar documents from your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with MongoDB.
Procedure
Prerequisites
To complete this tutorial, you must have the following:
One of the following MongoDB cluster types:
An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later. Ensure that your IP address is included in your Atlas project's access list.
A local Atlas deployment created using the Atlas CLI. To learn more, see Create a Local Atlas Deployment.
A MongoDB Community or Enterprise cluster with Search and Vector Search installed.
An OpenAI API Key. You must have an OpenAI account with credits available for API requests. To learn more about registering an OpenAI account, see the OpenAI API website.
A Voyage AI API Key. To create an account and API Key, see the Voyage AI website.
An environment to run interactive Python notebooks, such as Colab.
Set Up the Environment
Set up the environment for this tutorial.
Create an interactive Python notebook by saving a file with the .ipynb extension. This notebook allows you to run Python code snippets individually, and you'll use it to run the code in this tutorial.
To set up your notebook environment:
Install and import dependencies.
Run the following command:
pip install --quiet --upgrade mongodb-atlas-haystack voyage-embedders-haystack pymongo

Run the following code to import the required packages:
import os
from haystack import Pipeline, Document
from haystack.document_stores.types import DuplicatePolicy
from haystack.components.writers import DocumentWriter
from haystack.components.generators import OpenAIGenerator
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack_integrations.components.embedders.voyage_embedders import VoyageDocumentEmbedder, VoyageTextEmbedder
from haystack_integrations.document_stores.mongodb_atlas import MongoDBAtlasDocumentStore
from haystack_integrations.components.retrievers.mongodb_atlas import MongoDBAtlasEmbeddingRetriever
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel
Define environment variables.
Run the following code, replacing the placeholders with the following values:
Your Voyage AI API Key.
Your OpenAI API Key.
Your MongoDB cluster's connection string.
os.environ["VOYAGE_API_KEY"] = "<voyage-api-key>" os.environ["OPENAI_API_KEY"] = "<openai-api-key>" os.environ["MONGO_CONNECTION_STRING"]= "<connection-string>"
Note
Replace <connection-string> with the connection string for your Atlas cluster or local Atlas deployment.
If you're using an Atlas cluster, your connection string should use the following format:
mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
To learn more, see Connect to a Cluster via Drivers.
If you're using a local Atlas deployment, your connection string should use the following format:
mongodb://localhost:<port-number>/?directConnection=true
To learn more, see Connection Strings.
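Optionally, you can confirm that your connection string works before you continue. The following is a minimal sketch (not part of the original tutorial) that connects with PyMongo and pings your deployment:

from pymongo import MongoClient

# Connect using the connection string defined above, then ping the deployment
client = MongoClient(os.environ["MONGO_CONNECTION_STRING"])
client.admin.command("ping")
print("Successfully connected to your MongoDB deployment")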
Create the MongoDB Vector Search Index
In this section, you create the haystack_db database and test collection to store your custom data. Then, to enable vector search queries on your data, you create a MongoDB Vector Search index.
Create the haystack_db.test collection.
Run the following code to create your haystack_db database and test collection.
# Connect to your deployment
client = MongoClient(os.environ.get("MONGO_CONNECTION_STRING"))

# Create your database and collection
db_name = "haystack_db"
collection_name = "test"
database = client[db_name]
database.create_collection(collection_name)

# Define collection
collection = client[db_name][collection_name]
Define the MongoDB Vector Search index.
Run the following code to create an index of the vectorSearch type. The embedding field contains the embeddings that you'll create using Voyage AI's voyage-3-large embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.
# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1024,
                "similarity": "cosine"
            }
        ]
    },
    name="vector_index",
    type="vectorSearch"
)

collection.create_search_index(model=search_index_model)
The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.
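If you want to wait programmatically rather than manually, one option is to poll the index status with PyMongo's list_search_indexes helper. This is a minimal sketch, assuming the collection handle created above:

import time

# Poll the search index status until Atlas reports it as queryable
while True:
    indexes = list(collection.list_search_indexes("vector_index"))
    if indexes and indexes[0].get("queryable"):
        break
    time.sleep(5)

print("vector_index is ready to query")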
Store Custom Data in MongoDB
In this section, you instantiate MongoDB as a vector database, also called a document store. Then, you create vector embeddings from custom data and store these documents in a collection in MongoDB. Paste and run the following code snippets in your notebook.
Instantiate Atlas as a document store.
Run the following code to instantiate Atlas as a document store. This code establishes a connection to your Atlas cluster and specifies the following:
haystack_db and test as the Atlas database and collection used to store the documents.
vector_index as the index used to run semantic search queries.
document_store = MongoDBAtlasDocumentStore(
    database_name="haystack_db",
    collection_name="test",
    vector_search_index="vector_index",
    full_text_search_index="search_index"  # Declared but not used in this example
)
Load sample data on your Atlas cluster.
This code defines a few sample documents and runs a pipeline with the following components:
An embedder from Voyage AI to convert your documents into vector embeddings.
A document writer to populate your document store with the sample documents and their embeddings.
# Create some example documents
documents = [
    Document(content="My name is Jean and I live in Paris."),
    Document(content="My name is Mark and I live in Berlin."),
    Document(content="My name is Giorgio and I live in Rome."),
]

# Initializing a document embedder to convert text content into vectorized form.
doc_embedder = VoyageDocumentEmbedder()

# Setting up a document writer to handle the insertion of documents into the MongoDB collection.
doc_writer = DocumentWriter(document_store=document_store, policy=DuplicatePolicy.SKIP)

# Creating a pipeline for indexing documents. The pipeline includes embedding and writing documents.
indexing_pipe = Pipeline()
indexing_pipe.add_component(instance=doc_embedder, name="doc_embedder")
indexing_pipe.add_component(instance=doc_writer, name="doc_writer")

# Connecting the components of the pipeline for document flow.
indexing_pipe.connect("doc_embedder.documents", "doc_writer.documents")

# Running the pipeline with the list of documents to index them in MongoDB.
indexing_pipe.run({"doc_embedder": {"documents": documents}})
Calculating embeddings: 100%|██████████| 1/1 [00:00<00:00, 4.42it/s]
{'doc_embedder': {'meta': {'total_tokens': 32}}, 'doc_writer': {'documents_written': 3}}
Tip
After running the sample code, if you're using Atlas, you can verify your vector embeddings by navigating to the haystack_db.test namespace in the Atlas UI.
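You can also spot-check the embeddings from your notebook. The following is a minimal sketch (assuming the collection handle from earlier steps) that confirms the stored vectors have the 1024 dimensions your index definition expects:

# Retrieve one indexed document and check the length of its embedding
doc = collection.find_one({"embedding": {"$exists": True}})
print(len(doc["embedding"]))  # Expect 1024 for voyage-3-large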
Answer Questions on Your Data
This section demonstrates how to implement RAG in your application with MongoDB Vector Search and Haystack.
The following code defines and runs a pipeline with the following components:
The VoyageTextEmbedder embedder to create embeddings from your query.
The MongoDBAtlasEmbeddingRetriever retriever to retrieve embeddings from your document store that are similar to the query embedding.
A PromptBuilder that passes a prompt template to instruct the LLM to use the retrieved documents as context for your prompt.
The OpenAIGenerator generator to generate a context-aware response using an LLM from OpenAI.
In this example, you prompt the LLM with the sample query Where does Mark live?. The LLM generates an accurate, context-aware response from the custom data you stored in Atlas.
# Template for generating prompts that pass the retrieved documents to the LLM as context.
prompt_template = """
    You are an assistant allowed to use the following context documents.\nDocuments:
    {% for doc in documents %}
        {{ doc.content }}
    {% endfor %}

    \nQuery: {{query}}
    \nAnswer:
"""

# Setting up a retrieval-augmented generation (RAG) pipeline for generating responses.
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", VoyageTextEmbedder())

# Adding a component for retrieving related documents from MongoDB based on the query embedding.
rag_pipeline.add_component(instance=MongoDBAtlasEmbeddingRetriever(document_store=document_store, top_k=15), name="retriever")

# Building prompts based on retrieved documents to be used for generating responses.
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template, required_variables=["query", "documents"]))

# Adding a language model generator to produce the final text output.
rag_pipeline.add_component("llm", OpenAIGenerator())

# Connecting the components of the RAG pipeline to ensure proper data flow.
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "llm")

# Run the pipeline with a sample query
query = "Where does Mark live?"
result = rag_pipeline.run(
    {
        "text_embedder": {"text": query},
        "prompt_builder": {"query": query},
    }
)

print(result["llm"]["replies"][0])
Mark lives in Berlin.
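Because the pipeline is reusable, you can ask additional questions about your stored documents by rerunning it with a new query. For example:

query = "Where does Giorgio live?"
result = rag_pipeline.run(
    {
        "text_embedder": {"text": query},
        "prompt_builder": {"query": query},
    }
)

print(result["llm"]["replies"][0])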
Next Steps
MongoDB also provides the following developer resources: