
Get Started with the Haystack Integration

You can integrate MongoDB Vector Search with Haystack to build custom applications with LLMs and implement retrieval-augmented generation (RAG). This tutorial demonstrates how to start using MongoDB Vector Search with Haystack to perform semantic search on your data and build a RAG implementation. Specifically, you perform the following actions:

  1. Set up the environment.

  2. Create a MongoDB Vector Search index.

  3. Store custom data in MongoDB.

  4. Implement RAG by using MongoDB Vector Search to answer questions on your data.

Work with a runnable version of this tutorial as a Python notebook.

Haystack is a framework for building custom applications with LLMs, embedding models, and vector search. By integrating MongoDB Vector Search with Haystack, you can use MongoDB as a vector database and use MongoDB Vector Search to implement RAG by retrieving semantically similar documents from your data. To learn more about RAG, see Retrieval-Augmented Generation (RAG) with MongoDB.

To complete this tutorial, you must have the following:

  • One of the following MongoDB cluster types: an Atlas cluster or a local Atlas deployment.

  • An OpenAI API Key. You must have an OpenAI account with credits available for API requests. To learn more about registering an OpenAI account, see the OpenAI API website.

  • A Voyage AI API Key. To create an account and API Key, see the Voyage AI website.

  • An environment to run interactive Python notebooks, such as Colab.

Set up the environment for this tutorial. Create an interactive Python notebook by saving a file with the .ipynb extension. This notebook allows you to run Python code snippets individually, and you'll use it to run the code in this tutorial.

To set up your notebook environment:

  1. Run the following command:

    pip install --quiet --upgrade mongodb-atlas-haystack voyage-embedders-haystack pymongo
  2. Run the following code to import the required packages:

    import os
    from haystack import Pipeline, Document
    from haystack.document_stores.types import DuplicatePolicy
    from haystack.components.writers import DocumentWriter
    from haystack.components.generators import OpenAIGenerator
    from haystack.components.builders.prompt_builder import PromptBuilder
    from haystack_integrations.components.embedders.voyage_embedders import VoyageDocumentEmbedder, VoyageTextEmbedder
    from haystack_integrations.document_stores.mongodb_atlas import MongoDBAtlasDocumentStore
    from haystack_integrations.components.retrievers.mongodb_atlas import MongoDBAtlasEmbeddingRetriever
    from pymongo import MongoClient
    from pymongo.operations import SearchIndexModel

Run the following code, replacing the placeholders with the following values:

  • Your Voyage AI API Key.

  • Your OpenAI API Key.

  • Your MongoDB cluster's connection string.

os.environ["VOYAGE_API_KEY"] = "<voyage-api-key>"
os.environ["OPENAI_API_KEY"] = "<openai-api-key>"
os.environ["MONGO_CONNECTION_STRING"]= "<connection-string>"

Note

Replace <connection-string> with the connection string for your Atlas cluster or local Atlas deployment.

For an Atlas cluster, your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

To learn more, see Connect to a Cluster via Drivers.

For a local Atlas deployment, your connection string should use the following format:

mongodb://localhost:<port-number>/?directConnection=true

To learn more, see Connection Strings.
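Before exporting the connection string, it can help to catch formatting mistakes early. The following sketch is not part of the tutorial: the two regular expressions are illustrative rough checks that mirror the two formats shown above, not an exhaustive MongoDB URI validator.

```python
import re

# Illustrative patterns mirroring the two connection string formats above.
# These are rough sanity checks only, not a full MongoDB URI validator.
ATLAS_RE = re.compile(r"^mongodb\+srv://[^:]+:[^@]+@[^.]+\.[^.]+\.mongodb\.net")
LOCAL_RE = re.compile(r"^mongodb://localhost:\d+/\?directConnection=true$")

def looks_like_valid_uri(uri: str) -> bool:
    return bool(ATLAS_RE.match(uri) or LOCAL_RE.match(uri))

print(looks_like_valid_uri("mongodb://localhost:27017/?directConnection=true"))  # True
```

If the check fails, compare your string against the formats above before setting `MONGO_CONNECTION_STRING`.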

In this section, you create the haystack_db database and test collection to store your custom data. Then, to enable vector search queries on your data, you create a MongoDB Vector Search index.

Run the following code to connect to your cluster by instantiating a MongoClient:

client = MongoClient(os.environ.get("MONGO_CONNECTION_STRING"))

Run the following code to create your haystack_db database and test collection.

# Create your database and collection
db_name = "haystack_db"
collection_name = "test"
database = client[db_name]
database.create_collection(collection_name)
# Define collection
collection = client[db_name][collection_name]

Run the following code to create an index of the vectorSearch type. The embedding field contains the embeddings that you'll create using Voyage AI's voyage-3-large embedding model. The index definition specifies 1024 vector dimensions and measures similarity using cosine.

# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1024,
                "similarity": "cosine"
            }
        ]
    },
    name="vector_index",
    type="vectorSearch"
)
collection.create_search_index(model=search_index_model)

The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.
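Rather than waiting a fixed minute, you can poll the index status. This optional sketch assumes PyMongo 4.5+ exposes `Collection.list_search_indexes` and that Atlas reports `queryable: true` in each index document once the index can serve queries:

```python
import time

# Readiness check over the index documents that Atlas returns.
# Atlas sets `queryable: true` once the index can serve queries.
def index_is_queryable(index_docs, name="vector_index"):
    return any(d.get("name") == name and d.get("queryable") for d in index_docs)

# Poll until the index is ready (uncomment when connected to your cluster;
# `list_search_indexes` requires PyMongo 4.5+):
# while not index_is_queryable(collection.list_search_indexes()):
#     time.sleep(5)
```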

In this section, you instantiate MongoDB as a vector database, also called a document store. Then, you create vector embeddings from custom data and store these documents in a collection in MongoDB. Paste and run the following code snippets in your notebook.


Run the following code to instantiate Atlas as a document store. This code establishes a connection to your Atlas cluster and specifies the following:

  • haystack_db and test as the Atlas database and collection used to store the documents.

  • vector_index as the index used to run semantic search queries.

document_store = MongoDBAtlasDocumentStore(
    database_name="haystack_db",
    collection_name="test",
    vector_search_index="vector_index",
    full_text_search_index="search_index"  # Declared but not used in this example
)

This code defines a few sample documents and runs a pipeline with the following components:

  • An embedder from Voyage AI to convert your documents into vector embeddings.

  • A document writer to populate your document store with the sample documents and their embeddings.

# Create some example documents
documents = [
    Document(content="My name is Jean and I live in Paris."),
    Document(content="My name is Mark and I live in Berlin."),
    Document(content="My name is Giorgio and I live in Rome."),
]

# Initializing a document embedder to convert text content into vectorized form.
doc_embedder = VoyageDocumentEmbedder()

# Setting up a document writer to handle the insertion of documents into the MongoDB collection.
doc_writer = DocumentWriter(document_store=document_store, policy=DuplicatePolicy.SKIP)

# Creating a pipeline for indexing documents. The pipeline includes embedding and writing documents.
indexing_pipe = Pipeline()
indexing_pipe.add_component(instance=doc_embedder, name="doc_embedder")
indexing_pipe.add_component(instance=doc_writer, name="doc_writer")

# Connecting the components of the pipeline for document flow.
indexing_pipe.connect("doc_embedder.documents", "doc_writer.documents")

# Running the pipeline with the list of documents to index them in MongoDB.
indexing_pipe.run({"doc_embedder": {"documents": documents}})
The output is similar to the following:

Calculating embeddings: 100%|██████████| 1/1 [00:00<00:00, 4.42it/s]
{'doc_embedder': {'meta': {'total_tokens': 32}},
 'doc_writer': {'documents_written': 3}}
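After the pipeline runs, you can optionally sanity-check what was written. The helper below is a hypothetical addition, not part of the tutorial; the commented lines reuse the `collection` object from the earlier steps:

```python
# Each stored document's embedding length must equal the index's
# numDimensions (1024 for this tutorial's Voyage AI model).
def embedding_matches_index(doc, num_dimensions=1024):
    emb = doc.get("embedding")
    return isinstance(emb, list) and len(emb) == num_dimensions

# Spot-check against your live collection (uncomment when connected):
# doc = collection.find_one()
# assert embedding_matches_index(doc)

print(embedding_matches_index({"embedding": [0.0] * 1024}))  # True
```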

Tip

After running the sample code, if you're using Atlas, you can verify your vector embeddings by navigating to the haystack_db.test namespace in the Atlas UI.

This section demonstrates how to implement RAG in your application with MongoDB Vector Search and Haystack.

The following code defines and runs a pipeline with the following components:

  • An embedder to convert your query into a vector embedding.

  • A retriever to fetch documents from your document store that are semantically similar to the query embedding.

  • A prompt builder to construct a prompt from your query and the retrieved documents.

  • An LLM generator from OpenAI to produce a context-aware response.

In this example, you prompt the LLM with the sample query Where does Mark live?. The LLM generates an accurate, context-aware response from the custom data you stored in Atlas.

# Template for generating prompts from the retrieved context documents.
prompt_template = """
You are an assistant allowed to use the following context documents.\nDocuments:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
\nQuery: {{query}}
\nAnswer:
"""

# Setting up a retrieval-augmented generation (RAG) pipeline for generating responses.
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", VoyageTextEmbedder())

# Adding a component for retrieving related documents from MongoDB based on the query embedding.
rag_pipeline.add_component(instance=MongoDBAtlasEmbeddingRetriever(document_store=document_store, top_k=15), name="retriever")

# Building prompts based on retrieved documents to be used for generating responses.
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template, required_variables=["query", "documents"]))

# Adding a language model generator to produce the final text output.
rag_pipeline.add_component("llm", OpenAIGenerator())

# Connecting the components of the RAG pipeline to ensure proper data flow.
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "llm")

# Run the pipeline
query = "Where does Mark live?"
result = rag_pipeline.run(
    {
        "text_embedder": {"text": query},
        "prompt_builder": {"query": query},
    }
)
print(result['llm']['replies'][0])
The output is similar to the following:

Mark lives in Berlin.
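Under the hood, the retriever issues a MongoDB `$vectorSearch` aggregation stage against your `vector_index`. The following sketch builds an equivalent stage by hand; the `numCandidates` heuristic (10× `top_k`) is an illustrative choice, and `query_vector` would come from the same Voyage AI embedder used at indexing time:

```python
# Build the $vectorSearch stage the retriever runs behind the scenes.
def build_vector_search_stage(query_vector, top_k=15):
    return {
        "$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": top_k * 10,  # illustrative heuristic
            "limit": top_k,
        }
    }

stage = build_vector_search_stage([0.0] * 1024)
# results = collection.aggregate([stage])  # run against your live collection
print(stage["$vectorSearch"]["limit"])  # 15
```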

Alternatively, you can complete this tutorial by using OpenAI embedding models instead of Voyage AI. To complete this version of the tutorial, you must have the following:

  • One of the following MongoDB cluster types: an Atlas cluster or a local Atlas deployment.

  • An OpenAI API Key. You must have an OpenAI account with credits available for API requests. To learn more about registering an OpenAI account, see the OpenAI API website.

  • An environment to run interactive Python notebooks, such as Colab.

Set up the environment for this tutorial. Create an interactive Python notebook by saving a file with the .ipynb extension. This notebook allows you to run Python code snippets individually, and you'll use it to run the code in this tutorial.

To set up your notebook environment:

  1. Run the following command:

    pip install --quiet --upgrade mongodb-atlas-haystack pymongo
  2. Run the following code to import the required packages:

    import os
    from haystack import Pipeline, Document
    from haystack.document_stores.types import DuplicatePolicy
    from haystack.components.writers import DocumentWriter
    from haystack.components.generators import OpenAIGenerator
    from haystack.components.builders.prompt_builder import PromptBuilder
    from haystack.components.embedders import OpenAITextEmbedder, OpenAIDocumentEmbedder
    from haystack_integrations.document_stores.mongodb_atlas import MongoDBAtlasDocumentStore
    from haystack_integrations.components.retrievers.mongodb_atlas import MongoDBAtlasEmbeddingRetriever
    from pymongo import MongoClient
    from pymongo.operations import SearchIndexModel

Run the following code, replacing the placeholders with the following values:

  • Your OpenAI API Key.

  • Your MongoDB cluster's connection string.

os.environ["OPENAI_API_KEY"] = "<api-key>"
os.environ["MONGO_CONNECTION_STRING"]= "<connection-string>"

Note

Replace <connection-string> with the connection string for your Atlas cluster or local Atlas deployment.

For an Atlas cluster, your connection string should use the following format:

mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

To learn more, see Connect to a Cluster via Drivers.

For a local Atlas deployment, your connection string should use the following format:

mongodb://localhost:<port-number>/?directConnection=true

To learn more, see Connection Strings.

In this section, you create the haystack_db database and test collection to store your custom data. Then, to enable vector search queries on your data, you create a MongoDB Vector Search index.

Run the following code to connect to your cluster by instantiating a MongoClient:

client = MongoClient(os.environ.get("MONGO_CONNECTION_STRING"))

Run the following code to create your haystack_db database and test collection.

# Create your database and collection
db_name = "haystack_db"
collection_name = "test"
database = client[db_name]
database.create_collection(collection_name)
# Define collection
collection = client[db_name][collection_name]

Run the following code to create an index of the vectorSearch type. The embedding field contains the embeddings that you'll create using OpenAI's text-embedding-ada-002 embedding model. The index definition specifies 1536 vector dimensions and measures similarity using cosine.

# Create your index model, then create the search index
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "cosine"
            }
        ]
    },
    name="vector_index",
    type="vectorSearch"
)
collection.create_search_index(model=search_index_model)

The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.

In this section, you instantiate MongoDB as a vector database, also called a document store. Then, you create vector embeddings from custom data and store these documents in a collection in MongoDB. Paste and run the following code snippets in your notebook.


Run the following code to instantiate Atlas as a document store. This code establishes a connection to your Atlas cluster and specifies the following:

  • haystack_db and test as the Atlas database and collection used to store the documents.

  • vector_index as the index used to run semantic search queries.

document_store = MongoDBAtlasDocumentStore(
    database_name="haystack_db",
    collection_name="test",
    vector_search_index="vector_index",
    full_text_search_index="search_index"  # Declared but not used in this example
)

This code defines a few sample documents and runs a pipeline with the following components:

  • An embedder from OpenAI to convert your documents into vector embeddings.

  • A document writer to populate your document store with the sample documents and their embeddings.

# Create some example documents
documents = [
    Document(content="My name is Jean and I live in Paris."),
    Document(content="My name is Mark and I live in Berlin."),
    Document(content="My name is Giorgio and I live in Rome."),
]

# Initializing a document embedder to convert text content into vectorized form.
doc_embedder = OpenAIDocumentEmbedder()

# Setting up a document writer to handle the insertion of documents into the MongoDB collection.
doc_writer = DocumentWriter(document_store=document_store, policy=DuplicatePolicy.SKIP)

# Creating a pipeline for indexing documents. The pipeline includes embedding and writing documents.
indexing_pipe = Pipeline()
indexing_pipe.add_component(instance=doc_embedder, name="doc_embedder")
indexing_pipe.add_component(instance=doc_writer, name="doc_writer")

# Connecting the components of the pipeline for document flow.
indexing_pipe.connect("doc_embedder.documents", "doc_writer.documents")

# Running the pipeline with the list of documents to index them in MongoDB.
indexing_pipe.run({"doc_embedder": {"documents": documents}})
The output is similar to the following:

Calculating embeddings: 100%|██████████| 1/1 [00:00<00:00, 4.16it/s]
{'doc_embedder': {'meta': {'model': 'text-embedding-ada-002',
   'usage': {'prompt_tokens': 32, 'total_tokens': 32}}},
 'doc_writer': {'documents_written': 3}}

Tip

After running the sample code, if you're using Atlas, you can verify your vector embeddings by navigating to the haystack_db.test namespace in the Atlas UI.

This section demonstrates how to implement RAG in your application with MongoDB Vector Search and Haystack.

The following code defines and runs a pipeline with the following components:

  • An embedder from OpenAI to convert your query into a vector embedding.

  • A retriever to fetch documents from your document store that are semantically similar to the query embedding.

  • A prompt builder to construct a prompt from your query and the retrieved documents.

  • An LLM generator from OpenAI to produce a context-aware response.

In this example, you prompt the LLM with the sample query Where does Mark live?. The LLM generates an accurate, context-aware response from the custom data you stored in Atlas.

# Template for generating prompts from the retrieved context documents.
prompt_template = """
You are an assistant allowed to use the following context documents.\nDocuments:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
\nQuery: {{query}}
\nAnswer:
"""

# Setting up a retrieval-augmented generation (RAG) pipeline for generating responses.
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", OpenAITextEmbedder())

# Adding a component for retrieving related documents from MongoDB based on the query embedding.
rag_pipeline.add_component(instance=MongoDBAtlasEmbeddingRetriever(document_store=document_store, top_k=15), name="retriever")

# Building prompts based on retrieved documents to be used for generating responses.
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template, required_variables=["query", "documents"]))

# Adding a language model generator to produce the final text output.
rag_pipeline.add_component("llm", OpenAIGenerator())

# Connecting the components of the RAG pipeline to ensure proper data flow.
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "llm")

# Run the pipeline
query = "Where does Mark live?"
result = rag_pipeline.run(
    {
        "text_embedder": {"text": query},
        "prompt_builder": {"query": query},
    }
)
print(result['llm']['replies'][0])
The output is similar to the following:

Mark lives in Berlin.
