Contextualize your LLM App

Retrieval Augmented Generation (RAG) with LangChain connects your company data to the power of LLMs.

With LangChain’s built-in ingestion and retrieval methods, developers can augment the LLM’s knowledge with company or user data.

150+ Document Loaders

60+ Vector Stores

50+ Embedding Models

40+ Retrievers

A complete set of RAG building blocks

Build best-in-class RAG systems with LangChain's comprehensive integrations, state-of-the-art techniques, and infinite composability.

See Integrations

The data connections and infrastructure you need for your retrieval use case

LangChain offers an extensive library of off-the-shelf tools
and an intuitive framework for customizing your own.

Document loaders for any type of data.
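As a minimal sketch, here is one off-the-shelf loader in action (WebBaseLoader from the community package; the URL is a placeholder):

from langchain_community.document_loaders import WebBaseLoader

# Load a web page into Document objects (page_content + metadata),
# ready for splitting, embedding, and indexing.
loader = WebBaseLoader("https://example.com/handbook")  # placeholder URL
docs = loader.load()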

Retrieval algorithms that provide greater precision and more relevant results

Self Query Retriever
This retriever inspects the natural language query and writes a structured query to run on the underlying VectorStore.
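A minimal sketch, assuming an existing llm and vectorstore; the metadata fields shown are illustrative placeholders:

from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Describe the metadata fields the LLM may filter on.
metadata_field_info = [
    AttributeInfo(name="year", description="Year the document was published", type="integer"),
    AttributeInfo(name="author", description="Author of the document", type="string"),
]
retriever = SelfQueryRetriever.from_llm(
    llm,                           # any chat model (assumed to exist)
    vectorstore,                   # any supported vector store (assumed to exist)
    "Internal company documents",  # description of the document contents
    metadata_field_info,
)
# The LLM turns this into a semantic query plus a structured metadata filter.
docs = retriever.invoke("documents by Alice written after 2022")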
Contextual Compression
Compress the retrieved document using the context of the query, so that only the relevant information in the source is returned.
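A minimal sketch, assuming an existing llm and base_retriever to wrap:

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# The compressor extracts only the query-relevant passages from each document.
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # any existing retriever (assumed to exist)
)
docs = compression_retriever.invoke("What is our parental leave policy?")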
Multi Vector Retriever
This retriever lets you query across multiple stored vectors per document, including ones on smaller chunks, summaries, and hypothetical questions.
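A minimal sketch, assuming an existing vectorstore plus docs and LLM-generated summaries of them:

import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_core.documents import Document

store = InMemoryStore()
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key="doc_id")

doc_ids = [str(uuid.uuid4()) for _ in docs]
summary_docs = [
    Document(page_content=summary, metadata={"doc_id": doc_ids[i]})
    for i, summary in enumerate(summaries)  # e.g., LLM-written summaries (assumed to exist)
]
retriever.vectorstore.add_documents(summary_docs)  # search runs over the summaries...
retriever.docstore.mset(list(zip(doc_ids, docs)))  # ...but the full documents are returned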
Time-Weighted Vector Store
Combine semantic similarity with a time decay to factor in recency in your retrieval.
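A minimal sketch, assuming an existing vectorstore that supports similarity search with scores (FAISS, for example):

from datetime import datetime, timedelta
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain_core.documents import Document

retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,  # assumed to exist
    decay_rate=0.005,         # low decay keeps older documents relevant longer
    k=4,
)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents(
    [Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]
)
retriever.add_documents([Document(page_content="hello foo")])

# Relevance is roughly: semantic_similarity + (1.0 - decay_rate) ** hours_passed
docs = retriever.invoke("hello world")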
Parent Document Retriever
Embed small chunks, which are better for similarity search, but retrieve larger chunks, which help with generation.
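A minimal sketch, assuming an existing vectorstore and a list of docs to index:

from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_text_splitters import RecursiveCharacterTextSplitter

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,  # assumed to exist
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400),    # small chunks, embedded for search
    parent_splitter=RecursiveCharacterTextSplitter(chunk_size=2000),  # large chunks, returned for generation
)
retriever.add_documents(docs)
results = retriever.invoke("a question about the docs")  # returns the larger parent chunks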
Ingestion done right

The LangChain Indexing API syncs your data from any source into a vector store, helping you save money and time:

Minimize writing duplicated content
Avoid re-writing unchanged content
Never recompute embeddings over unchanged content
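A minimal sketch, assuming an existing vectorstore (one that supports deletion by id) and a list of docs; the record manager tracks what has already been written, so unchanged content is skipped rather than re-embedded:

from langchain.indexes import SQLRecordManager, index

record_manager = SQLRecordManager(
    "my_docs", db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

result = index(
    docs,                    # the documents to sync (assumed to exist)
    record_manager,
    vectorstore,             # assumed to exist
    cleanup="incremental",   # de-duplicate and clean up mutated source documents
    source_id_key="source",  # metadata key identifying each document's source
)
# result reports num_added / num_updated / num_skipped / num_deleted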

Ready to start shipping reliable GenAI apps faster?

Get started with LangChain, LangSmith, and LangGraph to enhance your LLM app development, from prototype to production.