Concept | Embed recipe and Retrieval Augmented Generation (RAG)#
While Large Language Models (LLMs) are powerful tools, they often lack specific internal knowledge from organizations. The Embed recipe in Dataiku uses the Retrieval Augmented Generation (RAG) approach to help you fetch relevant pieces of text from a knowledge bank and enrich the user prompts with them. This improves the precision and relevance of answers returned by the LLMs.
A benefit of the RAG approach is that you gain precision without fine-tuning a model, which is time-consuming and expensive. You simply add your internal knowledge to a knowledge bank and feed it to the LLM.
RAG pipeline for question-answering using an augmented LLM#
Let’s see how the RAG approach works.
As you can see in the diagram above, using an augmented LLM includes several steps:
Gather a corpus of documents that will serve as the bespoke information you’ll augment an LLM’s base knowledge with.
For instance, these might be internal policy or financial documents, technical documentation, or research papers about a certain topic.
Note
If your textual data is in PDF, HTML, or another document format, you can first apply a text extraction recipe to transform the files into a tabular dataset with each document’s extracted text captured in its own row.
The Embed recipe breaks your textual data into smaller chunks, vectorizes them with an embedding model (i.e. encodes the semantic information into a numerical representation), and stores the resulting vectors in a vector store such as FAISS, Pinecone, or ChromaDB. The output of the Embed recipe is a knowledge bank that is optimized for high-dimensional semantic search.
Note
The numeric vectors are commonly known as text embeddings, hence the recipe’s name.
When a user asks a question, the retriever searches the vector store for the chunks most semantically similar to the question and augments the prompt with the relevant information it finds.
The augmented prompt is used to query the Large Language Model, which can then provide more precise answers along with their sources.
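To make these steps concrete, here is a minimal Python sketch of the same pipeline outside of Dataiku. It is purely illustrative, not the Embed recipe's internal implementation, and assumes the open source sentence-transformers and faiss packages; the model name and document snippets are placeholders.

```python
import faiss
from sentence_transformers import SentenceTransformer

# 1. A small corpus of internal documents (placeholder content).
chunks = [
    "Employees may book business-class train tickets for trips over 3 hours.",
    "Expense reports must be submitted within 30 days of travel.",
    "Remote work requests are approved by the direct manager.",
]

# 2. Embed each chunk into a numerical vector (the "text embedding").
model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
embeddings = model.encode(chunks).astype("float32")

# 3. Store the vectors in a vector store (here, an in-memory FAISS index).
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

# 4. Retrieve the chunks closest to the user's question.
question = "How late can I submit my expense report?"
query_vec = model.encode([question]).astype("float32")
_, ids = index.search(query_vec, 2)
context = "\n".join(chunks[i] for i in ids[0])

# 5. Augment the prompt before sending it to the LLM.
augmented_prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(augmented_prompt)
```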
Embed recipe settings#
As the name suggests, the Embed recipe manages the text embedding stage (i.e., text vectorization) of the RAG approach.
The recipe settings page is where you:
Configure the dataset columns to be used, including:
The embedding column, which is the column from the input dataset that contains the text to embed (i.e. to convert from textual data into numerical vectors).
The metadata columns, which provide additional information to enrich the retrieval-augmented LLM or can be used at the retrieval stage in place of the embedding column.
Configure how the input data is split into chunks (the splitting method and chunk size); see the chunking sketch after this list.
Define a maximum number of rows to process.
Set up your knowledge bank by selecting the embedding LLM and the vector store to use for text embedding.
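As an illustration of what the splitting settings control, below is a minimal chunking sketch in plain Python. It assumes a simple fixed-size splitter with character overlap; the Embed recipe exposes its own splitting methods and parameters, which may differ.

```python
def split_into_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping fixed-size chunks.

    The overlap keeps sentences that straddle a chunk boundary retrievable
    from both neighboring chunks.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


document = "Our travel policy allows business-class train tickets for long trips. " * 30
print(len(split_into_chunks(document)))  # number of chunks produced
```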
Knowledge bank settings#
The knowledge bank is the output of the Embed recipe. This is where your input textual data has been converted into numerical vectors to augment the LLM you want to use.
On the Flow, it is represented as a pink square object.
The table below describes the different tabs used to configure the knowledge bank.
| Tab | Description |
| --- | --- |
| Use | This tab is where you configure the LLMs that will be augmented with the content of the knowledge bank: which LLM to augment, how many of the retrieved documents to use, and whether to print the sources. In the example above, you augment the GPT-4 LLM and ask that, among the 20 documents closest to the query, the LLM use the top five documents to build an answer in plain text. Because the Print sources option is enabled, when testing in the Prompt Studio, the answer will indicate the five sources used to generate it. |
| Core settings | This tab lets you edit the embedding method and indicate the vector store used to store the vector representation of the textual data. By default, the Embed recipe uses the FAISS vector store. |
| Embedding settings | This tab is a snapshot of the parent recipe settings at the last run. |
| Flow settings | This tab lets you define whether the knowledge bank is automatically recomputed when building a downstream dataset in the Flow. |
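To make the Use tab settings more concrete, the hedged sketch below mimics their effect in plain Python: keep the top documents among the retrieved candidates, build the prompt, and append the sources to the answer. It is a generic illustration, not Dataiku's implementation; the candidates list and the ask_llm callable are hypothetical stand-ins for the vector store search and the augmented LLM.

```python
from typing import Callable


def answer_with_sources(
    question: str,
    candidates: list[tuple[str, str]],  # (chunk_text, source_name), best match first
    ask_llm: Callable[[str], str],      # any function that sends a prompt to an LLM
    k_used: int = 5,
) -> str:
    """Illustrative only: mirrors the 'documents used' and 'Print sources'
    settings of the knowledge bank's Use tab."""
    top_docs = candidates[:k_used]
    context = "\n".join(text for text, _ in top_docs)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    answer = ask_llm(prompt)
    sources = "\n".join(f"- {src}" for _, src in top_docs)
    return f"{answer}\n\nSources:\n{sources}"
```

Here, candidates would typically be the 20 nearest chunks returned by the vector store, of which only the top five are passed to the LLM.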
Testing in the Prompt Studio#
Once you’ve augmented an LLM with the content of your knowledge bank, this LLM becomes available in the Prompt Studios and Prompt recipe under a section named Retrieval Augmented.
Note
Retrieval-augmented LLMs are only available within the scope of the project. They inherit properties from the underlying LLM connection (caching, filters, permissions, etc.).
Using the Prompt Studio, you can test your augmented model with various prompts and evaluate the responses it generates.
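If you prefer to test programmatically rather than through the Prompt Studio interface, a roughly equivalent call can be made through the LLM Mesh Python API. This is a hedged sketch: the LLM id is a placeholder for your retrieval-augmented LLM's identifier, and you should check the developer documentation for the exact methods available on your Dataiku version.

```python
import dataiku

client = dataiku.api_client()
project = client.get_default_project()

# Placeholder: replace with the id of your retrieval-augmented LLM,
# e.g. as listed by project.list_llms().
llm = project.get_llm("your-retrieval-augmented-llm-id")

completion = llm.new_completion()
completion.with_message("What does our travel policy say about train tickets?")
response = completion.execute()

if response.success:
    print(response.text)  # the generated answer
```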
Note that the LLM provides its sources at the bottom of the response. This is a key capability and a primary benefit of the RAG approach.
As you may know, generic LLMs can sometimes offer answers that sound plausible but are, in fact, hallucinations: statements that seem true but aren't backed up by real data. This is a major risk when accurate and credible information is not just a nice-to-have, but a must-have. That's why prompt engineering and thorough testing are crucial to minimize hallucinations. With the knowledge bank sources clearly displayed alongside each answer, the application delivers insights that your teams can verify and trust.
What’s next?#
Continue learning about the Embed recipe and the RAG approach by working through the Tutorial | Use the Retrieval Augmented Generation (RAG) approach for question-answering article.