Concept | Prompt Studios and Prompt recipe#

Large language models (LLMs) are a type of generative AI that specializes in generating coherent responses to input text, known as prompts. With an extensive understanding of language patterns and nuances, LLMs can be used for a wide range of natural language processing tasks, such as classification, summarization, translation, question answering, entity extraction, spell check, and more.

When interacting with a generative model, you can modify and craft your prompt to achieve a desired outcome. This process is known as prompt engineering. Prompts guide the model’s behavior and output, but they don’t modify the underlying model. This makes prompt engineering a quicker, easier way to tailor a model for your business needs without fine-tuning, which can be time-consuming and costly.

Prompt engineering in Dataiku#

In Dataiku, Prompt Studios and the Prompt recipe work together to test and iterate on LLM prompts, then operationalize them in your data pipeline.

Prompt Studios and the Prompt recipe allow you to securely connect to LLMs either through a generative AI service or a private model, including open-source models. You can first experiment with different prompt designs in Prompt Studios, before deploying a desired prompt in your data pipeline with the Prompt recipe.

The prompt engineering workflow in Dataiku.

Important

To use Prompt Studios and the Prompt recipe, your administrator must configure at least one connection to a supported LLM provider and make instruct/prompt models available.

Prompt Studios#

Prompt Studios is the playground where you can test and iterate on different prompts and models to engineer the optimal prompt for a specific use case. You can find Prompt Studios in the top navigation bar under the Visual Analyses menu.

Within a Prompt Studio, you can create multiple prompts and iterations, and even compare them. For each new prompt, you can choose from three different modes:

  • Managed mode: You specify the task using plain language, optionally include other details, and Dataiku generates the final prompt for you. Within managed mode, you can start with a blank template or choose from pre-filled templates for use cases such as analyzing product reviews or classifying support requests.

  • Advanced mode: You create the prompt and examples using placeholders, giving you more control over the final prompt sent to the LLM.

  • Prompt without inputs: You create a one-off prompt that queries the LLM, designed for quick experimentation. These prompts are not reusable, meaning they can’t be converted into a Prompt recipe and their results can’t be compared within the Prompt Studio.

The new prompt info window where you choose the mode to write your prompt in.

The Prompt design steps will differ depending on the mode you’re using. In general, to design a prompt, you write the task, then optionally define the inputs, examples, and test cases to include. Dataiku integrates all the information into a structured prompt to send to the LLM.

Tip

You can view this structured prompt by clicking Show raw prompt from the Prompt design page or within the Prompt recipe settings.
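For illustration, the sketch below assembles a task, one example, and an input value into a single structured prompt. It shows the general shape such prompts take; it is a hypothetical layout, not Dataiku’s exact raw-prompt format.

```python
# A minimal sketch (not Dataiku's exact raw-prompt format) of how a task,
# a few-shot example, and an input combine into one structured prompt.
task = "Classify the sentiment of the product review as Positive, Negative, or Neutral."
example = 'Input: "The blender broke after two uses."\nOutput: Negative'
test_case = 'Input: "Arrived quickly and works great."\nOutput:'

structured_prompt = "\n\n".join([task, example, test_case])
print(structured_prompt)
```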

You can quickly iterate on your prompt design, including updating the task or adding new inputs, examples, and test cases, before deploying your prompt.

Task#

The task includes the main instructions for the LLM.

The task window in prompt design.

Inputs#

Inputs are variables you can insert into the prompt. You can map the inputs to columns from a dataset and use values from the dataset as test cases to check the prompt’s output. You can also choose to manually write test cases.

Adding inputs mapped from columns in a dataset.
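Conceptually, input mapping works like filling named placeholders with column values, row by row. Here is a minimal sketch of that idea, using a hypothetical reviews dataset and placeholder syntax (in the Studio, Dataiku handles this mapping for you):

```python
import pandas as pd

# Hypothetical dataset; in Dataiku the rows would come from the mapped dataset.
df = pd.DataFrame({"review_text": [
    "Arrived quickly and works great.",
    "The blender broke after two uses.",
]})

template = "Classify the sentiment of this review: {review_text}"

# Each row becomes one test case: its column values fill the prompt's inputs.
prompts = [template.format(review_text=row) for row in df["review_text"]]
for p in prompts:
    print(p)
```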

Examples#

Examples allow you to add sample inputs and the desired output to help the LLM learn what the output should look like. In managed mode, you can add one or more examples within the Prompt design page. In advanced mode, you can write examples within the task.

A structured prompt with one provided example of input and output.

Test cases#

Prompt Studios will run a small number of test cases through the LLM to help you gauge the performance of your prompt. You can manually write test cases or use a few rows from a dataset as test cases.

The test cases help you see the actual output the model will return given your dataset.

Each time you run inference on your prompt and test cases, the model will also return an estimated cost for 1,000 records. LLM providers are not free, and each API call has an associated cost that typically depends on the length of the prompt and of the response.

Managing costs is an important part of prompt engineering, which is why Dataiku gives you full transparency on how much this setup would cost to run on 1,000 records.

A structured prompt and corresponding inputs.
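As an illustration of how such an estimate is built, the sketch below multiplies per-token prices by average token counts over the test cases, then scales to 1,000 records. All prices and token counts here are made-up assumptions; actual values depend on your provider, model, and prompt.

```python
# Hypothetical per-1,000-token prices and average token counts per record.
price_per_1k_input_tokens = 0.0005   # assumed, in USD
price_per_1k_output_tokens = 0.0015  # assumed, in USD
avg_input_tokens = 250               # task + inputs + examples per record
avg_output_tokens = 40               # typical model response per record

cost_per_record = (
    avg_input_tokens / 1000 * price_per_1k_input_tokens
    + avg_output_tokens / 1000 * price_per_1k_output_tokens
)
print(f"Estimated cost for 1,000 records: ${cost_per_record * 1000:.2f}")
```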

Settings#

You can change hyperparameter settings to control the behavior of the LLM (a short sampling sketch after this list shows how these settings interact):

  • Temperature: The temperature controls the randomness, or creativity, of the model. The higher the temperature, the more random or creative the responses will be. You can set the value between 0 and 2.

  • Top P: P refers to cumulative probability. This setting tells the LLM to sample only from the smallest set of top tokens whose probabilities add up to P, causing the response to focus on the most probable options.

  • Top K: This limits the response to the top K most likely tokens. For example, if you set K = 1, the model would output the most likely token at each step of generation.

  • Max output tokens: LLMs process text by converting words and subwords into tokens. This setting caps the number of tokens the model can return, which roughly corresponds to a maximum number of words in the response.
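To make these sampling settings concrete, here is a self-contained sketch of how temperature, Top K, and Top P interact when choosing the next token from a toy probability distribution. This is a generic illustration of the sampling math, not Dataiku or provider internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy next-token scores (logits) for a 5-token vocabulary.
vocab = ["great", "good", "fine", "bad", "awful"]
logits = np.array([2.0, 1.5, 0.5, -1.0, -2.0])

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    # Temperature rescales the logits: lower values sharpen the distribution.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]  # tokens from most to least likely
    keep = np.ones(len(probs), dtype=bool)

    if top_k is not None:            # keep only the K most likely tokens
        keep[order[top_k:]] = False
    if top_p is not None:            # keep the smallest set whose mass >= P
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, top_p) + 1
        keep[order[cutoff:]] = False

    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()             # renormalize over the kept tokens
    return rng.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits, temperature=0.7, top_k=3, top_p=0.9)])
```

Lowering the temperature or tightening Top K and Top P all shrink the pool of candidate tokens, which is why low values make responses more deterministic.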

You can also set validation and formatting rules, such as validating the expected output format as a JSON object or forbidding certain words from the output. Note that these validation and formatting rules won’t change the LLM response, but you will receive a warning if the output fails validation.

Prompt settings for validation and formatting.
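The kind of check these rules perform can be pictured with a minimal sketch: validate the response as JSON and scan for forbidden words, warning without altering the output. The response and rule values below are assumptions for illustration.

```python
import json
import warnings

response = '{"sentiment": "Negative", "reason": "product broke"}'
forbidden_words = ["guarantee", "refund"]  # assumed rules for illustration

# Validate the expected output format as a JSON object.
try:
    json.loads(response)
except json.JSONDecodeError:
    warnings.warn("Response is not valid JSON.")

# Forbid certain words: warn, but leave the LLM response untouched.
for word in forbidden_words:
    if word in response.lower():
        warnings.warn(f"Response contains forbidden word: {word!r}")
```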

Comparing prompts#

Prompt Studios tracks your prompts in the left sidebar. You can navigate between prompts, view each prompt’s history, restore previous versions, and duplicate prompts.

When using managed or advanced modes, you can also compare the output of multiple prompts by selecting them in the sidebar and clicking Compare. Prompt Studios will build a table comparing the outputs and costs of the selected prompts.

A comparison of the inputs and outputs of two similar prompts.

Tip

Prompt Studios also tracks each run of a prompt as you iterate. To view the version history of a prompt and revert to any version, click the clock icon in the top right of the Prompt design screen.

Prompt recipe#

The Prompt recipe puts your prompt into action on a dataset. It generates an output dataset and appears in the Flow.

You can create a new Prompt recipe directly from Prompt Studios by saving a prompt template that is mapped to a dataset as a recipe. This allows you to experiment with your prompts before operationalizing them with the recipe.

You can also create a new Prompt recipe directly from the Flow or from the Actions panel of a dataset. With this method, you can write a Structured prompt or a Text prompt directly in the recipe, with the same settings as a prompt template in a Prompt Studio.

The Prompt recipe settings screen.
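The Prompt recipe itself is a visual component, but the same LLM connections can also be queried from code. Below is a minimal sketch using Dataiku’s Python API for LLM completions; it assumes the code runs inside Dataiku, and the LLM_ID value is hypothetical (your administrator can provide the real connection identifier).

```python
import dataiku

# Hypothetical id of a configured LLM connection in the LLM Mesh.
LLM_ID = "openai:my-connection:gpt-4o-mini"

client = dataiku.api_client()
project = client.get_default_project()
llm = project.get_llm(LLM_ID)

# Build and execute a simple completion against the connected model.
completion = llm.new_completion()
completion.with_message("Classify the sentiment of: 'Arrived quickly and works great.'")
resp = completion.execute()

if resp.success:
    print(resp.text)
```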

Note

You can create an API endpoint for a prompt from the right panel of the Prompt recipe. Learn more about creating APIs in the Dataiku Knowledge Base.

What’s next?#

Continue learning about prompt engineering with LLMs by working through the Tutorial | Prompt engineering with LLMs article.