Tutorial | Prompt engineering with LLMs#

Designing high-quality prompts is the quickest and most efficient way to leverage generative AI models and adapt them for specific business purposes. This technique, known as prompt engineering, allows you to modify the behavior of a large language model without costly and highly technical fine-tuning methods.

In Dataiku, Prompt Studios allow you to test and iterate on prompts until the model produces your desired response, and the Prompt recipe allows you to operationalize the prompt.

Get started#

Objectives#

In this tutorial, you will:

  • Use Prompt Studios to engineer prompts for a large language model to summarize and classify text.

  • Push your final prompt to the Flow using the Prompt recipe.

Prerequisites#

To complete this tutorial, you will need:

  • A Dataiku instance (version 12.3 and above). Dataiku Cloud is compatible.

  • A connection to at least one supported generative AI model. Your administrator must configure this connection beforehand in the Administration panel > Connections > New connection > LLM Mesh. Supported connections include providers such as OpenAI, Hugging Face, and Cohere.

  • No prior knowledge of working with large language models (LLMs), though it would be useful to read the article Concept | Prompt Studios and Prompt recipe before completing this tutorial.

Create the project#

This tutorial uses a dataset of articles from the Reuters news agency that is publicly available on Kaggle. We’ll work with a subset of 1,000 articles to reduce computation cost.

To create the project:

  1. From the Dataiku Design homepage, click + New Project > DSS tutorials > ML Practitioner > LLM - Prompt Engineering.

  2. From the project homepage, click Go to Flow (or g + f) and open the reuters_headlines dataset to explore the columns.

Screenshot of the dataset, which includes 1,000 news article headlines and descriptions.

Note

You can also download the starter project from this website and import it as a zip file.

Open a new Prompt Studio#

The dataset includes a headline, date, and short description of each article. Let’s say we want to know the main subjects of each article, and for the results to be returned in JSON format so we can easily use them in downstream recipes and models. We can use Prompt Studio to build a prompt that automates this process.

We’ll start by creating a new Prompt Studio to experiment with and iterate on the prompts.

  1. In the top navigation bar, select Visual Analyses > Prompt Studios.

  2. Click New Prompt Studio in the top right and give the new studio the name Financial headlines, then click OK.

  3. In the studio, click Add prompt, then in the New Prompt window, select Prompt template, which can be used later to create a Prompt recipe.

Screenshot of new prompt window, with Prompt template selected.

The next window requires selecting a large language model (LLM) to run the prompt. The selections available will depend on the connection set up by your administrator. For example, if you are connected to OpenAI, you can choose from multiple GPT models.

We’ll start experimenting with our prompt by writing examples and test cases directly before inputting the full dataset. This will help us save computation before running the prompt with more examples.

  1. Choose the LLM you want to use with your prompt. You can change this selection later and experiment with different models.

  2. Under Source for test cases, select Write queries directly.

  3. Create the prompt.

Design the prompt#

On the Prompt design page, you can add your prompt text, provide examples with the desired output, and run test cases using the LLM, before deploying it on your entire dataset.

Screenshot of the prompt design page in Prompt Studio.

Our prompt will instruct the model to determine the topic of each provided news article. To prevent it from creating too many topics, we’ll also give it a list of potential topics we’re interested in.

To write the first iteration of the prompt:

  1. In the Task window, copy and paste the following prompt, replacing the existing explainer text (use the Copy button at the top right of the block for easier copying):

    Determine whether each topic of the following list of topics is covered in the financial news article provided.
    
    List of topics: fed and central banks, company and product news, corporate debt and earnings, energy and oil, currencies, gold and metals, IPO, legal and regulation, M&A and investments, markets, politics, stock movement.
    
  2. On the right, create two Inputs: Headline and Text preview. As you add the inputs, they appear as headers under Test cases. We’ll add two test cases taken from the headline dataset to gauge how the model handles the prompt as written.

  3. Click Add a test case and copy and paste the following text into the corresponding boxes:

Headline:

Manufacturing, vaccine data power stocks higher; U.S. dollar dips

Text preview:

Stocks across the globe rose on Wednesday following data pointing to a recovery in manufacturing and on bets for a COVID-19 vaccine, while the risk-on mood pushed the U.S. dollar lower.

  4. Add another test case with the following text in the boxes:

Headline:

U.S. weekly jobless claims up slightly; leading indicator rises

Text preview:

The number of Americans filing for unemployment benefits rose just marginally last week, suggesting strong job growth in March that should underpin consumer spending.

  5. Select Run to pass the prompt and test cases to your selected model.

Screenshot of the prompt design, showing results from the first iteration.

Depending on the model you selected, you’ll see different results. In this case, using GPT-3.5 Turbo, we can see several problems with the responses: the model answered in complete sentences, and each response uses a slightly different format. Neither is very useful for further analysis.

You might notice other issues depending on your results. For example, the model might return topics that were not in your initial list.

We can fix all of these issues with a bit of prompt engineering.

Iterate on the prompt#

First, let’s specify the format for the model’s results. The most useful format for downstream recipes would be a JSON object where topics are listed as the keys along with 0 or 1 values to indicate whether they are present. Later, we could easily parse the JSON using a Prepare recipe and use the results in other recipes or machine learning models.
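Outside of a Prepare recipe, that downstream parsing step can be sketched in a few lines of plain Python. This is a minimal sketch, assuming the model returns exactly the JSON object described above; the function and column names are illustrative, not part of Dataiku:

```python
import json

def parse_topics(response_text):
    """Flatten the model's JSON answer into one flat record per article.

    Assumes the response is a JSON object with 0/1 values per topic and
    an "other" key holding an array of extra topics (names illustrative).
    """
    data = json.loads(response_text)
    row = {f"topic_{key.replace(' ', '_')}": value
           for key, value in data.items() if key != "other"}
    # Join any extra topics into a single comma-separated column
    row["other_topics"] = ", ".join(data.get("other", []))
    return row

sample = '{"markets": 1, "stock movement": 1, "other": ["vaccines"]}'
print(parse_topics(sample))
# {'topic_markets': 1, 'topic_stock_movement': 1, 'other_topics': 'vaccines'}
```

The flat 0/1 columns produced this way can feed directly into filters, aggregations, or a machine learning model.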

We can also instruct the model to bucket any topics not in our initial list under an “other” topic, to keep our results clean and consistent with the topics we’re interested in.

  1. Copy and paste the following text to your prompt, under the list of topics:

Format your response as a JSON object with each topic as the keys, and 0 or 1 as values. Add the “other” key to list potential topics not listed above, using an array as its value.

Another way to help the model understand what is expected is to provide examples with input and the desired output. We can do this in Examples located above the test cases.
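Conceptually, few-shot examples like these are prepended to the request before the new test case. A rough sketch of the idea in Python (illustrative only, not Dataiku’s internal implementation):

```python
# Sketch of few-shot prompting: the task, each example, and the new test
# case are concatenated into one prompt. Illustrative only, not Dataiku's
# internal implementation.
def build_prompt(task, examples, headline, text_preview):
    parts = [task]
    for ex in examples:
        parts.append(f"Headline: {ex['headline']}\n"
                     f"Text preview: {ex['text_preview']}\n"
                     f"Output: {ex['output']}")
    # The new test case ends with an empty "Output:" for the model to fill in
    parts.append(f"Headline: {headline}\nText preview: {text_preview}\nOutput:")
    return "\n\n".join(parts)

examples = [{"headline": "Travel stocks soar as encouraging vaccine study lifts Europe",
             "text_preview": "European stocks closed at over a five-week high...",
             "output": '{"stock movement": 1, "markets": 1, "other": []}'}]
prompt = build_prompt("Determine the topics covered in the article.",
                      examples,
                      "U.S. weekly jobless claims up slightly; leading indicator rises",
                      "The number of Americans filing for unemployment benefits...")
```

The model sees the input/output pairs and tends to imitate their structure when completing the final, empty `Output:`.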

  1. In the Examples area, click Add example.

  2. Copy and paste the following text into the corresponding boxes:

Input: Headline

Travel stocks soar as encouraging vaccine study lifts Europe

Input: Text preview

European stocks closed at over a five-week high on Wednesday, with travel stocks surfing a wave of optimism following reports of progress in developing a COVID-19 vaccine.

Output

{
  "company and product news": 0,
  "corporate debt and earnings": 0,
  "IPO": 0,
  "M&A and investments": 0,
  "stock movement": 1,
  "markets": 1,
  "legal and regulation": 0,
  "politics": 0,
  "currencies": 0,
  "gold and metals": 0,
  "energy and oil": 0,
  "fed and central banks": 0,
  "other": []
}

  3. Add another example with the following text:

Input: Headline

Oil climbs 2% on U.S. stock draw but gains capped as OPEC+ set to ease cuts

Input: Text preview

Oil prices rose 2% on Wednesday, supported by a sharp drop in U.S. crude inventories, but further gains were limited as OPEC and its allies are set to ease supply curbs from August as the global economy gradually recovers from the coronavirus pandemic.

Output

{
  "company and product news": 0,
  "corporate debt and earnings": 0,
  "IPO": 0,
  "M&A and investments": 0,
  "stock movement": 0,
  "markets": 0,
  "legal and regulation": 0,
  "politics": 0,
  "currencies": 0,
  "gold and metals": 0,
  "energy and oil": 1,
  "fed and central banks": 0,
  "other": []
}

  4. Run the prompt again and review the results.

Screenshot of the final prompt with examples and results from test cases.

Results from the test cases should be much improved. The output is in JSON format, which can be easily parsed, and any topics not in our original list are included within the “other” array. If there are no other topics in the output, the “other” array will be empty.

Use the dataset for test cases#

Now that we have an efficient and useful prompt, we can run it on more test cases from the dataset.

  1. Click on Use a dataset for test cases in the top right.

  2. Under Mapped from columns in, select reuters_headlines as the dataset.

  3. Map the Headline input to the Headlines column and the Text preview input to the Description column.

Screenshot with the columns mapped from the reuters_headlines dataset.

We can also set a validation rule to check that the output conforms to our expected JSON object format.

  1. Next to the Prompt design and model selection, click Settings.

  2. Under Validation & formatting in the settings, select the Expected output format of JSON object.

  3. Save the settings.

Screenshot with JSON object selected in the prompt settings.

When we run the model again, it will use a small sample of test cases from the dataset instead of the test cases we entered manually. Results will be based on the same prompt and examples.

  1. Click Run.

  2. Explore the results.

The model ran on eight test cases selected from the dataset. It also returns an estimated cost per 1,000 records to run the full dataset. Results are colored green if they pass the validation test for JSON objects and colored red if they fail the test.

Tip

If your results don’t pass the validation test, make sure your example outputs are in the correct format for JSON objects.
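The check itself is simple: the response must parse as a JSON object. A sketch of an equivalent test in plain Python (not Dataiku’s actual validator):

```python
import json

def is_valid_json_object(response_text):
    """Return True only if the response parses as a JSON object (a dict)."""
    try:
        return isinstance(json.loads(response_text), dict)
    except json.JSONDecodeError:
        return False

print(is_valid_json_object('{"markets": 1, "other": []}'))   # True
print(is_valid_json_object("The main topic is markets."))    # False
print(is_valid_json_object('["markets", "oil"]'))            # False: array, not object
```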

Screenshot with results that passed the validation test.

We can see that all the results passed the validation test, and we’re ready to save our prompt as a recipe so we can deploy it to the Flow.

Deploy a Prompt recipe#

You can save your crafted prompt as a recipe directly from Prompt Studio.

  1. In the top right, select Save as recipe.

  2. In the New prompt recipe window, make sure the Input dataset is reuters_headlines.

  3. Give the Output dataset a name or use the default, and choose a storage connection and output file format.

  4. Click Create recipe.

The New prompt recipe info window.

Dataiku creates a new Prompt recipe with the settings pre-filled from your work in Prompt Studio. You’ll see the prompt text, input columns, and examples with output.

Screenshot of the Prompt recipe with settings pre-filled from the Prompt Studio.

If you’d like, you can run the recipe to use the model on the entire headlines dataset and create an output dataset ready to be used for further analysis or model training. This will incur some costs from the generative AI company for processing 1,000 rows.
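The cost scales roughly linearly with row count and prompt length. As a back-of-the-envelope sketch (the per-token prices and token counts below are placeholder assumptions, not any provider’s actual rates):

```python
# Back-of-the-envelope LLM cost estimate. Prices and token counts are
# placeholder assumptions, not any provider's actual rates.
def estimate_cost(n_records, tokens_in, tokens_out, price_in_per_1k, price_out_per_1k):
    """Estimate total cost in dollars for running a prompt over n_records rows."""
    per_record = (tokens_in / 1000) * price_in_per_1k + (tokens_out / 1000) * price_out_per_1k
    return round(n_records * per_record, 4)

# e.g. 1,000 articles, ~400 prompt tokens (task + examples + article) and
# ~150 response tokens (the JSON object) per record
print(estimate_cost(1000, 400, 150, 0.0015, 0.002))  # 0.9
```

Few-shot examples raise the input token count for every record, so trimming them once the output is stable is an easy way to reduce cost.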


What’s next?#

Congratulations! You have designed an effective prompt using Prompt Studios and deployed it to the Flow using the Prompt recipe.

You can explore other concepts and tutorials on using LLMs in the Knowledge Base.