Tutorial | Monitoring models: A batch workflow within Dataiku#

Many data science workloads call for a batch deployment framework.

To allow comparison with other deployment contexts, this article presents how to monitor a model in a batch deployment framework while staying entirely within Dataiku.

Objectives#

In this tutorial, you will:

  • Create a model monitoring feedback loop using a batch workflow entirely within Dataiku.

Prerequisites#

For this case, you only need to satisfy the requirements listed in the introduction and prerequisites, including creating the starter project found there.

Score data within Dataiku#

For this case, we’ll be using the Dataiku Monitoring (Batch) Flow zone found in the starter project.

Dataiku screenshot of the Flow zone for batch monitoring.
  1. In the Dataiku Monitoring (Batch) Flow zone, select the pokemon_scored_dss dataset.

  2. Click Build to non-recursively run the Score recipe.

  3. Before moving to the monitoring setup, examine the schema of the output of the Score recipe compared to its input. You should notice the addition of a prediction column containing the predicted Pokemon type.

Dataiku screenshot of the output dataset from a Score recipe.

Note

You’ll notice that, in addition to a prediction column, the schema of the pokemon_scored_dss dataset includes four columns beginning with smmd_. This is because, in the parent Score recipe, we’ve chosen to output model metadata.
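
If you prefer to script this step rather than click Build, the same non-recursive build can be launched with Dataiku’s public Python API. The sketch below is only an illustration under a few assumptions: it runs inside DSS (for example, in a notebook), the project key MLOPS_BATCH_MONITORING is a placeholder for your own, and DSSDataset.build() is available in your dataikuapi version.

```python
import dataiku

# Minimal sketch: trigger the same non-recursive build programmatically.
# Assumes this code runs inside DSS (e.g., in a notebook); otherwise create a
# dataikuapi.DSSClient with your host URL and API key instead.
client = dataiku.api_client()
project = client.get_project("MLOPS_BATCH_MONITORING")  # placeholder project key

# NON_RECURSIVE_FORCED_BUILD mirrors building only this dataset in the UI
dataset = project.get_dataset("pokemon_scored_dss")
dataset.build(job_type="NON_RECURSIVE_FORCED_BUILD")
```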

Monitor model metrics#

The monitoring setup in this case is the same as that presented in Tutorial | Model monitoring basics, but let’s review the key points for completeness.

In Dataiku’s world, the Evaluate recipe takes a saved model and an input dataset of predictions, computes model monitoring metrics, and stores them in a model evaluation store (MES). In this case, we assume that we do not have ground truth, and so are not computing performance metrics.

  1. In the Dataiku Monitoring (Batch) Flow zone, select the model evaluation store called Monitoring - DSS Automation.

  2. Non-recursively Build it from the Actions sidebar, thereby running the Evaluate recipe.

  3. Open the MES to find one model evaluation.

Dataiku screenshot of a model evaluation store with one model evaluation.

Note

You can review the reference documentation on model evaluations if this is unfamiliar to you.
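
If it helps to verify this step programmatically, the model evaluation store can also be read through the public Python API. This is only a sketch: the project key and store id are placeholders (the id is the store’s internal identifier, not its display name), and method availability may depend on your DSS version.

```python
import dataiku

# Sketch: list the evaluations held in the model evaluation store.
client = dataiku.api_client()
project = client.get_project("MLOPS_BATCH_MONITORING")  # placeholder project key

mes = project.get_model_evaluation_store("MES_ID")      # placeholder store id
evaluations = mes.list_model_evaluations()
print(f"The store currently holds {len(evaluations)} model evaluation(s).")
```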

Automate model monitoring#

The same automation toolkit of metrics, checks, and scenarios that you find for Dataiku objects like datasets is also available for model evaluation stores.

  1. Within the Monitoring - DSS Automation MES, navigate to the Settings tab, and then the Status Checks subtab.

  2. Observe the example native and Python checks based on the data drift metric.

Dataiku screenshot of a status check for a MES metric.
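
For reference, custom Python checks in Dataiku follow a process(last_values, dataset, partition_id) template, where last_values maps metric ids to their latest data points. The sketch below only illustrates that pattern and is not the project’s own check; it assumes the generic template also applies to checks attached to a model evaluation store, and the metric id and threshold are placeholders to replace with the values shown on your Status Checks screen.

```python
# Illustrative custom Python check on a drift metric (placeholders throughout).
DRIFT_METRIC_ID = "drift_score"  # placeholder: use the metric id from your MES
DRIFT_THRESHOLD = 0.4            # arbitrary example limit


def process(last_values, dataset, partition_id):
    point = last_values.get(DRIFT_METRIC_ID)
    if point is None:
        return "WARNING", "The data drift metric has not been computed yet."
    drift = float(point.get_value())
    if drift > DRIFT_THRESHOLD:
        return "ERROR", f"Data drift ({drift:.2f}) exceeds the limit of {DRIFT_THRESHOLD}."
    return "OK", f"Data drift ({drift:.2f}) is within the acceptable limit."
```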

With acceptable limits for each chosen metric formally defined in checks, you can then incorporate these objects into a scenario, such as the Monitor batch job scenario included in the project, that:

  • Computes the model evaluation with the data at hand.

  • Runs checks to determine whether the metrics have exceeded their defined thresholds.

  • Sends alerts (or triggers other actions) if any checks return an error.

Dataiku screenshot of a sample scenario using a MES check.

Note

The Monitor batch job scenario found in the project uses a Microsoft Teams webhook, but many other reporters are available.

You’ll also notice that the scenario has no trigger attached. How often your scenario should run depends heavily on your specific use case, but you’ll want to make sure you have enough data for significant comparisons.
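
If you would rather drive the schedule from outside Dataiku (for example, from an existing orchestrator) instead of attaching a trigger, the scenario can be launched through the public Python API. The host, API key, project key, and scenario id below are placeholders; note that the scenario id may differ from its display name.

```python
import dataikuapi

# Sketch: launch the monitoring scenario remotely and wait for it to finish.
client = dataikuapi.DSSClient("https://dss.example.com:11000", "YOUR_API_KEY")
project = client.get_project("MLOPS_BATCH_MONITORING")  # placeholder project key

scenario = project.get_scenario("MONITOR_BATCH_JOB")    # placeholder scenario id
scenario.run_and_wait()  # waits for the run to finish
print("Monitoring scenario run completed.")
```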

Push to the Automation node#

This article presents the basis for building a working operationalized project that automatically batch scores, monitors, and alerts. Although simple, it highlights the main components to use, such as the Evaluate recipe, the model evaluation store, and scenarios controlled by metrics and checks.

In this simplified example, we performed both scoring and monitoring on the Design node. However, in a real-life batch use case contained within Dataiku’s universe, both scoring and monitoring should be done on an Automation node. A true production environment, separate from the development environment, is required in order to produce a consistent and reliable Flow.

Accordingly, the next steps would be to:

  • Create a project bundle on the Design node.

  • Publish it to the Project Deployer.

  • Deploy the bundle to the Automation node.

  • Run scenarios on the Automation node for both scoring and monitoring (that is, the entire Dataiku Monitoring (Batch) Flow zone).

Dataiku screenshot of a bundle on the Project Deployer.

Note

Follow the tutorial on batch deployment basics for a walkthrough of these steps.
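
If you eventually want to script these steps as well, the first two can be sketched with the public Python API along the following lines. This is not part of the tutorial’s required path: the project key and bundle id are placeholders, method availability depends on your DSS version, and the deployment itself is then created and updated on the Project Deployer (through its UI or its own API).

```python
import dataiku

# Sketch: create a bundle on the Design node and publish it to the Project Deployer.
client = dataiku.api_client()
project = client.get_project("MLOPS_BATCH_MONITORING")  # placeholder project key

bundle_id = "v1"                     # arbitrary example bundle id
project.export_bundle(bundle_id)     # create the project bundle
project.publish_bundle(bundle_id)    # push it to the Project Deployer
```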

What’s next?#

Having seen the monitoring setup for a batch workflow, you might want to move on to one of the following monitoring cases, which reuse the same project. They can be completed independently and in any order.