How To: Use Visual ML Assertions

ML assertions are checks that help you systematically verify whether your model's predictions align with the experience of your domain experts. Learn how to use ML assertions with this hands-on exercise.

Getting Started

You will need a Dataiku DSS project with a dataset that you can use to create a predictive model. We will use the starter project from the Machine Learning Basics course as an example. There are two ways you can import this project:

  • From the Dataiku DSS homepage, click +New Project > DSS Tutorials > ML Practitioner > Machine Learning Basics (Tutorial).

  • Download the zip archive for your version of Dataiku DSS, then from the Dataiku DSS homepage, click +New Project > Import project and choose the zip archive you downloaded.

You should now be on the project’s homepage.

  • Go to the project’s Flow and select the customers_labeled dataset.

  • In the right panel, click Lab > AutoML Prediction.

  • In the dialog that opens, choose to create a prediction model on the column high_revenue.

  • Click Create to create a new prediction modeling task.

Your quick model is now ready to train.

Quick model of high_revenue column, ready to train

Defining Assertions

There are no default assertions; you must define them based on your domain knowledge. For example, we may know from experience that while 10% of all customers are considered “high revenue”, those whose first order was made before they turn 22 years old are almost never “high revenue”. To create an assertion that captures this knowledge:

  • Click on the Design tab, then the Debugging panel.

  • In the Assertions section, click + Add an Assertion.

  • Change the name of the assertion from Assertion 1 to Under 22 not high revenue.

  • Define a condition where age_first_order < 22.

  • Change With valid ratio to 100% to reflect that we expect none of these customers to be high revenue.

Debugging panel of the Design tab in a visual model
  • Click Train.

After training is complete, a summary of the diagnostics is available. We can see that the assertion failed for the logistic regression model.

Summary results of quick model for high_revenue column, with diagnostics tooltip
  • Click Diagnostics. This opens the model to the Training Information section.

  • Navigate to the Metrics and assertions section.

Here we can see that 134 customers in the test set made their first order before the age of 22, and the model predicts that 1 will be a high revenue customer. This contradicts our domain knowledge that no such customers should exist, and gives us pause before deploying the model.
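Conceptually, the assertion is doing a simple subpopulation check: filter the test set down to the rows matching the condition, then measure how often the model predicts the expected class. The sketch below illustrates this in plain Python with a small synthetic test set (the records and numbers are made up for illustration; only the column meaning follows the tutorial's customers_labeled dataset):

```python
# Hypothetical sketch of what an ML assertion computes.
# Synthetic test set: (age_first_order, predicted_high_revenue) pairs.
test_set = [
    (19, False), (20, False), (21, True), (21, False), (30, True), (45, False),
]

# Rows matching the assertion condition: first order made before age 22.
matching = [pred for age, pred in test_set if age < 22]
nb_matching_rows = len(matching)

# The expected class is "not high revenue"; the valid ratio is the share
# of matching rows that the model predicts as expected.
valid_ratio = sum(1 for pred in matching if not pred) / nb_matching_rows

# With the valid ratio set to 100%, the assertion passes only if every
# matching row is predicted as expected.
result = valid_ratio >= 1.0
print(nb_matching_rows, valid_ratio, result)  # 4 0.75 False
```

Here one of the four under-22 customers is predicted as high revenue, so the valid ratio is 0.75 and the assertion fails, mirroring what we see in the model's diagnostics.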

Metrics and assertions section of model results

Assertions in Deployed Model Metrics

ML assertions are incorporated into metrics and checks for models deployed to the Flow. To see these:

  • In the logistic regression model, click Deploy.

  • In the Flow, open the deployed model and navigate to the Metrics & Status tab. By default, Build duration is the only metric displayed.

  • Click the Display button that currently reads 1/13 Metrics.

For each assertion, there are three available metrics: the number of dropped rows, the number of rows matching the assertion condition (in this case, customers whose age at first purchase is under 22), and the proportion of rows that are members of the expected class, according to the definition of the assertion.

  • Add each of these metrics to the display.

Model metrics display settings
  • Click Save.

The metrics match what we saw in the original analysis: 134 customers in the test set made their first order before the age of 22, and the model predicts 133 of them as not high revenue, for a valid ratio of 133/134 ≈ 0.9925.

Like any other metrics, you can create checks for these metrics and use those checks in scenarios that rebuild this model. That way, when this project is put into production, you can be automatically notified when the rebuilt model does not pass these checks.
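The kind of check a scenario could run on these metrics can be sketched in plain Python. This is only an illustration of the logic; in Dataiku DSS you would configure the check visually on the metric, and the function name and threshold here are hypothetical:

```python
# Hypothetical sketch of a check on an assertion's metrics.
def check_assertion(metrics: dict, min_valid_ratio: float = 1.0) -> str:
    """Return 'OK' when the assertion holds, 'ERROR' otherwise."""
    if metrics["nbMatchingRows"] == 0:
        # No rows matched the condition, so there is nothing to verify.
        return "EMPTY"
    return "OK" if metrics["validRatio"] >= min_valid_ratio else "ERROR"

# Metrics from the tutorial's model: 133 of 134 matching rows were
# predicted as expected, short of the 100% valid ratio we configured.
status = check_assertion({"nbMatchingRows": 134, "validRatio": 133 / 134})
print(status)  # ERROR
```

A failing check like this one is what would trigger the notification when a rebuilt model no longer satisfies the assertion.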

Model metrics

Assertions in Evaluation Metrics

ML assertions can also be computed as one of the metrics from an Evaluate recipe. To see this:

  • From within the deployed model, click Actions and then Evaluate.

  • In this project, we don’t have an external validation dataset, so for now simply select customers_labeled as the input dataset.

  • Create customer_scores and customer_metrics as the output datasets, then click Create Recipe.

By default, the Evaluate recipe computes any ML assertions as part of its output.

Evaluate recipe outputs
  • Click Run.

The resulting metrics dataset has a column called assertionsMetrics, which contains JSON of the form:

{"Under 22 not high revenue": {
  "nbMatchingRows": 689,
  "validRatio": 0.9970972423802612,
  "nbDroppedRows": 0,
  "result": false
}}

You can parse this with your own code, or use the Unnest object processor in a Prepare recipe to work with this data.
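If you go the code route, the standard library's json module is enough. The sketch below parses the example payload shown above; in practice you would read the string from the assertionsMetrics column of the customer_metrics dataset rather than hard-coding it:

```python
# Parse the assertionsMetrics JSON emitted by the Evaluate recipe.
# The string below is the example payload from this tutorial.
import json

raw = """
{"Under 22 not high revenue": {
  "nbMatchingRows": 689,
  "validRatio": 0.9970972423802612,
  "nbDroppedRows": 0,
  "result": false
}}
"""

assertions_metrics = json.loads(raw)

# The top-level keys are the assertion names; each value holds its metrics.
for name, metrics in assertions_metrics.items():
    status = "passed" if metrics["result"] else "failed"
    print(f"{name}: {status} "
          f"({metrics['validRatio']:.2%} valid over {metrics['nbMatchingRows']} rows)")
```

Note that JSON's false becomes Python's False, so metrics["result"] can be used directly in conditionals.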