Concept | Model maintenance in Dataiku Govern

The last step in our governance framework is to monitor our items. Let’s outline how we can use model metrics to track and improve the performance of model versions.

Model metrics

Take a look at the Model registry page, which lists all models and model versions from your connected Dataiku nodes. We can also see valuable model metrics on this page. The Govern node pulls standard and custom model metrics from the Design node.

Let’s see an example. Say we want to monitor the project Coupon Redemption. We can click the dropdown to find the active model version that we want to analyze: in this case, a random forest model.

Dataiku Govern screenshot highlighting the active version of a model.

By default, the Metric to Focus dropdown is set to ROC AUC, so each model version displays its ROC AUC and ROC AUC Drift values. You can change the Metric to Focus at any time.
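The same information can be read programmatically from the Design node. The following is a minimal sketch, assuming the dataikuapi Python client; the host URL, API key, project key, saved model ID, and metric key are placeholders, and the exact methods available may vary by Dataiku version.

import dataikuapi

# Connect to the Design node; the host URL and API key are placeholders.
client = dataikuapi.DSSClient("https://dss-design.example.com", "YOUR_API_KEY")
project = client.get_project("COUPON_REDEMPTION")  # hypothetical project key

# Fetch the saved model and identify its currently active version.
saved_model = project.get_saved_model("SAVED_MODEL_ID")  # hypothetical saved model ID
active_version = saved_model.get_active_version()
print("Active version:", active_version["id"])

# Read the performance metrics recorded for that version, such as ROC AUC.
details = saved_model.get_version_details(active_version["id"])
perf = details.get_performance_metrics()
print("ROC AUC:", perf.get("auc"))  # metric key may differ by model type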

To see multiple model metrics at once, you can look at the Model metrics tab in the Details panel.

Dataiku Govern screenshot highlighting the Model metrics tab in the Details panel.

Note

Most of these metrics show the initial values computed on the Design or Automation node when the model version was built. Drift metrics, however, come from the model evaluations stored in a model evaluation store (MES).

The MES must live in the same project as the saved model whose version is being evaluated. You can configure the MES to opt out of the Govern sync if needed; otherwise, its metrics are updated each time an evaluation runs.
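For reference, evaluations in an MES can also be inspected with the dataikuapi client. This is a hedged sketch: the MES ID is illustrative, and the attributes exposed by the evaluation objects may differ by version.

import dataikuapi

client = dataikuapi.DSSClient("https://dss-design.example.com", "YOUR_API_KEY")
project = client.get_project("COUPON_REDEMPTION")  # hypothetical project key

# The model evaluation store lives in the same project as the saved model
# being governed; the MES ID below is a placeholder.
mes = project.get_model_evaluation_store("MES_ID")

# Each evaluation run adds a model evaluation to the store.
for evaluation in mes.list_model_evaluations():
    info = evaluation.get_full_info()
    # info.metrics holds the computed metrics, including drift-related ones;
    # the exact keys depend on the DSS version and evaluation settings.
    print(info.metrics)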

Deployment

After monitoring metrics in the Govern node, you may decide to update your models in the Design or Automation node. You might also want to deploy the updated models.
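As an illustration, pushing an updated project bundle to an Automation node can be scripted with the Project Deployer. This is a minimal sketch, assuming the Project Deployer is configured and the updated model version is included in a project bundle; all IDs and the settings field name are illustrative.

import dataikuapi

design_client = dataikuapi.DSSClient("https://dss-design.example.com", "YOUR_API_KEY")
project = design_client.get_project("COUPON_REDEMPTION")  # hypothetical project key

# Create and publish a new bundle that contains the updated model version.
project.export_bundle("v2-updated-model")   # bundle ID is a placeholder
project.publish_bundle("v2-updated-model")

# Point the existing deployment at the new bundle and update it.
deployer = design_client.get_projectdeployer()
deployment = deployer.get_deployment("coupon-redemption-on-automation")  # placeholder ID
settings = deployment.get_settings()
settings.get_raw()["bundleId"] = "v2-updated-model"  # field name may vary by version
settings.save()
deployment.start_update()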

If you’re using sign-offs in Dataiku Govern, you can learn how to reset sign-offs for a new deployment cycle in this article on the sign-off process.

To learn more about managing the model lifecycle, you can start the MLOps Practitioner learning path or see the MLOps section of the reference documentation.

Finally, if you are curious about deployment bias and data drift, check out resources on Responsible AI.