Automate model monitoring
Of course, we don’t want to manually build the model evaluation stores every time new data is available. We can automate this task with a scenario.
In addition to scheduling the computation of metrics, we can also automate actions based on the results. For example, assume our goal is to automatically retrain the model if a metric such as data drift exceeds a defined threshold. Let’s create the bare bones of a scenario to accomplish this objective.
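Conceptually, the decision this scenario will automate boils down to the sketch below. It is plain Python for illustration only; compute_drift() and retrain() are hypothetical stand-ins for the MES build and the retraining step, not Dataiku APIs.

```python
# Plain-Python sketch of the decision this scenario automates.
# compute_drift() and retrain() are hypothetical stand-ins, not Dataiku APIs.

DRIFT_THRESHOLD = 0.4  # the hard maximum we'll put on the MES check below

def compute_drift() -> float:
    """Stand-in for rebuilding the MES on the latest scored data."""
    return 0.45  # pretend the newest month of data has drifted

def retrain() -> None:
    """Stand-in for rebuilding the saved model."""
    print("Retraining the model on fresh data...")

if compute_drift() > DRIFT_THRESHOLD:
    retrain()
```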
Note
In this case, we will be monitoring a MES metric. We can also monitor datasets with data quality rules.
Create a check on a MES metric
Our first step is to choose a metric important to our use case. Since it’s one of the most common concerns, let’s choose data drift.
From the Flow, open the mes_for_input_drift, and navigate to the Settings tab.
Go to the Status checks subtab.
Click Metric Value is in a Numeric Range.
Name the check Data Drift < 0.4.
Choose Data Drift as the metric to check.
Set the Soft maximum to 0.3 and the Maximum to 0.4.
Click Check to confirm it returns an error.
Click Save.
Now let’s add this check to the display of metrics for the MES.
Navigate to the Status tab.
Click X/Y Metrics.
Add both the data drift metric and the new check to the display.
Click Save once more.
Tip
Here we’ve deliberately chosen a data drift threshold low enough to throw an error. Defining an acceptable level of data drift depends on your use case.
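To make the Soft maximum and Maximum settings concrete, here is a rough sketch of the status logic such a numeric-range check follows, using the values chosen above. It is a conceptual illustration, not Dataiku’s implementation (boundary handling at exactly 0.3 or 0.4 may differ in the product).

```python
# Conceptual illustration of the check's status logic (not Dataiku's code).
SOFT_MAX = 0.3   # Soft maximum: above this, the check returns a warning
HARD_MAX = 0.4   # Maximum: above this, the check returns an error

def check_drift(drift: float) -> str:
    if drift > HARD_MAX:
        return "ERROR"    # this is the failure our scenario will react to
    if drift > SOFT_MAX:
        return "WARNING"
    return "OK"

print(check_drift(0.25))  # OK
print(check_drift(0.35))  # WARNING
print(check_drift(0.45))  # ERROR
```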
Design the scenario
Just like any other check, we can now use this MES check to control the state of a scenario run.
From the Jobs menu in the top navigation bar, open the Scenarios page.
Click + New Scenario.
Name the scenario Retrain Model.
Click Create.
First, we need the scenario to build the MES.
Navigate to the Steps tab of the new scenario.
Click Add Step.
Select Build / Train.
Name the step Build MES.
Click Add Item > Evaluation store > mes_for_input_drift > Add Item.
Next, the scenario should run the check we’ve created on the MES.
Still in the Steps tab, click Add Step.
Select Verify rules or run checks.
Name the step Run MES checks.
Click Add Item > Evaluation store > mes_for_input_drift > Add Item.
Finally, we need to build the model, but only when the checks fail (the sketch after these steps illustrates this conditional logic).
Click Add Step.
Select Build / Train.
Name the step Build model.
Click Add Item > Model > Predict authorized_flag (binary) > Add Item.
Change the Run this step setting to If some prior step failed (here, the Run MES checks step).
Check the box to Reset failure state.
Click Save when finished.
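Putting the three steps together, the scenario’s control flow amounts to the sketch below. Again, this is plain Python for illustration; the function names are hypothetical placeholders, and the else branch mirrors the If some prior step failed setting combined with Reset failure state.

```python
# Plain-Python sketch of the scenario's step logic; build_mes(), drift_check_passes(),
# and retrain_model() are hypothetical placeholders, not Dataiku APIs.

def build_mes() -> float:
    """Step 1 (Build MES): recompute the evaluation store metrics; returns drift here."""
    return 0.45

def drift_check_passes(drift: float) -> bool:
    """Step 2 (Run MES checks): the 'Data Drift < 0.4' check."""
    return drift <= 0.4

def retrain_model() -> None:
    """Step 3 (Build model): only reached when the check fails."""
    print("Retraining the model...")

drift = build_mes()
if drift_check_passes(drift):
    print("Checks passed; no retraining needed.")
else:
    # 'Run this step: If some prior step failed' -> retrain on failure;
    # 'Reset failure state' -> the scenario still finishes successfully.
    retrain_model()
    print("Check failed; model retrained and failure state reset.")
```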
Tip
For this demonstration, we’ll trigger the scenario manually. In real-world cases, we’d create a trigger and/or a reporter based on how often or under what conditions the scenario should run and who should be notified. Try out these features in Tutorial | Automation scenarios and Tutorial | Scenario reporters.
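If you later want to launch this scenario from outside DSS, for example from a cron job or a CI pipeline, a minimal sketch with the dataikuapi client could look like the following. The host URL, API key, project key, and scenario id are placeholders to replace with your own values.

```python
import dataikuapi

# Placeholders: replace with your instance URL, an API key, your project key,
# and the scenario id (visible in the scenario's URL).
client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
project = client.get_project("MY_PROJECT_KEY")
scenario = project.get_scenario("RETRAINMODEL")

# Programmatic equivalent of clicking Run; run_and_wait blocks until the run
# finishes and (in recent DSS versions) raises if the run fails.
run = scenario.run_and_wait()
print("Scenario run finished:", run)
```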
Run the scenario
Let’s introduce another month of data to the pipeline, and then run the scenario.
Return to the new_transactions dataset in the Model Scoring Flow zone.
On the Settings tab, switch the data to the next month as done previously.
Return to the Retrain Model scenario.
Click Run to manually trigger the scenario.
On the Last Runs tab, observe its progression.
Assuming your MES check failed as intended, open the saved model in the Flow to see a new active version! (You can also confirm this from code, as sketched below.)
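To confirm the retraining from code rather than from the Flow, you could list the saved model’s versions through the public API. As before, this is only a sketch; the connection details and saved model id are placeholders.

```python
import dataikuapi

# Placeholders for the instance URL, API key, project key, and saved model id
# (the saved model id appears in the model's URL in the Flow).
client = dataikuapi.DSSClient("https://dss.example.com:11200", "YOUR_API_KEY")
project = client.get_project("MY_PROJECT_KEY")
saved_model = project.get_saved_model("SAVED_MODEL_ID")

# Each entry describes one trained version; exact fields vary by DSS version.
for version in saved_model.list_versions():
    print(version)

print("Active version:", saved_model.get_active_version())
```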
Note
The goal of this tutorial is to cover the foundations of model monitoring, but you can also think about how this specific scenario would fall short of real-world requirements.
For one, it retrained the model on the original data!
For another, model monitoring is a production task, and so this kind of scenario should be moved to the Automation node.