Model Monitoring
Learn how to monitor machine learning models in production environments, including both batch and real-time methods.
Tutorials
- Tutorial | MLOps introduction & prerequisites (MLOps part 0)
- Tutorial | Model monitoring basics (MLOps part 1)
- Tutorial | API endpoint monitoring basics (MLOps part 10)
- Tutorial | Monitoring models: An introduction to monitoring in different contexts
- Tutorial | Monitoring models: A batch workflow within Dataiku
- Tutorial | Monitoring models: An API endpoint on a Dataiku API node
- Tutorial | Monitoring models: An exported Python model scored externally
- Tutorial | Monitoring models: An exported Java model scored externally
FAQ | How can I get model monitoring metrics in a dataset format?
The key Dataiku object for model monitoring is the model evaluation store. However, in many cases, you may want the model monitoring metrics as a standalone dataset so that they can serve as an input to other analyses.
You can achieve this in two ways:
When configuring an Evaluate recipe (at creation or later from the Input/Output tab), one output option is to add a metrics dataset.
This dataset contains the main performance metrics, but not all of the drift metrics.
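Once the Evaluate recipe has run, that metrics dataset behaves like any other Dataiku dataset and can be read from a Python recipe or notebook in the project. The sketch below is a minimal example assuming a metrics output dataset named metrics_output (a placeholder; use the name from your own Flow):

```python
import dataiku

# Placeholder name: replace with the metrics dataset produced by your Evaluate recipe.
metrics_ds = dataiku.Dataset("metrics_output")

# One row per Evaluate recipe run, with the main performance metrics as columns.
df = metrics_ds.get_dataframe()
print(df.tail())  # most recent evaluation runs
```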

Alternatively, you can create a metrics dataset from the Status tab of the model evaluation store itself.
This second approach is more complete and offers more format options, which you can configure from the Settings tab of the resulting dataset.
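If you prefer to pull the metrics programmatically rather than materialize a dataset in the Flow, the public Python API also exposes the model evaluation store directly. The sketch below assumes it runs inside the project, uses "EVALUATION_STORE_ID" as a placeholder for your store's ID, and assumes each evaluation's full info exposes its metrics as a dict; exact attribute names can vary between Dataiku versions, so check the API reference for your release.

```python
import dataiku
import pandas as pd

client = dataiku.api_client()
project = client.get_default_project()

# Placeholder: replace with your model evaluation store's ID
# (visible in its URL, or via project.list_model_evaluation_stores()).
mes = project.get_model_evaluation_store("EVALUATION_STORE_ID")

# Collect the metrics of each stored evaluation run (assumed to be a dict)
# and assemble them into a DataFrame for further analysis.
rows = [evaluation.get_full_info().metrics for evaluation in mes.list_model_evaluations()]
df = pd.DataFrame(rows)
print(df.head())
```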
