Concept | Process governance for MLOps
Controlling models is one of the keys to successful MLOps management. Without control over models, an organization risks a model causing inadvertent damage.
Although the steps in the MLOps process can be difficult to formalize, doing so is worthwhile because it provides visibility into progress and decision making.
Let’s look at three categories of model control and governance:
Audit and documentation
Human-in-the-loop
Pre-production verification
Audit and documentation
Both internal and external stakeholders will want to be able to ask questions about deployed models, including what experiments were conducted and why each decision was made. To meet these needs, consider keeping a full log of all changes made during development.
Some ways to accomplish this include the following (a minimal audit-trail sketch follows the list):
Create an audit trail. Dataiku includes an audit trail that logs all actions performed by users.
Create manual documentation, such as in a wiki.
Use project version control. In Dataiku, each change is automatically recorded in a Git repository.
Configure audit and query logging for API services in production.
Use the Flow Document Generator and Model Document Generator for creating automatic documentation.
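To make the audit-trail idea concrete, here is a minimal sketch of what such a log records: who did what, when, and on which object. All names here (the log path, record_event, the example details) are illustrative only; they are not part of Dataiku's audit trail, which the platform manages for you.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative location for an append-only audit log (JSON Lines format).
AUDIT_LOG = Path("audit_trail.jsonl")

def record_event(user, action, target, details=None):
    """Append one structured audit entry: who, what, when, and on which object."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "target": target,
        "details": details or {},
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log a retraining decision so stakeholders can audit it later.
record_event(
    user="data_scientist_a",
    action="retrain_model",
    target="saved_model:churn_prediction",
    details={"reason": "quarterly refresh", "training_dataset": "customers_2024Q2"},
)
```

Whatever mechanism you use, the key properties are the same: entries are structured, timestamped, attributed to a user, and never overwritten, so they can answer "what was done and why" long after the fact.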
Human-in-the-loop
Scaling AI projects requires both people and automation: automation lets people work faster, while humans in the loop make sure AI projects align with expectations so that you can trust the output.
One example of keeping humans in the loop is to require sign-offs on model versions before deploying a model from one environment to another. For example, you might require human approval to advance a model from a development environment to a test environment, or from test to production.
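A sign-off gate can be as simple as refusing to promote a model version unless an approval record exists for the target environment. The sketch below is a minimal illustration of that idea; the approval store and function names are hypothetical, not a Dataiku feature.

```python
# Minimal sign-off gate: promotion is blocked until a reviewer has approved
# this exact model version for the target environment. All names are illustrative.
APPROVALS = {
    # (model_id, version, target_environment) -> approving reviewer
    ("churn_prediction", "v12", "test"): "ml_lead",
}

def promote_model(model_id, version, target_env):
    reviewer = APPROVALS.get((model_id, version, target_env))
    if reviewer is None:
        raise PermissionError(
            f"{model_id} {version} has no sign-off for '{target_env}'; request a review first."
        )
    print(f"Promoting {model_id} {version} to {target_env} (approved by {reviewer}).")
    # ...call your deployment tooling here...

promote_model("churn_prediction", "v12", "test")   # succeeds
# promote_model("churn_prediction", "v12", "production")  # raises: no sign-off yet
```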
Here are some ways to implement human-in-the-loop interactions:
Document and revisit the goals of the project.
Incorporate responsible AI strategies. For example, use interactive statistics, individual prediction explanations, and model fairness reports.
Monitor the model for drift.
Run What if? analyses.
Perform model evaluation and model validation throughout the MLOps process.
Implement automated alerts and notifications through scenario reporters. (A simple drift-check-and-alert pattern is sketched after this list.)
Implement model version sign-offs.
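The sketch below shows the general pattern behind drift monitoring with automated alerts: compare a drift score against an agreed threshold and notify a human when it is exceeded. The drift metric, threshold, and notify() function are placeholders for whatever your monitoring stack provides; in Dataiku, model evaluation stores and scenario reporters play these roles.

```python
# Illustrative drift check with a notification hook. Values and names are placeholders.
DRIFT_THRESHOLD = 0.2  # illustrative threshold on a 0-1 drift score

def notify(message):
    # Placeholder for a real channel (email, chat, scenario reporter, ...).
    print(f"[ALERT] {message}")

def check_drift(model_name, drift_score):
    """Return True (and alert a human) when drift exceeds the agreed threshold."""
    if drift_score > DRIFT_THRESHOLD:
        notify(
            f"Data drift detected for {model_name}: score {drift_score:.2f} "
            f"exceeds threshold {DRIFT_THRESHOLD:.2f}. Review before the next deployment."
        )
        return True
    return False

# Example: a score produced by a model evaluation run (value is illustrative).
check_drift("churn_prediction", drift_score=0.31)
```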
Pre-production verification
Pre-production verification refers to the ways that you can validate ML models before deploying them into production. Dataiku offers numerous capabilities for implementing pre-production verification in the MLOps process (a minimal pre-deployment gate is sketched after the list below). You can take advantage of these capabilities to:
Create data quality rules on datasets and checks on models, model evaluation stores, and managed folders.
Perform model evaluations.
Run model comparisons.
Design models in a development environment and deploy to a production environment (using production deployments and bundles).
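The following sketch illustrates the gate that these capabilities enable: a candidate model must at least match the current champion on an agreed validation metric before anything is deployed. The metric values and the deploy() call are illustrative placeholders, not Dataiku APIs.

```python
# Illustrative pre-production gate: the candidate must beat (or match) the champion
# on the validation metric before a deployment happens. All names are placeholders.
def deploy(model_id, version):
    print(f"Deploying {model_id} {version} to production...")  # placeholder for real tooling

def pre_production_gate(candidate_auc, champion_auc, min_gain=0.0):
    """Approve deployment only if the candidate is at least as good as the champion."""
    return candidate_auc >= champion_auc + min_gain

candidate_auc = 0.87   # e.g. from a model evaluation on held-out data (illustrative)
champion_auc = 0.84    # metric of the model currently in production (illustrative)

if pre_production_gate(candidate_auc, champion_auc):
    deploy("churn_prediction", "v13")
else:
    print("Candidate did not pass pre-production verification; keeping the champion.")
```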