Concept | Process governance for MLOps
Controlling your models is one of the keys to successful MLOps management. Without control over models, an organization risks an inadvertently harmful model causing damage.
Although steps in the MLOps process might be difficult to formalize, doing so is worthwhile because it provides visibility into progress tracking and decision making.
Let’s look at three categories of model control and governance:
Audit and documentation
Both internal and external stakeholders will want to be able to ask questions about deployed models, including what experiments were conducted and why each decision was made. To meet these needs, consider keeping a full log of all changes made during development.
Some ways to accomplish this include:
Create an audit trail. Dataiku includes an audit trail that logs all actions performed by users.
Create manual documentation, such as in a wiki.
Use project version control. In Dataiku, each change we make is automatically recorded in the Git repository.
Configure audit and query logging for API services in production.
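An audit trail of this kind can be sketched generically. The snippet below is an illustrative append-only log in plain Python, not Dataiku's built-in audit trail: every governance-relevant action is written as one JSON line with a timestamp and the acting user, so the full history can later be replayed or queried.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only audit log: each action becomes one JSON line.
AUDIT_LOG = Path("audit_trail.jsonl")

def record_action(user: str, action: str, details: dict) -> dict:
    """Append a timestamped audit entry and return it."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "action": action,
        "details": details,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def read_audit_trail() -> list:
    """Return all recorded entries, oldest first."""
    if not AUDIT_LOG.exists():
        return []
    return [json.loads(line) for line in AUDIT_LOG.read_text().splitlines()]

record_action("alice", "DEPLOY_MODEL", {"model": "churn_v3", "env": "test"})
print(len(read_audit_trail()))
```

Because entries are only ever appended, the log doubles as documentation: auditors can reconstruct what was done, by whom, and in what order.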
Humans in the loop
To scale our AI projects, we need both people and automation. We need machines to allow people to work faster. We need humans in the loop to make sure our AI projects align with our values and do what we expect so that we continue to trust the output.
One example of keeping humans in the loop is to require sign-offs on model versions before deploying the model from one environment to another, such as from development to test or from test to production.
Here are some ways we can implement human-in-the-loop interactions to avoid pitfalls:
Document and revisit the goals of the project.
Monitor the model for drift.
Run "what if" scenarios (interactive scoring).
Perform model evaluation and model validation throughout the MLOps process.
Implement automated alerts and notifications.
Implement model version sign-offs.
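Of the items above, drift monitoring is especially amenable to automation. The sketch below is one common approach, not a Dataiku API: it compares a feature's production distribution against its training baseline using the Population Stability Index (PSI), where a value above 0.2 is a widely used alert threshold.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a new sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)  # clamp out-of-range values
            counts[max(i, 0)] += 1
        # Small epsilon keeps empty bins from causing division by zero.
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical score distributions: production has shifted upward.
training_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8]
production_scores = [0.5, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99]
drift = psi(training_scores, production_scores)
print(f"PSI = {drift:.3f}, drift alert: {drift > 0.2}")
```

A check like this can feed the automated alerts mentioned above: when the PSI crosses the threshold, notify a human rather than silently continuing to serve predictions.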
Enterprise customers can take advantage of Dataiku's Govern node, which allows you to block the deployment of projects or models that lack a sign-off.
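The gating behavior can be sketched generically. The snippet below is a minimal, hypothetical sign-off gate in plain Python (it is not the Govern node's API): a model version can only be deployed once every required reviewer role has approved it.

```python
# Reviewer roles whose approval is required before deployment (illustrative).
REQUIRED_REVIEWERS = {"risk_officer", "lead_data_scientist"}

class ModelVersion:
    def __init__(self, name: str):
        self.name = name
        self.approvals = set()

    def sign_off(self, role: str) -> None:
        """Record an approval from the given reviewer role."""
        self.approvals.add(role)

    def can_deploy(self) -> bool:
        return REQUIRED_REVIEWERS.issubset(self.approvals)

def deploy(version: ModelVersion, target_env: str) -> str:
    """Refuse to deploy until all required sign-offs are present."""
    if not version.can_deploy():
        missing = REQUIRED_REVIEWERS - version.approvals
        raise PermissionError(f"Missing sign-offs: {sorted(missing)}")
    return f"{version.name} deployed to {target_env}"

v = ModelVersion("churn_model_v3")
v.sign_off("risk_officer")
v.sign_off("lead_data_scientist")
print(deploy(v, "production"))
```

The key design point is that the gate sits in the deployment path itself, so an unapproved version cannot reach production by accident.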
Pre-production verification
Pre-production verification refers to the ways that we can validate our ML models before deploying them into production. Dataiku offers numerous capabilities for implementing pre-production verification in our MLOps process. We can take advantage of these capabilities to:
Create metrics and checks
Perform model evaluations
Compare model evaluations by running model comparisons
Design models in a development environment and deploy to a production environment (using production deployments and bundles)