Product Pillar: AI Operationalization

The AI Operationalization pillar seeks to control, monitor, and govern the deployment and ongoing operation of machine learning models and AI applications.

../../../_images/03_GRAPHIC_PILLARS_ai-operationalization.png

Historically, many enterprises have struggled to realize actual business value from AI projects. Moving from a prototype on a data scientist’s laptop to a fully operationalized model cleared by IT is often a formidable challenge, and once a model is successfully deployed, monitoring and governing it in production raises a new set of challenges.

This lesson introduces some of the most important tools for deploying and managing models with DSS.

Model Version Management

After building a model, users can easily deploy it to a production environment by downloading the project as an application bundle and uploading it to the Automation node. Once in production, users can define custom metrics to verify data consistency and track model performance over time. Placing conditions on these metrics allows users to create checks that issue alerts or trigger scenarios.
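These metrics and checks can also be driven programmatically. Below is a minimal sketch using the Dataiku public Python API (the dataikuapi package); the instance URL, API key, project key, and dataset name are all hypothetical:

    import dataikuapi

    # Connect to the DSS node (placeholder URL and API key)
    client = dataikuapi.DSSClient("https://dss.example.com:11200", "my-api-key")
    project = client.get_project("NY_TAXI_FARES")

    # Compute the metrics defined on a dataset, then run its checks
    dataset = project.get_dataset("taxi_trips")
    dataset.compute_metrics()
    results = dataset.run_checks()

    # Print the raw check report; the outcome of each check
    # (OK, WARNING, ERROR) is reported in this structure
    for result in results.get("results", []):
        print(result)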

Together with metrics and checks, scenarios can be used to create validation feedback loops that automate many important parts of the model lifecycle. Users can instruct DSS to initiate jobs using any number of predefined triggers, such as a time interval or the modification of a dataset, or triggers written entirely in custom Python code. Examples of these jobs include rebuilding a dataset, retraining a model, or redeploying an application bundle. Users can also add reporters to stay informed of scenario activity.
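As an illustration, a scenario's custom steps are written against the built-in Scenario API. The sketch below is meant to run inside a scenario's "Execute Python code" step; the dataset name and saved model identifier are hypothetical:

    from dataiku.scenario import Scenario

    scenario = Scenario()

    # Rebuild the training dataset, then retrain the saved model
    # (the identifier is the model's id as shown in the Flow)
    scenario.build_dataset("taxi_training_data")
    scenario.train_model("sNLVpYQp")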

In addition to deploying improved versions of a model, these tools can also be used to roll back to previous versions when necessary.

../../../_images/intro-scenario-steps.png

This automation scenario includes three pre-built steps on a monthly trigger. The first step rebuilds the training dataset, the second retrains the model, and the third updates the API Deployer with a new version of the deployment.

Real-time API-based Scoring

A common use case is managing the entire model lifecycle while scoring new data in real time. For example, the NY taxi fare project sends locations and times of day and receives fare predictions in return. The API node, along with the API Deployer, makes this possible.

API nodes are individual servers that do the actual work of answering REST calls. They are horizontally scalable and highly available, and can be deployed either as a set of servers or as containers orchestrated by Kubernetes (which allows deployment on-premises or on a serverless stack in the cloud).
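Concretely, a deployed prediction endpoint answers plain HTTP requests. Here is a minimal sketch following the API node's public REST URL scheme; the host, service id (fare-prediction), endpoint id (predict_fare), and feature names are all hypothetical:

    import requests

    # Hypothetical API node host, service id, and endpoint id
    url = ("https://apinode.example.com:12000"
           "/public/api/v1/fare-prediction/predict_fare/predict")

    # Features of the single record to score
    payload = {"features": {"pickup_latitude": 40.7484,
                            "pickup_longitude": -73.9857,
                            "hour_of_day": 18}}

    response = requests.post(url, json=payload)
    response.raise_for_status()

    # The prediction is returned under the "result" key
    print(response.json()["result"])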

The API Deployer is the visual interface for working with API nodes: a user-friendly, unified interface that makes it easy to design and deploy powerful real-time APIs. These APIs can be used for scoring unseen data, regardless of whether the underlying models are visual models generated by DSS or custom models written in Python or R.
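The same endpoints can also be queried from Python through the client shipped in the dataikuapi package. A short sketch, reusing the hypothetical names above:

    import dataikuapi

    # Point the client at the hypothetical API node and service
    client = dataikuapi.APINodeClient("https://apinode.example.com:12000",
                                      "fare-prediction")

    record = {"pickup_latitude": 40.7484,
              "pickup_longitude": -73.9857,
              "hour_of_day": 18}

    # Score one record against the prediction endpoint
    prediction = client.predict_record("predict_fare", record)
    print(prediction["result"])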

Moreover, in situations where the API user does not easily know all the features of an input record, the DSS API node includes a real-time data enrichment feature that handles these cases with a lookup in an additional table.
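For instance, assuming the hypothetical endpoint above were configured with such an enrichment keyed on a driver identifier, the caller could send only the fields it knows and let the API node look up the rest before scoring:

    import dataikuapi

    client = dataikuapi.APINodeClient("https://apinode.example.com:12000",
                                      "fare-prediction")

    # Only the lookup key and one known feature are sent; the enrichment
    # fills in the remaining features from the additional table
    partial_record = {"driver_id": "D-4821", "hour_of_day": 18}
    print(client.predict_record("predict_fare", partial_record)["result"])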

Having these tools in one user-friendly web interface empowers data science teams to deploy, manage, and monitor their machine learning models autonomously and at scale.

../../../_images/intro-api-deployer.png

In our example, the NY Taxi Fares project is shown in the API Deployer. Through the API Deployer, we can track deployments across development, testing, and production environments. The third version of the deployment, running on a static server in the development stage, exposes four API endpoints, including the predictive model.