Tutorial | API endpoint monitoring basics#

Get started#

After deploying an API service with one or more endpoints, the next logical step is to set up a monitoring system to centralize the logs from the API node and to monitor the responses of endpoints.

Objectives#

In this tutorial, you will:

  • Store the API endpoint prediction logs in a usable data storage location.

  • Use these logs to build a feedback loop that monitors data drift.

Prerequisites#

This tutorial requires an active deployment of an API service that includes a prediction endpoint. If you don’t already have this, see the tutorial on deploying a real-time API service to create one.

If you are not using Dataiku Cloud, you’ll need to have configured the Event server (in addition to the other prerequisites for the real-time API tutorial).

  • A person with administrator permissions on an instance of Dataiku can follow the reference documentation to set up the Event server on a Design or Automation node.

  • After setting up the Event server, the admin should follow the documentation on configuring audit logging for API nodes.

Potential uses of API audit logs#

You can use the audit logs of the API node for several different use cases, including:

  • Understand the user base of the API service through information such as the IP address and timestamp for each query.

  • Check the health of the API, for instance, by monitoring its response time.

  • Monitor the API predictions for data drift (shown in this tutorial).

  • Implement an ML feedback loop: In the case of a machine learning model trained on a few data points, the queries sent to the API can be used, after manual relabelling, to retrain the original model with more data.

Introducing the Event server#

For users who are not also instance administrators, the Event server operates in the background.

The Event server is a Dataiku component that allows for the monitoring of API nodes. API nodes can send event logs to the Event server, and in turn, the Event server will take care of sorting and storing the logs in an accessible place.

In more detail, the Event server centralizes all queries that the API node receives and the responses the API service returns. The Event server then makes these logs available as a dataset that you can use in a Dataiku project.

Note

Dataiku Cloud achieves the same result, but in a managed way, as explained in this article.

Send test queries to the API endpoint#

After setting up the Event server and configuring the API infrastructure, be sure to test the API endpoint of an active deployment.

  1. Send at least one test query from the Run and Test tab of the API deployment, or submit the Sample code in a terminal as done in the API deployment tutorial (a minimal example is sketched below).
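
You can also send a test query programmatically. Below is a minimal sketch using Python and the requests library, assuming the fraud_detection service and predict_fraud endpoint from this tutorial; the host, port, and feature names are hypothetical, so copy the exact URL and payload from the Sample code shown in the Run and Test tab.

    import requests

    # Hypothetical API node URL: replace the host, port, service ID, and endpoint ID
    # with the values shown in the Run and Test tab of your deployment.
    URL = "http://localhost:12000/public/api/v1/fraud_detection/predict_fraud/predict"

    # Hypothetical feature names and values; use the features your model expects.
    payload = {"features": {"purchase_amount": 42.5, "merchant_category": "retail"}}

    response = requests.post(URL, json=payload)
    response.raise_for_status()
    print(response.json())  # the prediction returned by the endpoint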


Create a monitoring feedback loop#

Note

If using Dataiku Cloud, please see the steps for pre-Dataiku 12 users below.

Our goal is to import the logs from these queries into the Design node for analysis, and in particular, use them to monitor the model’s predictions.

  1. Return to the fraud_detection API service in the Design node project.

  2. For the predict_fraud endpoint, navigate to the Monitoring panel, where you should see details of active deployments.

  3. Click Configure to set up the monitoring loop.

  4. Click OK, and then return to the Flow to see the new zone dedicated to this objective.

Dataiku screenshot of the monitoring panel of an API endpoint.

Let’s review what this action did. The Flow now contains a new zone dedicated to monitoring this API deployment, including three objects:

  • API log data, partitioned by day (stored in your Event server connection)

  • An Evaluate recipe with the API log data and saved model as inputs

  • A model evaluation store as output to the Evaluate recipe

Dataiku screenshot of a Flow including a zone for monitoring the event server.
Create a monitoring loop for pre-Dataiku 12 users

Import API logs into the Design node

Note

If you have admin access, navigate to Administration > Settings > Event Server, and find the values requested below for the target destination. Otherwise, you’ll need to ask your admin for them.

If using Dataiku Cloud, as explained in this article, you’ll find the logs under + Dataset > Cloud Storage & Social > Amazon S3 and the connection customer-audit-log.

  1. In the Flow, click + Dataset > Filesystem.

  2. Under Read from, choose the connection set as the destination target for the Event server.

  3. Click Browse and choose the path used by the Event server within the above connection.

  4. Select api-node-query, and then the name of the API deployment, which by default follows the syntax PROJECTKEY-on-INFRASTRUCTURE.

  5. Click OK and then Create to import this dataset of API logs.

In the image below, the connection to the Event server is called event-server, the path is logs, and the API deployment name is fraud-detection-on-api-dev-v11.
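
Putting these steps together for that example, the imported dataset reads from the event-server connection at the path logs/api-node-query/fraud-detection-on-api-dev-v11; your own connection, path, and deployment name will differ.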

Dataiku screenshot of the creation dialog for the API log data import.

Monitor data drift

We can now use this log dataset as the input to an Evaluate recipe that will monitor data drift.

  1. Select the log dataset from the Flow, and click Move to a Flow zone from the Actions sidebar.

  2. Click New Zone. Provide the name API Endpoint Monitoring, and click Confirm.

  3. Select the log dataset in the new Flow zone, and choose the Evaluate recipe from the Actions sidebar.

  4. Add the prediction model as the second input.

  5. Set mes_for_api_monitoring as the Evaluation store output.

  6. Click Create recipe.

Inspect the API log data#

Each row of the API log data corresponds to one actual prediction request answered by the model.

The exact columns may vary depending on the type of endpoint. For example, here we are using a prediction endpoint, but the logs won’t be exactly the same for an enrichment or a custom Python endpoint.

Scrolling through the columns, you’ll find:

  • Basic information like the IDs of the service, endpoint, endpoint generation, deployment infrastructure, and model version.

  • Timing details.

  • All of the feature values sent in each query (see columns beginning with clientEvent.features).

  • The prediction results (see columns beginning with clientEvent.results).

  • Additional information like IP address and the server timestamp.
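
If you prefer to explore these columns in a Python notebook inside the project, here is a minimal sketch; the dataset name api_node_logs is hypothetical, so substitute the name of your imported log dataset.

    import dataiku

    # Hypothetical dataset name: use the name of the API log dataset in your Flow.
    logs = dataiku.Dataset("api_node_logs").get_dataframe()

    # Feature values and prediction results use these column prefixes.
    feature_cols = [c for c in logs.columns if c.startswith("clientEvent.features")]
    result_cols = [c for c in logs.columns if c.startswith("clientEvent.results")]

    print(logs[feature_cols + result_cols].head())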

Dataiku screenshot of log data fetched from the API node.

Tip

Return to the sample queries you submitted to the endpoint and confirm for yourself that these are the same queries.

Inspect the Evaluate recipe#

Before running it, take a moment to compare this particular Evaluate recipe to the others in the Flow.

In the Settings tab, note the additional presence of an Input tile with a checkbox marking that the input data will automatically be handled as API node logs. Even though the API node data has a very different schema from the input data to the other Evaluate recipes, Dataiku is able to automatically recognize API node logs and ingest them without additional configuration.

Dataiku screenshot of an Evaluate recipe with API log data input.

Note

If using a version of Dataiku prior to 11.2, you need to add a Prepare recipe that keeps only the features and prediction columns and renames them to match the column names of the initial training dataset (a rough sketch of the equivalent transformation follows).
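
For readers on those older versions, the sketch below illustrates the kind of column selection and renaming the Prepare recipe needs to perform, written here as pandas code rather than as a visual Prepare recipe; the feature and prediction column names are hypothetical and should match your training dataset.

    import dataiku

    # Hypothetical dataset name, as in the earlier sketch.
    logs = dataiku.Dataset("api_node_logs").get_dataframe()

    # Column names below are hypothetical; match them to your training dataset.
    rename_map = {
        "clientEvent.features.purchase_amount": "purchase_amount",
        "clientEvent.features.merchant_category": "merchant_category",
        "clientEvent.results.prediction": "prediction",
    }
    prepared = logs[list(rename_map)].rename(columns=rename_map)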

Just below the Input tile, you’ll notice one additional checkbox in the Model tile that is not found in Evaluate recipes without API node log data as input. This setting matches the active model version in the Flow with the model version used when scoring the rows retrieved from the API node.

Dataiku screenshot of an Evaluate recipe for API node logs showing the Model tile.

If you have followed the other tutorials in this series, you’ll notice that the modelVersion portion of the clientEvent.savedModel.fullModelId column in the API log data does not match the current active model version found in the Flow. (Recall that the model was retrained).

  1. Depending on your situation, you may need to uncheck this setting. In particular, if you run the Evaluate recipe and receive an error that the evaluation dataset is empty, uncheck it.

Finally, in the Output tile, note that the box is checked to skip computation of performance metrics — just as was done for the recipe monitoring data drift. Remember that the ground truth is also missing here as this data is straight from the API endpoint.

Dataiku screenshot of an Evaluate recipe highlighting the performance metrics setting.

  1. After observing these settings, Run the Evaluate recipe.

  2. Open the output model evaluation store (MES) to find the exact same set of data drift metrics that you would normally find.

Dataiku screenshot of a model evaluation store with data from the API endpoint.

Note

You can read more in the reference documentation about reconciling the ground truth in the context of a feedback loop.

What’s next?#

Congratulations! You’ve set up a feedback loop to monitor data drift on an API endpoint deployed in production.

If you have components of your monitoring pipeline outside of Dataiku, see our series on monitoring models in different contexts.

Note

See the reference documentation on MLOps for full coverage of Dataiku’s capabilities in this area.