Tutorial | Deploy a real-time API service (MLOps part 5)¶
Once you’ve designed an API service with the desired endpoints and tested that the endpoints return the expected responses to incoming queries, the next step is to deploy the API service from the Design node to a production environment.
Objectives¶
In this tutorial, you will:
Push a version of an API service from the Design node to the Deployer.
Deploy the API service from the Deployer to the deployment infrastructure (an API node).
Query the prediction endpoint deployed on an API node.
Starting here?
This section requires the API endpoint created in Part 4, so complete that section first in order to reproduce the steps here.
Publish the API service on the API Deployer¶
Recall that our Flow has a prediction model to classify credit card transactions as fraudulent or not. We have packaged this model as a prediction endpoint in an API service, but this service only exists on the Design node (our development environment). It can’t answer real queries yet.
The next step is to publish the API service from the Design node to the Deployer.
Note
The remaining steps require that you are connected to the API Deployer. This tutorial uses a remote (standalone) Deployer for illustration, but your Dataiku instance may instead use a local Deployer, depending on how your instance admin has configured it.
If not already open, from the More options menu in the top navigation bar on the Design node, click on API Designer. Select the fraud_detection API service.
Click the Publish on Deployer button.
Keep the default version ID (v1), and click OK.

Note
For greater context on the concepts at work here, see our resources on Real-Time API Deployment and the API Deployer.
Deploy the API service to the deployment infrastructure¶
We now have the API service including the prediction endpoint on the Deployer, but to query the endpoint, we still need to deploy the API service to an infrastructure (an independent pool of API nodes).
Note
If an infrastructure is not already available, follow the reference documentation or contact your instance administrator to create one.
If still on the Design node, from the Applications menu in the right-hand corner of the top navigation bar, click Remote (or Local) Deployer.
On the Deployer homepage, select API Services.
On the API services tab of the Deployer, find the fraud_detection API service on the left, and click Deploy.
In the dialog, choose an infrastructure for the new deployment.
Accept the default Deployment ID (which takes the form <api-service-name>-on-<infrastructure-name>), and click Deploy.
On the Status tab of the new deployment, click Deploy once again.
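As a quick illustration, the default Deployment ID is simply the service name and the infrastructure name joined by -on- (the infrastructure name below is hypothetical):

```python
# The default Deployment ID joins the service and infrastructure names.
service_name = "fraud_detection"
infrastructure_name = "api-dev-infra"  # hypothetical infrastructure name

deployment_id = f"{service_name}-on-{infrastructure_name}"
print(deployment_id)  # fraud_detection-on-api-dev-infra
```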

You now have a prediction endpoint available to serve real-time API calls. Note the dedicated URL for this API endpoint.

Query the API endpoint¶
You can now submit real queries to this service by calling the endpoint URL. The Sample code tab provides snippets for calling the API in various languages, such as Shell (cURL), Python, R, or Java.
Within the Status tab of the prediction endpoint, navigate to the Sample code subtab.
Assuming you can connect to the API node, copy and paste the Shell code into a terminal window.
Execute the live query, and see the prediction for this record returned in real time.

Note
The second line in the sample code indicates the URL of the prediction endpoint. Your URL will be different from the one shown in the screenshot.
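Because the Sample code snippets are plain HTTP calls, you can query the endpoint from any language. Below is a minimal Python sketch using only the standard library; the URL, endpoint ID, and feature names are hypothetical placeholders, so substitute the values from your own Sample code tab.

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute the endpoint URL (and, if needed,
# the API key) shown on your own deployment's Sample code tab.
API_NODE_URL = "https://apinode.example.com:12000"
SERVICE_ID = "fraud_detection"
ENDPOINT_ID = "predict_fraud"  # hypothetical endpoint ID

def build_query(features):
    """Assemble the URL and JSON body for one prediction query."""
    url = f"{API_NODE_URL}/public/api/v1/{SERVICE_ID}/{ENDPOINT_ID}/predict"
    body = {"features": features}
    return url, body

def query_endpoint(features):
    """POST one record to a live prediction endpoint (requires network access)."""
    url, body = build_query(features)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        # If your API node requires an API key, add the appropriate
        # Authorization header here.
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Inspect the request without sending it (feature names are hypothetical).
url, body = build_query({"purchase_amount": 42.0, "card_age_days": 310})
print(url)
print(json.dumps(body))
```

This assumes the request body wraps the record's features under a features key; check the Shell snippet in your own Sample code tab for the exact URL path and body shape before adapting this sketch.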
You can also run the test queries previously defined in the API Designer of your project.
Still within the Status tab of the prediction endpoint, navigate to the Run and test subtab.
Click Run All.
Now the same queries tested in the API Designer on the Design node have been run on the API node.
Copy the deployment to another infrastructure (optional)¶
When choosing an infrastructure in the previous step, you may have seen the stages “Development”, “Test”, and “Production”. These are pre-configured lifecycle stages. An instance admin can modify these stages as desired, so your options may be different depending on the complexity of your organization’s deployment strategy.
Let’s imagine we have another stage of deployment infrastructure, and that all tests on the first deployment infrastructure were successful. We are now ready to copy the existing deployment to a new pool of API nodes.
Still within the Status tab of the API service, click Actions > Copy this deployment at the top right.
Select a new infrastructure, keep the other default values, and click Copy.
The service is now ready to be deployed.
On the page that opens for the new deployment, click Deploy once again.
Once the deployment has been updated successfully, click Deployments in the top navigation bar to see that the fraud_detection API service was deployed to the development infrastructure and then to the production infrastructure.

Next steps¶
Congratulations! You successfully published an API service from the Design node to the API Deployer and then to a deployment infrastructure. This allowed you to receive a response from a query to a live prediction endpoint.
Next, let’s return to the Design node, and add an enrichment to the prediction endpoint within the existing API service. Then we’ll re-deploy a new version of the API service, just as we did for versioning project bundles.
Note
For more information, see the reference documentation on Setting up the API Deployer and deployment infrastructures.