Concept | Deployment biases#

Deployment bias is a type of bias in the AI lifecycle that occurs outside of the data pipeline. We briefly introduced this in the Concept | Dangers of irresponsible AI article, and we’ll expand on it here.

Think about what happens after a model is completed. How an AI system is put into operation, how its outputs are used to make decisions, and how end users interpret those outputs can all contribute to bias.

This means that deployment bias is not only technical, but social as well. In other words, beyond the inner workings of the technical component (the model), the designers and consumers of AI have great influence too.

To minimize deployment bias, we need to learn how to structure our findings and communicate those findings in a way that maintains integrity.

Some origins of deployment bias include:

  • Low explainability in models

  • Declining data quality

  • Off-label use of AI

Black box algorithms#

Black box algorithms are those that do not provide adequate explainability. Take the example of pharmaceutical algorithms that flag patients as potential drug abusers to healthcare providers. In some extreme cases, changes to the model made by third-party vendors have caused people suffering from chronic pain to be flagged as abusers because of their prescribed medication use, resulting in a sudden denial of medication and care.

For these patients, the denial comes with no explanation and no opportunity to contest the model’s output, while healthcare providers cannot understand why a particular patient has been flagged.

Without clear model explanations or insight into how data led to these decisions, the deployment of AI creates new harms.
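
To make explainability more concrete, here is a minimal, illustrative sketch of one common approach for a simple linear model: showing how much each feature pushed an individual prediction up or down. The dataset, feature names, and model here are hypothetical stand-ins, not the actual pharmaceutical algorithm described above.

```python
# Illustrative sketch: explain a single prediction from a linear model
# by listing each feature's contribution (coefficient * feature value).
# The dataset and feature names below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["num_prescriptions", "num_pharmacies", "days_supplied", "age"]

model = LogisticRegression().fit(X, y)

# Pick one flagged individual and rank the features that drove the score.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")
```

Even a simple report like this gives both the provider and the patient something concrete to question or appeal.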

Data quality#

Sometimes deployment bias comes from the degradation of an otherwise fair model once it is in production.

For example, insurance pricing models that are highly regulated can be slow to react to changes in the market. As a result, the models in use may be trained on out-of-date information or on data that no longer reflects the state of the world, and they produce less accurate, less meaningful predictions.

This was one reason why Zillow’s infamous home appraisal model degraded, leading it to overprice homes across the country.
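
One way teams try to catch this kind of degradation is to monitor for data drift: comparing the distribution of each feature at training time against what the model sees in production. Below is a minimal sketch of that idea; the column name, the synthetic numbers, and the 0.05 threshold are illustrative assumptions.

```python
# Illustrative sketch: flag data drift by comparing a feature's training
# distribution against its production distribution with a two-sample KS test.
# The column name, synthetic values, and 0.05 threshold are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

train = pd.DataFrame({"median_home_price": rng.normal(300_000, 40_000, 1_000)})
live = pd.DataFrame({"median_home_price": rng.normal(340_000, 55_000, 1_000)})

for column in train.columns:
    stat, p_value = ks_2samp(train[column], live[column])
    if p_value < 0.05:  # distributions likely differ: investigate or retrain
        print(f"Drift detected in '{column}' (KS statistic = {stat:.3f})")
```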

Off-label AI#

Another important cause of deployment bias is the “off-label use” of AI: a model optimized for one specific goal is used to make unrelated real-world decisions.

Take a look at the COMPAS model: Its original purpose was to predict the risk of recidivism (the likelihood that someone will reoffend) so that corrections officers could support the rehabilitation of prisoners.

In 2016, however, an analysis that exposed severe racial bias in the COMPAS model also unearthed another huge issue: judges were using these risk scores to determine sentence lengths.

Not only was the model racially biased, assigning higher risk scores to Black defendants than to white defendants, but it was also deployed in a way that was never intended. As a result, people who were labeled with a higher risk of recidivism were given longer sentences, even when their crimes were less severe.

Various issues with the COMPAS algorithm have been discussed in depth over the years, but what we want to emphasize here is that these scores were never meant to be used by judges or to determine sentence length. That off-label use created a deployment bias that inflicted harm and reinforced the other inequalities in the model.

What’s next?#

To address deployment bias broadly, we must strive towards full transparency about when AI is used to make decisions, so that we can correct or counter those decisions if needed. It’s also important to provide explanations for specific predictions and more coherent reporting about system design to build trust in model outputs.
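
As one example of what coherent reporting can look like, here is a lightweight sketch loosely inspired by the model cards idea: recording a model’s intended use and its out-of-scope uses right next to the model itself. The fields and values are illustrative, echoing the COMPAS discussion above rather than describing any real system.

```python
# Illustrative sketch of a lightweight "model card"; the fields and values
# are examples only, not a standard or a description of a real system.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="recidivism-risk-v1",
    intended_use="Help corrections staff prioritize rehabilitation resources.",
    out_of_scope_uses=["Sentencing decisions", "Parole or bail determinations"],
    known_limitations=["Risk scores skew higher for some demographic groups"],
)
print(card)
```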

Making sure a model is implemented in a clear and safe way is just as important as addressing data and model bias for Responsible AI.

Continue on to start our Responsible AI hands-on training!