Concept | Dangers of irresponsible AI#
Before we discuss the keys to Responsible AI in the machine learning lifecycle, it’s important to understand some of the dangers that arise when these principles aren’t followed. Here we’ll look at three potential dangers and some real-world examples.
Assumed objectivity#
Data science is born from the traditions of computer science, statistics, and quantitative methods — domains that we largely consider objective fields of knowledge.
This assumed objectivity leads us to treat the output of machine learning models as fact, even though the data, the algorithms, and the practitioners themselves carry biases. Our society values the hard sciences for the (assumed) objective view they take of the world, and in doing so fails to acknowledge the influence of historical context and past experience.
Let’s look at an example that can highlight this concept.
Example: Bias in medical algorithms#
In 2019, researchers found that an algorithm meant to flag patients for extra medical care in the United States was biased against Black patients. For patients presenting the same level of illness, the model rated white patients as higher risk and more in need of extra medical attention than Black patients.
At the same time, Black patients who were flagged for extra care were consistently more ill than their white counterparts, in essence creating two different risk thresholds based on a patient’s race. As a result, the algorithm underestimated the needs of Black patients and overestimated the need for care among white patients.
How did this happen? Of course, none of the designers and developers of this algorithm intended to create a discriminatory product, but a lack of attention to systemic discrimination in both the data and the model design led to a harmful outcome. In this case, the issue stemmed from an assumption of objectivity in the data. To explain:
In order to predict whether someone needed extra care, the model used a patient’s healthcare cost to encode overall health needs. Specifically, the algorithm was designed to use a patient’s prior healthcare expenses as an indicator of future costs, and in turn of their potential need for extra medical care. Using healthcare costs as a proxy made sense in theory; after all, caring for a sicker patient costs more than caring for a healthier one.
What the developers failed to realize was that, in practice, healthcare costs are highly correlated with a patient’s race. Years of abuse and mistreatment of Black patients have created distrust in the medical system among many people of color, meaning that these patients are less likely to use health services unless they are experiencing a more extreme level of illness or pain. As a result, low-income patients and patients of color incur lower healthcare costs even when their medical need is the same as that of their white counterparts. Thus, the racial difference in costs is baked into the algorithm, even without using race as a feature.
Developers of this algorithm likely thought that by omitting race from the model’s inputs, they were removing a key source of bias, and that focusing on a “hard number” such as overall cost made the data and model more objective. However, all data are a reflection of the social and economic context they come from, meaning that even a supposedly neutral number can reflect systemic biases.
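To make this mechanism concrete, here is a minimal sketch using entirely synthetic data (the two groups, the access-gap factor, and the 20% flagging threshold are invented for illustration and are not taken from the study). It shows how ranking patients by a cost proxy can under-serve a group whose underlying medical need is identical, even though group membership is never used as a feature:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Two synthetic groups with IDENTICAL distributions of true medical need.
group = rng.integers(0, 2, size=n)             # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Hypothetical access gap: group B incurs lower cost for the same need,
# e.g. because barriers to care mean fewer visits and procedures.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access + rng.normal(0.0, 0.1, size=n)

# "Race-blind" screening rule: flag the 20% of patients with the highest
# cost (the proxy label) for extra care.
threshold = np.quantile(cost, 0.80)
flagged = cost >= threshold

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(
        f"{name}: flagged for extra care = {flagged[mask].mean():.1%}, "
        f"mean need among flagged = {need[mask & flagged].mean():.2f}"
    )
```

With this setup, group B ends up flagged much less often, and the group B patients who are flagged have a noticeably higher average need, mirroring the two effects described above.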
The danger of assumed objectivity becomes apparent once these models are put into production and used to make decisions about people’s well-being. Therefore, we must be willing to look beyond the data and ask why we are using a certain tool or method to address a problem.
In sum, an algorithmic approach to a challenge will not always surpass human understanding, nor will it always be more objective than a human. AI is built on the decisions and data that we provide and, as a result, is subject to the same biases.
Reinforcing inequalities#
The danger of assumed objectivity in data and algorithms is closely related to another danger from irresponsible AI: reinforcing existing inequalities.
Irresponsible AI systems can reproduce social inequalities, creating a cycle of biased data and unfair outcomes. Let’s explore an example of resume screening tools to understand how this happens in practice.
Example: Bias in hiring tools#
In 2014, Amazon engineers began building an experimental AI recruiting tool that filtered candidates out of the talent pool based on the information in their resumes. The developers trained the algorithm on the resumes of prior applicants so it could learn which keywords were associated with candidates who ultimately received a job offer. Using these keywords, the filter would automatically reject the candidates least likely to make it to the offer stage. At first glance, this system seems unbiased: the model simply used historical data to make predictions about the future.
However, if we look more critically at the set of resumes used to train the model, we find that it skews heavily toward men: women applied in far smaller numbers than men, and those who did apply had a lower success rate because of human hiring bias. In turn, the algorithm absorbed the bias in the historical data and learned that words appearing more frequently on women’s resumes were associated with less successful candidates.
Thus, the data and algorithm reinforced past outcomes: female candidates were disadvantaged (and even automatically rejected) even though the developers never included gender as a feature of the model. Worse, each future retraining on the model’s own screening decisions would create a feedback loop that continued to discriminate against female candidates.
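As a rough illustration of how this can happen, here is a minimal sketch on synthetic resumes (the skill tokens, the womens_chess_club marker, and the biased historical offer probabilities are all invented for this example and are not Amazon’s data or system). A simple bag-of-words classifier trained on the biased historical outcomes learns a negative weight for the gendered token, even though gender itself is never a feature:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
skills = ["python", "java", "sql", "leadership", "aws"]

resumes, offered = [], []
for _ in range(5_000):
    tokens = list(rng.choice(skills, size=3, replace=False))
    has_marker = rng.random() < 0.3      # hypothetical gendered token below
    if has_marker:
        tokens.append("womens_chess_club")
    skill_score = sum(t in ("python", "sql", "aws") for t in tokens)
    # Simulated historical outcome: driven by skills, but past human
    # reviewers were less likely to advance resumes with the marker.
    p_offer = 0.15 + 0.20 * skill_score - (0.25 if has_marker else 0.0)
    offered.append(rng.random() < max(p_offer, 0.01))
    resumes.append(" ".join(tokens))

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression(max_iter=1_000).fit(X, offered)

# The learned weight for the gendered token comes out strongly negative.
for token, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                            key=lambda pair: pair[1]):
    print(f"{token:>20s}: {weight:+.2f}")
```

In a pipeline that then retrains on its own screening decisions, that negative weight only gets reinforced.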
Once again, the intention was not to reinforce bias, but the way the data and modeling pipeline were built amplified the same inequalities over and over again.
Deployment failures#
As the use of AI proliferates across industries, we can expect misunderstanding and misuse of models to scale proportionately. Even a responsibly built model has the potential to cause unintended consequences when its deployment and monitoring are not executed well.
Deployment bias can stem from different causes:
Using opaque or black-box models.
Serving models on data that has degraded or drifted away from the training distribution (data drift); see the sketch below.
Making decisions with the wrong model.
These causes of irresponsible deployment are some of the most common, especially in financial services, healthcare, and the judicial system.
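One concrete guard against the data drift problem is to compare the distribution a feature had at training time against what the model is seeing in production. The sketch below uses a two-sample Kolmogorov–Smirnov test on a single synthetic feature (the feature, the amount of shift, and the 0.01 threshold are assumptions for illustration, not a prescribed standard):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Values of one input feature as seen at training time vs. in production.
train_income = rng.normal(loc=50_000, scale=12_000, size=10_000)
live_income = rng.normal(loc=58_000, scale=15_000, size=2_000)  # shifted

# Two-sample Kolmogorov-Smirnov test: a tiny p-value suggests the live
# data no longer looks like the data the model was trained on.
result = ks_2samp(train_income, live_income)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.2e}")

if result.pvalue < 0.01:
    print("Drift detected: consider retraining or reviewing automated decisions.")
```

In practice, a check like this would run per feature (and on the model’s output scores) on a schedule, with alerts or a pause on automated decisions once drift crosses an agreed threshold.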
You may have previously heard related terms such as Explainable AI, white-box models, or interpretability. These concepts all address one reality: using a model to make decisions without understanding how it was built can lead to harmful outcomes for end users.
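As a small illustration of what a white-box model buys you, the sketch below fits a logistic regression on scikit-learn’s built-in breast cancer dataset and prints the largest coefficients, so a reviewer can see which inputs drive the decision (the dataset and the choice of model are just for demonstration):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A white-box model: after fitting, each feature's contribution to the
# decision is a single coefficient that a reviewer can read and question.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5_000))
model.fit(data.data, data.target)

coefs = model.named_steps["logisticregression"].coef_[0]
top = np.argsort(np.abs(coefs))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {coefs[i]:+.2f}")
```

A black-box alternative may score better offline, but if you cannot explain individual decisions, auditing them after deployment becomes much harder.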
We’ll take a deeper dive into explaining and mitigating these failures in our concept on deployment bias.