
A.I. Bias In Healthcare

The Medical Futurist

Where does A.I. bias come from, how does it appear in healthcare, and what can we do about it?

A.I. algorithms are often seen as logical, reasoned, and rational masterpieces of human intelligence, assumed to make objective decisions and choices. Instead, the scientific community has lately had to watch as much-praised smart algorithms proved to be just as biased and judgmental as their human masters, sometimes even leading to scientifically questionable or discriminatory outcomes. Where does A.I. bias come from, how does it appear in healthcare, and what can we do about it?

Are You AI's Favourite?

Two years ago, Google came under fire when research showed that when a user searched online for “hands,” the image results were almost all white; but when searching for “black hands,” the pictures were far more derogatory depictions, including a white hand reaching out to offer help to a black one, or black hands working in the earth. Not much has changed – if you search for “hands” or “black hands” today, you still get similar results, although the supportive white hand has disappeared.

A similar racial bias runs through the story of A.I. Many news outlets have reported how facial recognition software favors white faces, and a study out of the MIT Media Lab published in February 2018 found that facial-recognition systems from companies like IBM and Microsoft were 11-19 percent more accurate on lighter-skinned individuals. They were particularly bad at identifying women of color: the smart algorithms were 34 percent less accurate at recognizing darker-skinned females than lighter-skinned males. In another example, when A.I. was used in the U.S. criminal justice system to predict recidivism, it was found to disproportionately suggest that black people were more likely to commit future crimes, regardless of how minor their initial offense was.

It’s not only racial prejudice: A.I. algorithms also often discriminate against women, minorities, other cultures, or ideologies. For example, Amazon’s HR department had to stop using the A.I.-based machine learning tool the company developed for picking out the best job applicants, as it turned out that the smart algorithm favored men. Since the tech scene is dominated by men, and the data the software was fed consisted of resumés from the past 10 years, the program taught itself that women were less desirable candidates. Programmers tried to tweak the A.I., but it still didn’t bring the expected results, so in the end they scrapped the program entirely. But what happened here? What went wrong with the algorithm? What’s so difficult about teaching A.I.?

A.I. Bias

A Quest for Unbiased Cat Pictures

To see why an A.I. algorithm can be biased, let’s take everyone’s favorite machine learning example: an algorithm recognizing cats in images. You need millions of photos of all kinds of cats, labeled as cats, and feed them to the algorithm, which will eventually learn to categorize the animal – without ever being explicitly told that cats are furry animals with four legs and two eyes. Such a description would anyway exclude sphynx cats or our three-legged buddies. But what if those hairless creatures got ignored for other reasons, too?

Here are the three main reasons for biased algorithms:

1. Judgmental datasets

Algorithms are trained on datasets, so the quality of the data is crucial. If the dataset is incomplete, not diverse enough, or stems mainly from one area of study, the A.I. software can work flawlessly in the test environment but reveal its inherent bias in the ‘real world’. For example, if our cat-spotting algorithm never gets to see a sphynx cat, it will reasonably conclude that cats are furry – and when it eventually encounters a hairless animal, it won’t recognize it. That’s what often happens with facial recognition software.
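To make this concrete, here is a minimal toy sketch in Python. The “detector”, the image dictionaries, and the 90/10 split are all hypothetical, invented purely to illustrate the point: a model that has only ever seen furry cats scores perfectly on furry cats and fails completely on hairless ones.

```python
# Hypothetical toy "classifier": it has only ever seen furry cats in
# training, so it labels an image as a cat only if the animal is furry.
def furry_only_cat_detector(image):
    return image["is_cat"] and image["furry"]

# Hypothetical evaluation set: mostly furry cats, a few sphynx cats.
test_images = (
    [{"is_cat": True, "furry": True}] * 90 +   # e.g. Persians, tabbies
    [{"is_cat": True, "furry": False}] * 10    # sphynx cats
)

def accuracy(images):
    # Fraction of images where the detector's verdict matches the label.
    correct = sum(furry_only_cat_detector(img) == img["is_cat"] for img in images)
    return correct / len(images)

print(accuracy([i for i in test_images if i["furry"]]))      # → 1.0 on furry cats
print(accuracy([i for i in test_images if not i["furry"]]))  # → 0.0 on sphynx cats
```

Overall accuracy here is still 90 percent – which is exactly why a skewed test set can hide a subgroup the model fails on entirely, as in the facial recognition studies above.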

2. Deeply ingrained social injustices

Another, more complex issue arises when the dataset is representative and diverse enough, but the algorithm still arrives at discriminatory conclusions. The reason can be a social practice so deeply ingrained in society that it is automatically transferred into the judgment process of the A.I. For example, as cats have not worn sweaters for centuries, a smart algorithm might miss a modern-day cat in a pullover. In a nutshell, that’s the reason for Amazon’s gender-biased HR algorithm: the program was fed applications from the previous ten years, the majority of which came from male candidates. As a consequence, the A.I. started to treat the correlation between gender and qualifications in this area as causation – and as a point of reference for selection.
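A toy sketch of the mechanism, with entirely made-up numbers (they do not come from Amazon): a naive scorer that estimates “probability of being hired” per token from historical decisions will faithfully learn the historical skew and then apply it to new candidates.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (tokens in application, was_hired).
# Ten years of mostly male applicants; the numbers are invented.
history = (
    [(["engineer", "male"], True)]    * 70 +
    [(["engineer", "male"], False)]   * 10 +
    [(["engineer", "female"], True)]  * 5 +
    [(["engineer", "female"], False)] * 15
)

# Naive scorer: estimate P(hired | token) from raw frequencies.
hired, seen = defaultdict(int), defaultdict(int)
for tokens, outcome in history:
    for t in tokens:
        seen[t] += 1
        hired[t] += outcome  # True counts as 1

def token_score(token):
    return hired[token] / seen[token]

print(token_score("male"))    # → 0.875
print(token_score("female"))  # → 0.25
```

The model has no concept of gender; it merely mirrors the correlation in its training data – and then uses it as a point of reference for selection, exactly the failure described above.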

3. Unconscious or conscious individual choices

And what if the programmer has to choose or leave out some parameters to help the program learn? By describing the cat in a certain way – hairiness, color, legs, eyes, etc. – they already include their hidden and frequently unconscious bias. When a shelter wants to decide which cat to offer for adoption, what will the parameters look like? And turning to people: when banks screen loan applications with the help of algorithms, who decides who gets the loan? The programmer, the bank, or a human being? The software developer can unconsciously build their own values and beliefs about the world into the code; and in an even more sensitive situation, perhaps with riskier outcomes, the programmer could set variables selecting specific characteristics for individuals or groups – which might produce a biased outcome. Either way, individual choices can greatly influence how smart algorithms ‘behave’.
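The loan-screening case can be sketched in a few lines. Everything here is hypothetical – the rule, the zip codes, the threshold – but it shows how one seemingly neutral parameter chosen by a developer can act as a proxy for a protected attribute when neighborhoods are segregated.

```python
# Hypothetical screening rule chosen by a programmer. Including
# "zip" as a feature looks neutral, but in a segregated city the
# zip code can stand in for race or income group.
APPROVED_ZIPS = {"10001", "94105"}  # assumption: historically high-repayment areas

def screen(applicant):
    # The developer's parameter choices encode a worldview:
    # income and zip code are in; repayment history is left out.
    return applicant["income"] > 50_000 and applicant["zip"] in APPROVED_ZIPS

a = {"income": 80_000, "zip": "94105"}
b = {"income": 80_000, "zip": "60621"}
print(screen(a), screen(b))  # → True False: identical income, different verdict
```

Two applicants with identical finances get different answers, and nothing in the code mentions any group explicitly – the bias lives entirely in the programmer’s choice of variables.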

Health Data is Mostly White and Male

Thus, the source, quality, and diversity of the data; the historical social practices ingrained into that data – that is, the bias of the deeper social structure; and the conscious or unconscious preferences of individual programmers together determine whether, and to what extent, an A.I. will become biased. Now, let’s look at some examples from healthcare, where many might believe that since smart algorithms look at medical images, ECG strips, or electronic medical records, the “bias factor” must be less prevalent.

Well, we have some disillusionment to deliver. Even comedian John Oliver has said that bias in medicine, in general, is a serious issue with consequences for American society. Healthcare data is extremely male and extremely white, and that has real-world impacts. A 2014 study that tracked cancer mortality over 20 years pointed to a lack of diverse research subjects as a key reason why black Americans are significantly more likely to die from cancer than white Americans.

In another area of research, a meta-analysis of 2,511 studies from around the world found that 81 percent of participants in genome-mapping studies were of European descent. This has severe real-world impacts: researchers who download publicly available data to study disease are far more likely to use the genomic data of people of European descent than that of people of African, Asian, Hispanic, or Middle Eastern descent. And these distorted datasets become the starting points for A.I. development.

Sometimes, ignorance of inherent bias in data can even jeopardize the applicability of an algorithm. Winterlight Labs, a Toronto-based startup building auditory tests for neurological diseases, realized after a while that its technology only worked for English speakers of a particular Canadian dialect. That could be a serious problem for other companies, too, that work with voice-to-text technologies, vocal biomarkers, or digital assistants such as Siri or Alexa in healthcare.

Getting Out of the Cognitive Cage

So, what should we do to eliminate these prejudices from smart algorithms? It is actually a very difficult task, as human beings have their own biases in their thinking – a trait that has been useful for thousands of years because it shortens the time needed to make snap decisions. It’s also likely that human bias is here to stay, and technologies fed with information created in the real world will fundamentally reproduce it. So the question becomes: how do you think about that when you shift a cognitive task completely into a machine, which doesn’t have the same kind of qualitative reactions human beings have?

The response might be twofold and still evolving.

  1. We have to raise awareness of inherent bias in algorithms. This is a great step that is already being taken in some places. Recently, police officers raised concerns about using “biased” artificial-intelligence tools, a report commissioned by one of the UK government’s advisory bodies revealed. The report said officers were worried both about data bias and about becoming more reliant on automation. Another example was the banning of facial recognition software from the streets of San Francisco. Activists and politicians who pushed for the ordinance cited studies showing that A.I.-based facial recognition technology is less accurate at distinguishing individual women and people of color.
  2. We might have to re-create these functions, such as facial recognition technology, to represent a more balanced attitude by minimizing bias. That’s a tricky and difficult process, especially because most A.I. algorithms are trained on biased datasets and researchers are only just starting to bring them to the real world.
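One common debiasing idea (known in the fairness literature as reweighing) can be sketched very simply. The skin-tone labels and the 800/200 split below are invented for illustration: each training example is weighted so that every group contributes equally to training, instead of in proportion to its skewed headcount.

```python
from collections import Counter

# Hypothetical sample of labelled training faces by skin-tone group.
samples = ["lighter"] * 800 + ["darker"] * 200

counts = Counter(samples)
n_groups = len(counts)

# Weight each example so every group carries equal total weight.
weights = {g: len(samples) / (n_groups * c) for g, c in counts.items()}
print(weights)  # → {'lighter': 0.625, 'darker': 2.5}

# Sanity check: each group's total weight is now the same.
totals = Counter()
for s in samples:
    totals[s] += weights[s]
print(dict(totals))  # both groups sum to 500.0
```

This doesn’t fix a dataset that lacks a group entirely, and it is only one of several mitigation strategies – but it shows that once the skew is measured, correcting for it is a deliberate engineering choice rather than an afterthought.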

Also, in many cases, it must be difficult to admit how biased we human beings are, and it’s kind of embarrassing that machines are pointing it out for us. But well, at least we hope we are learning something about ourselves and about how to make the world a less biased place. That would just be fantastic.


For more related stories visit The Medical Futurist
