

Choosing Between Life and Death During COVID-19: The A.I. Trolley Problem

The Medical Futurist


Suppose you’re the sole witness to a runaway trolley hurtling towards five people tied to its track, with no way to stop it in time. Good news: there’s a lever you can pull to divert it. Bad news: the other track isn’t safe either, as one person is tied to it. What will you do? Let the trolley continue on its course and kill the five people, or pull the lever to save them at the expense of the other person’s life?


This ethical thought experiment, known as the Trolley Problem, was put forth by Philippa Foot back in 1967 to challenge different schools of moral thought. In Lodi, Italy, this dilemma is far from theoretical. Doctors had to decide whom to allocate ICU beds to, due to the shortage of resources amidst the COVID-19 crisis. Dr. Di Bartolomeo reports that some patients “are not candidates” for ventilation support due to their old age or frail condition.

What if, instead of leaving such morally-challenging decisions to a human, an A.I. would handle this burden? Would it alleviate the psychological toll on the medical staff?

However, with A.I. in the mix, people are more likely to be among those on the tracks than the one behind the lever. The A.I.’s behaviour will be shaped by the hundreds of programmers who write its code and feed it datasets, as well as by the authority overseeing its use. And the general public? They are far from the lever’s reach.

How can the contentious issue of the marred tracks for the A.I. trolley be addressed? Can an algorithm bypass this conundrum altogether? Join us as we take a ride along those tracks.

The Choice

As the number of COVID-19 infections climbs, hospital beds around the world are filling up. As a result, certain healthcare institutions are facing shortages of key resources like personal protective equipment, ICU capacity and ventilators. This has prompted authorities to issue ethical guidelines directing medical staff whom to prioritize should it come to that.

One such guideline, published in the New England Journal of Medicine (NEJM) by a collective of academics and physicians, recommends, among other measures, prioritizing frontline medical staff and, among severely ill patients, younger ones with fewer coexisting conditions.


“Because maximizing benefits is paramount in a pandemic, we believe that removing a patient from a ventilator or an ICU bed to provide it to others in need is also justifiable and that patients should be made aware of this possibility at admission,” the authors write.

Another guideline issued to Italian doctors notes that “an age limit for the admission to the ICU may ultimately need to be set”. The New York Times analyzed publicly available guidelines for several American states; some exclude ventilator support for those suffering from neurological impairments and conditions like dementia or AIDS.

Would an AI Help?

We can all imagine how difficult making such decisions can be, all the more so for someone who has taken the Hippocratic Oath. The authors of the NEJM guideline rightly note that this “will be extremely psychologically traumatic for clinicians; and some clinicians might refuse to do so”.

Moreover, despite the guidelines, there will be outliers that don’t fit the rules. What if the younger patient has a higher risk of breast cancer than the older woman who is financially supporting her son through university? What if those prioritized have comorbidities, unknown to them or their physician, that will shorten their lifespan?

On the other hand, an A.I. could mine for insights in a patient’s genomic data, health records and family history. It could determine that a patient would respond well to a drug under trial and would benefit from ventilation support until the drug reaches the hospital. Monitoring that patient’s progression could then help even more people down the line, superseding guidelines that would otherwise exclude the patient from priority treatment.


However, when it comes to an A.I.’s decision-making process, bias is not an issue that can be overlooked. If the data such an algorithm is fed reflects judgemental datasets, ingrained social injustices and individual choices, then the software’s output will reflect the same bias. Given that healthcare data is “extremely male and extremely white”, an A.I. will base its decisions on this demographic, to the detriment of others.

The A.I. trolley problem adds this layer of complexity to the decision-making. While it could lift the psychological burden of whether to pull the lever off the medical personnel, the algorithm’s built-in ethical framework makes its decision a planned action. This brings us to our next topic:

Who is in control of the lever?

Back in 2014, researchers at the MIT Media Lab released the Moral Machine, an online platform that crowdsources the decisions people make when presented with variations of the trolley problem. Participants choose which of two outcomes a self-driving car should pursue, in scenarios that include information about the soon-to-be victims’ age, gender and socioeconomic status that can influence the participant’s decision. After the platform gained traction, gathering over 40 million decisions from millions of people worldwide, the researchers presented their findings in a paper published in Nature in 2018.


According to their analysis, the decisions of the Moral Machine’s participants diverged across cultures, economies and geographic locations. Those from collectivist cultures like China and Japan, where respect for the elderly is emphasized, were less likely to spare the young over the old. Participants from countries with significant economic inequality showed greater gaps in the fates of the virtual victims based on their social status. Those from individualistic cultures like the UK and US, where the authors noted the emphasis on each individual’s value, were more inclined to spare the greater number of lives.

It might also turn out that software developed in one region of the world leads to different outcomes than software developed elsewhere. As pointed out in the introduction, in a healthcare setting most people are more likely to be on the “tracks”, at the mercy of the A.I.’s decision. How do we ensure uniformity in this case?

Would We Even Need a Lever?

When it comes to the A.I. trolley problem of deciding whom to support with available medical resources, the issue really boils down to a faulty supply chain and inadequate preparation. Germany tested early and has sufficient intensive care capacity, which helped it manage the pandemic comparatively well.

While Qvetus has developed an A.I.-based model to help hospitals better manage their resources, a predictive algorithm can help even before healthcare institutions come under pressure from a public health crisis. BlueDot’s A.I. helped epidemiologists send the first alerts of COVID-19’s impending spread. If such A.I.-based tools were used on a larger scale, prompt action could be taken sooner, without overburdening the healthcare system or its staff with morally taxing situations.


Ethically speaking, doctors should never be put in a position where they have to choose between a 40-year-old and a 75-year-old patient for ventilator support. With proper management and adequately equipped facilities, such choices would not be up for discussion.

Nevertheless, if we cannot escape such a dilemma, leaving the decision solely to an A.I. might not be the right course of action. A proper decision will require the collaborative efforts of the general public, ethicists and the A.I.’s programmers. And if there are demands for more transparent functioning of the software behind such a sensitive issue, those demands will have to be met.

For more related stories visit The Medical Futurist
