“Morality in everyday life is ‘messy’. We are not all bad. And we are not all good. Sometimes we are vicious. Sometimes, purely because of the scent of cinnamon buns in the mall, we are generous.” –Dr. WHu
Leadership today is grounded ever more in moral authority, in addition to organizational authority and compliance with laws and regulations. In a 24/7 connected world, the mere perception of unethical conduct can cause irreparable damage to corporate or individual reputations.
Autonomous machines, such as companion robots, autonomous vehicles, robot vacuum cleaners, and hospitality-industry concierge robots, are becoming part of our society. They constantly interact with human beings at close quarters. We need Isaac Asimov’s Laws of Robotics for that robot in our kitchen.
To do this, we first need to examine what ethics and morality are in the eyes of human beings. On June 20, 2003, a runaway string of 31 unmanned Union Pacific freight cars, carrying 3,800 tons of lumber and building materials, was hurtling towards the Union Pacific yards in Los Angeles. The dispatcher thought a Metrolink passenger train was in the yards, and ordered the runaway cars shunted onto a new track, Track 4: a track leading to an area of lower-density housing occupied mostly by lower-income residents. Track 4 was rated for 15 mph transits, far below the speed at which the cars were travelling. The cars derailed. A pregnant woman in the lower-income neighbourhood narrowly escaped death.
Was the dispatcher’s decision ethical? What would you have done in the dispatcher’s place? If the Metrolink train had been in the yard with only one passenger aboard, would you have shunted the cars? What if there had been 15 passengers on the Metrolink train; would you have decided differently?
Philosophers, moral psychologists, decision science researchers, and business ethicists have examined various facets of this case. The difficulty of making an ethical decision then and there was exacerbated by the ‘fog of decision’: an incomplete set of necessary data, unknown knock-on effects, and ambiguous guidelines for the scenario. A decision had to be made, under risk and under uncertainty.
The dispatcher, and decision makers in other difficult situations, could use an AI ethics engine, which in theory can do unemotional button pushing, if and only if it has been trained on unbiased data sets and given a set of clear, non-contradictory ethical guidelines. The advantage is not that AI engines will necessarily deliver the optimal ethical solution. Rather, the cognitive biases inherent in human beings are removed, and society has one less factor to worry about. In fact, surveys have shown that people are more willing to accept decisions made by machines than by other human beings.
Leaders and followers want to be on righteous missions. Remember “Don’t Be Evil”? The challenge is determining what is evil, or not evil, under a set of nuanced facts that are often impossible to know at decision time. Furthermore, the projected outcome can only be described as probabilistic, owing to factors unknown and beyond the decision makers’ control.
The autonomous vehicle industry needs its AI systems to decide when faced with the following scenario: two kids suddenly dash in front of the vehicle, ignoring its warning signals. If there is one person on the right, should the rule be to veer right and likely kill that one person? What should the ethical rule be if there are three persons on the right? If all three are members of a violent gang, should the vehicle ram towards them? In that last scenario, mug shots in a visual database would be useful data points in the AI’s decision-making. But what if the three are on their way to a local school, as reformed role models, to teach kids what not to do?
Morality in everyday life, according to empirical studies by moral psychologists and philosophers, is “messy”. We are not all bad. And we are not all good, regardless of whether we are religious or not. Sometimes we are vicious. Sometimes, purely because of the scent of cinnamon buns in the mall, we are generous.
Morality in the abstract, a topic since the dawn of human civilization, is equally messy. Deontological moral philosophers, such as Immanuel Kant, believe there are absolute rights and wrongs: “Thou shalt not steal”, the sanctity of human life, and so on. Utilitarian moral philosophers, such as Jeremy Bentham, believe that moral decisions and actions should be taken to produce the most positive outcome for society.
MIT researchers conducted an experiment on a website called The Moral Machine. Millions of people from 233 countries and territories provided over 40 million decisions on the moral dilemmas autonomous vehicles face. One finding is that people are overwhelmingly utilitarian moralists: if a collision is unavoidable, the vehicle should veer right to kill two, not left to kill three, so to speak.
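The purely utilitarian rule the respondents favoured can be captured in a few lines. The following is a minimal, hypothetical sketch, not anything from the Moral Machine study or any real vehicle: it assumes only headcount matters, ignoring every deontological constraint, which is exactly the simplification the rest of this article questions.

```python
def utilitarian_choice(options):
    """Pick the action expected to harm the fewest people.

    options: a dict mapping an action name to the number of people
             expected to be harmed if that action is taken.
             (Hypothetical names and counts, for illustration only.)
    """
    return min(options, key=options.get)


# If veering left would harm three people and veering right two,
# a strict utilitarian rule veers right.
choice = utilitarian_choice({"veer_left": 3, "veer_right": 2})
print(choice)  # veer_right
```

Note that the sketch says nothing about *who* the people are: the gang-member and reformed-role-model variations above would require extra attributes and weights, and choosing those weights is precisely where the clear, non-contradictory guidelines an AI ethics engine needs become hard to write.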
Read the full story and more related stories on Asian Robotics Review