
Artificial Intelligence Emotion AI Automotive

The Future of AI: Ethics & Morality of Emotion Enabled Cars

11.03.17

At the Emotion AI Summit, we brought together a panel of ethicists and futurists entitled “The Future of AI: Ethics, Morality and the Workforce.” The panel was moderated by Eric Schurenberg, Editor in Chief & President of Inc. Magazine, and discussed the future of AI in the context of ethics. Panelists included Richard Yonck, futurist and author of "Heart of the Machine"; John C. Havens, Executive Director of The IEEE Global AI Ethics Initiative; and Rana el Kaliouby, PhD, Co-founder & CEO of Affectiva.

A number of ethical challenges were discussed on this panel; here’s what the group had to say about one particular scenario involving AI and self-driving cars.


Eric Schurenberg posed the following scenario: the year is 2020. An autonomous Tesla and its passenger are travelling down a two-lane road when a little girl chases a ball into the road - right into the car’s path. A human driver would have to make an agonizing split-second decision: continue and kill the child, or swerve into oncoming traffic and likely die in a head-on collision. The AI controller of the car, however, does not agonize: it simply does what it was programmed to do. What should it be programmed to do? Who decides what it does? A software programmer at the company? A yet-to-be-created traffic and safety organization that sets rules and standards for this type of scenario?

John Havens: The work that we are doing at the IEEE embodies this thought experiment in general. Our mission statement is to prioritize applied ethical considerations. There is a whole field created by Batya Friedman called Value Sensitive Design, which essentially asks the question, “What’s one thing we can agree in this scenario is the good, or correct, decision?” What’s good is that we are discussing it now, before autonomous vehicles are used en masse in society. AI is different in what it offers in terms of challenges and benefits, and in terms of human emotion and agency. In that sense, we don’t yet know the unintended negative, positive and even end-positive consequences. But when you ask these types of ethical questions, it allows you to identify some of those end-positive consequences as well.

Rana el Kaliouby: This is a hard and complex question. I serve on the WEF Global Future Council on Robotics and AI, where I interface with global leaders and policy makers. It’s hard to converge on a standard set of rules that everyone is going to follow. If you step back, the human race has not done that yet - national politics, different points of view, cultures - it will be a very hard question to tackle. There are conflicting agendas.

Richard Yonck: Agreed - especially since cultural differences produce significant variations in different parts of the world. Attitudes about who, essentially, will be killed in that situation can vary to a degree simply based on where the scenario takes place.

Philosophically, this falls under something like the Trolley problem, which is a thought experiment in ethics. The general form of the problem is this:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track.

You have two options:

  • Do nothing, and the trolley kills the five people on the main track.
  • Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?” (Source: Wikipedia)


We have been talking about this for decades as a purely theoretical question - and we now find ourselves at a point in time where technology makes it a real issue, moving beyond theory and into practice.

It’s also interesting that there is an assumption that, in this situation, the human action / choice / decision in that moment is superior to what the machine has essentially been programmed to do. That is a highly emotional, instantaneous response, and so many different people - even the same person under the exact same circumstances - might respond very differently. You also can’t assume that human beings would make the optimal choice or correct decision under the same circumstances.

Eric Schurenberg: It’s just that the human decision in that moment would not be a decision made in the cold light of logic by software.

Rana el Kaliouby: Exactly - it essentially becomes much more deterministic when it is a machine following a certain set of rules, or learned / reinforced behaviors that it has acquired over time. Looking back to Danny Lange’s talk about the reward function: what are you trying to optimize for? Is it the number of people that you save? For the car, is it about saving the occupant? Does it depend on the age / gender / culture / ethnicity of the person being saved?
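To make that question concrete, here is a deliberately simplified, hypothetical sketch of what "optimizing for" something could look like as a reward function. The Outcome fields, weights, and scoring rule are illustrative assumptions made for this post, not anything described by the panelists or implemented by any automaker.

```python
# Hypothetical sketch only: a toy "reward function" for the scenario above.
# Every choice of weights encodes a moral stance.

from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_survives: bool    # does the car's passenger survive?
    pedestrians_killed: int    # people outside the car who are killed
    oncoming_collisions: int   # head-on collisions caused by swerving

def reward(outcome: Outcome,
           occupant_weight: float = 1.0,
           pedestrian_weight: float = 1.0) -> float:
    """Score an outcome; a planner would pick the action with the highest score."""
    score = 0.0
    score += occupant_weight if outcome.occupant_survives else -occupant_weight
    score -= pedestrian_weight * outcome.pedestrians_killed
    score -= 0.5 * outcome.oncoming_collisions  # arbitrary penalty for causing a crash
    return score

# Comparing the two actions from Eric Schurenberg's scenario:
continue_straight = Outcome(occupant_survives=True, pedestrians_killed=1, oncoming_collisions=0)
swerve = Outcome(occupant_survives=False, pedestrians_killed=0, oncoming_collisions=1)
print(reward(continue_straight), reward(swerve))
```

With equal weights, the sketch scores "continue straight" higher than "swerve"; raising pedestrian_weight flips that ranking - which is exactly the point: whoever sets the weights is answering the moral question.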


John Havens: To add another perspective: Jason Millar had the wonderful idea of a moral proxy, which I think would apply here. (Source: “Technological Moral Proxies and the Ethical Limits of Automating Decision-Making in Robotics and Artificial Intelligence” by Jason Millar.) In this situation, what we don’t talk about is how the rider would have their wishes honored. Imagine you have a data representation of yourself that carries your terms and conditions about many different things, reflecting your values. When we sit in that vehicle as humans, there could be a representation, or a way to project, “Here is my moral truth.” A similar example today: depending on your faith, you make certain medical decisions in a hospital. Jehovah’s Witnesses, for example, do not believe in blood transfusions. That is a piece of information you have dictated, and in various states healthcare providers know not to give you a blood transfusion.

There is a lot of research showing that many people express this nobility and say, “I would not kill the girl!” - but then, when you push them, they say, “OK, I would probably let the girl die because I don’t want to die.” Nonetheless, these are the moral questions that, if we face them individually and as a society, could let us figure out a way to develop this moral proxy: a digital and algorithmic representation of our identity that says what we would say or do in that situation.
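As a rough illustration of the moral-proxy idea, here is a minimal sketch of how a rider's declared "terms and conditions" might be represented as data that a system could consult. The field names, directive strings, and the directive_for helper are assumptions made for illustration; they are not drawn from Millar's paper or any real system.

```python
# Hypothetical sketch only: one possible data representation of a "moral proxy".

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MoralProxy:
    """A rider's declared values, analogous to an advance medical directive."""
    owner: str
    # Declared wishes keyed by decision type, e.g. "no blood transfusion".
    directives: Dict[str, str] = field(default_factory=dict)

    def directive_for(self, decision: str, default: str = "defer_to_regulation") -> str:
        """Return the rider's stated wish for a decision type, if they declared one."""
        return self.directives.get(decision, default)

# Example: a rider who has declared they would rather swerve than hit a pedestrian.
rider = MoralProxy(
    owner="example rider",
    directives={
        "unavoidable_collision": "prioritize_pedestrians",
        "medical_emergency": "no_blood_transfusion",
    },
)
print(rider.directive_for("unavoidable_collision"))  # -> prioritize_pedestrians
print(rider.directive_for("parking_dispute"))        # -> defer_to_regulation
```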

Interested in seeing the full panel? Access the Emotion AI Summit session recording here: “The Future of AI: Ethics, Morality and the Workforce.”

