
Building in Concepts of Trust in Next-Generation Vehicles

11.08.18

The Affectiva 2018 Emotion AI Summit featured trust in AI within autonomous vehicles as one of many thought-provoking topics discussed by industry-leading experts. After delivering a panel introduction on Trends in Automotive, Morgan Stanley Managing Director Regina Savage moderated a panel session entitled “Building Trust in Next Generation Vehicles.” Panelists included Ola Bostrom, Ph.D., VP of Research & Patents at Veoneer; Karl Iagnemma, Ph.D., President & Co-Founder of nuTonomy (acquired by Aptiv); and Bryan Reimer, Ph.D., Research Scientist at MIT AgeLab and Associate Director at the New England University Transportation Center.

If you weren’t able to attend, here are some of the key points discussed by the panel:

Industry experts discuss how to build trust in next-generation vehicles

What is more important, technical trust or emotional trust? It’s a heavy question that goes well beyond standard rating systems. According to Dr. Karl Iagnemma of nuTonomy (acquired by Aptiv), they are equal, and both are extremely important. From a technical perspective, engineers work to specifications and requirements, building up from systems and subsystems. But building perceived trust is really hard: it has different dimensions, and you cannot decouple comfort from safety.

For example, one parameter of trust is the overall ride experience of the car: that is, how comfortable the ride is, and whether the vehicle drives the way you, a human driver, would drive. Additionally, a car can be built as a safe system (technically worthy of trust), but if it’s accelerating at full throttle and constantly slamming on the brakes, you will want to get out of that car as soon as possible because you don’t trust it. And in what’s called a “shared mental model,” the car can build trust by communicating to riders what it is thinking and what it’s going to do next.

Transparency in self-driving car accidents. From Uber to Waymo, coverage of autonomous vehicle tragedies can spark public outcry, while also prompting the question, “Would I have made that mistake?” If we have more of an understanding of how these systems work, it may forge the way for a more trusting relationship with these vehicles.

Dr. Bryan Reimer, Research Scientist at MIT AgeLab and Associate Director at the New England University Transportation Center, compared these incidents to the airline industry. One of the reasons aviation systems are trusted is that following a plane crash, both a preliminary report and a final report are released by a long-trusted authority, the National Transportation Safety Board (NTSB). These reports explain what caused the crash, so stakeholders and constituents understand what happened. Today, after a self-driving vehicle crashes, it is often police or local authorities who speak out on what they believe may have happened, a practice that would be hard to imagine after a plane goes down. This systemic problem is a source of the distrust, and it must be addressed in order for the industry to progress.

So, should we trust drivers less? Dr. Iagnemma explains that this is a good question with, he believes, a clear answer: yes. In the US, we have historically seen accident rates decrease over time as new technology has been introduced. And yet, over the past few years, accident and fatality rates have increased because drivers are getting worse, due in large part to drivers not paying attention to the road and multitasking.

Sadly, this creates an opportunity for in-cabin sensing, which has transformed from a “nice to have” into something far more urgent. He recommends that in human-piloted cars, driver monitoring systems ensure that when drivers should be paying attention they actually are, and if they are not, there must be a fallback system in place.
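The panel did not go into implementation details, but the escalation logic described here (monitor attention, warn the driver, then fall back) can be sketched in a few lines. The Python below is a minimal illustration only; the thresholds, polling rate, and callback names (get_gaze_on_road, warn_driver, engage_fallback) are all hypothetical assumptions, not any vendor’s actual API.

```python
import time

# Illustrative thresholds; real systems would tune these carefully (assumed values).
WARN_AFTER_S = 2.0      # warn after ~2 s of eyes off the road
FALLBACK_AFTER_S = 5.0  # engage a fallback after ~5 s of inattention

def monitor_driver(get_gaze_on_road, warn_driver, engage_fallback, hz=10):
    """Poll an attention estimate and escalate: warn first, then hand
    control to a fallback (e.g., a minimal-risk stop) if the driver
    stays inattentive. All callbacks are hypothetical placeholders."""
    inattentive_since = None
    while True:
        now = time.monotonic()
        if get_gaze_on_road():
            inattentive_since = None          # attention restored; reset
        else:
            if inattentive_since is None:
                inattentive_since = now       # start timing the lapse
            elapsed = now - inattentive_since
            if elapsed >= FALLBACK_AFTER_S:
                engage_fallback()             # last-resort safety action
                return
            if elapsed >= WARN_AFTER_S:
                warn_driver()                 # audible/haptic alert
        time.sleep(1.0 / hz)
```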

Dr. Reimer also pointed out that the increase in fatalities is particularly weighted toward vulnerable populations, such as pedestrians and cyclists. Yet the reason is the same: because these people are heads down in their phones, they are not seeing the situational cues around them. But the distracted pedestrian is more of a system issue, and modeling pedestrian behavior for autonomous vehicles is a whole other challenge.

Reconciling technology and regulations into the user experience. Dr. Ola Bostrom of Veoneer gave a personal example of taking a government official for a ride within a geo-fenced area for a technical demonstration; that is, the car understood the hyper-local details of where it was operating. When Ola instructed the official to take a left onto a one-way street, the car did not slam on the brakes but stopped and told the official that he was going the wrong way. This illuminated the point that the problem is not the technology; it’s about getting the technology out there in a user-friendly way. Putting geo-fenced areas into regulation enables more of a collaboration between the driver and the technology, where the car can assist the driver with its knowledge of where it is driving.
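Bostrom did not describe how the demo was built, but the behavior he recounts (the car declining a wrong-way turn gracefully because it knows the local road rules) could look roughly like the sketch below. The map structure and every name in it (ONE_WAY_STREETS, handle_turn_request, the callbacks) are illustrative assumptions, not Veoneer’s implementation.

```python
# Hypothetical hyper-local map data for the geo-fenced demo area.
ONE_WAY_STREETS = {
    "elm_st": "northbound",   # street id -> permitted travel direction
}

def handle_turn_request(street_id, direction, stop_smoothly, notify):
    """Refuse a wrong-way turn gracefully: stop smoothly and explain why,
    rather than slamming on the brakes. Callbacks are placeholders."""
    permitted = ONE_WAY_STREETS.get(street_id)
    if permitted is not None and direction != permitted:
        stop_smoothly()
        notify(f"Cannot turn {direction} onto {street_id}: "
               f"one-way street ({permitted} only).")
        return False
    return True  # turn is allowed; proceed

# Example: the driver asks to go southbound on a northbound one-way street.
handle_turn_request(
    "elm_st", "southbound",
    stop_smoothly=lambda: print("Stopping smoothly."),
    notify=print,
)
```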

Interested in learning more? Download the session recordings from the Emotion AI Summit, where this panel discusses establishing safety and trust in autonomous technology so that we can move toward mass adoption of autonomous vehicles.

Download Emotion AI Summit 2018 content now
