In 2018, over 40,000 people died on American roads, according to figures from the National Safety Council—roughly the same number of women who die from breast cancer in the US each year. Some studies of accidents assert that 94% of crashes are caused by human error. Autonomous vehicles are never drunk, never distracted by their phones, and never drowsy—so one could argue that the potential gain in road safety is comparable to having line of sight to a cure for breast cancer.
In our latest Affectiva Asks podcast, Affectiva's Co-Founder and CEO Dr. Rana el Kaliouby interviews Dr. Karl Iagnemma, President of Autonomous Mobility at Aptiv. During their conversation, he talks about his background, his early work with nuTonomy (which has since been acquired by Aptiv), and how autonomous vehicle technology will transform the future of transportation and mobility.
Dr. Karl Iagnemma, President of Autonomous Mobility at Aptiv, speaking at the 2018 Emotion AI Summit
You have a really fascinating background, going from academia at MIT to starting nuTonomy to being acquired by Aptiv. Tell us about that trajectory.
Well, I grew up in the Detroit area, so cars were in my DNA. My father worked as an engineer in the automotive industry. When I was an undergraduate at the University of Michigan, I did an internship at General Motors to see what the industry was like. This was in the early '90s and I would say at that time, the auto industry was not the place to be if you wanted to develop new technology. I left that internship and I thought the auto industry wasn’t for me.
I then went to MIT and started studying robotics. But a funny thing happened: the robots I was building had four wheels and rolled around. At the beginning of my research they were small—you could hold one in two hands—but they kept getting bigger and bigger. At some point, I realized that all the technology I was developing for mobile robots was the same technology that could be applied to intelligent vehicles, driver assistance systems, and eventually fully autonomous vehicles.
So in 2013, together with a colleague at MIT, Emilio Frazzoli, I launched nuTonomy, a startup focused on building software for fully autonomous vehicles. We launched simultaneously in Cambridge, MA and in Singapore, which we decided would be the right places to find early adopters for autonomous vehicles, and we grew the company to about 120 people from 2013 to 2017. In 2017, we joined Aptiv, an automotive Tier One supplier, and we've been growing within Aptiv ever since, still focused on the same mission of developing software and technology for fully autonomous vehicles.
What's the biggest challenge that you're facing right now?
There are two major challenges associated with autonomous driving. One is perception: helping the car look around itself and really comprehend its surroundings. That means putting together a picture of the world from all its various sensors (in our car, a couple dozen of them), stitching all that data together, and interpreting it: for example, identifying people, other vehicles, and cyclists, and predicting what they might do next.
This leads us to the other area that's very hard for autonomous vehicles today, which is the policy problem: deciding what the car should do next. As human drivers, we are very good at predicting what other actors on the road are going to do next, but for a machine that prediction is computationally very complex. On top of that, the car has to factor in the rules of the road and the driving preferences and norms of whatever city it happens to be in.
Humans have the advantage of being able to use memory and synthesize various inputs to make the right decision. From a technology perspective, we rely heavily on cutting-edge AI—which I know is one of Affectiva's strong areas of expertise—but it remains very difficult. At Aptiv, we rely on a technology we call Structured AI to make the right decision on the road.
There is a public debate on the timeline for rolling out these vehicles. What's your view on when we are going to actually see a lot more of these vehicles on the road?
That's a question I get asked a lot, because as an industry there's a lot of excitement around autonomous vehicles and the promise of the technology. In the early days of development, there was optimism from the media and from early developers about when we would deploy these systems at scale. Then we got into development and understood some of the real technical challenges associated with these hard problems around perception and decision making.
Now, I think reality has set in. People understand what's in front of them, and we have a much clearer sense of when this technology is going to hit the road. You can experience a driverless car on the road today: if you go to Las Vegas, you can hail one of our cars. We've given over 50,000 rides to members of the public over the last year on the Lyft network. We still have safety drivers in the car behind the wheel, but generally they're not even touching it; they are just there to monitor the system in case anything unexpected happens.
2020 is when we expect to have our first truly driverless systems. They will be operating in a very constrained environment—a narrow operating domain, as we call it. They won't be picking up and dropping off paying customers on the Las Vegas Strip; it'll be a much simpler driving environment that will be expanded over time. At Aptiv, we envision 2022 as the year we will actually be picking up and dropping off paying customers in a truly driverless car, with no one behind the wheel. Again, we'll be operating that way in certain cities; it won't be the case that any listener of this podcast will be able to hail a driverless car in any city around the world.
What is the role of data in training these systems, and exposing them to a wide range of scenarios?
We rely extremely heavily on data to improve our technology: we look at what the car sees and the decisions it makes based on the data it collects, and we retrain and improve our systems every day through that feedback. A significant part of our budget, in fact, is allocated toward collecting, transmitting, and storing data. That's the technical dimension. There's also a non-technical dimension, where we collect a lot of data around the user experience. How do people feel when they're in our cars? What is their feedback at the end of the trip? What did they think of it?
What do they think?
They seem to like it: we get a one-to-five star rating at the end of every trip, just as when you take a normal Lyft ride. Our star rating today is 4.95 out of 5 across more than 50,000 trips, and we're very proud of that. But for the rides that come in under five stars, we want to know why. Although driverless technology is an incredibly hard technical problem, and an extremely interesting one for the engineer in me, it also has a societal dimension that really can't be overlooked. We could develop a technology that's a radical breakthrough, but if society is not ready for it, and if we haven't created a great product, then people won't use it. We won't realize the benefits of driverless technology, and that would be a great shame.
That's a great tie in with Affectiva's work and mission, which is to humanize technology. We're very focused on building human-centric AI. You've talked a lot about how it's really critical to think through the experience of riding in these vehicles. Can you comment on how you think about that?
Based on our experience, we think the experience in the car is really the determining factor in whether this technology will ever be widely adopted by society. We realized that it's not enough for the technology to be safe; it also has to be comfortable. So when passengers ride in one of our cars in Las Vegas, they have to not only experience a safe trip, but also perceive that the trip was safe and come away with confidence in the technology.
One of the ways we try to do this is by creating a shared mental model, which displays the right information at the right time to the passenger in the back seat. This helps them understand what the car is thinking. For example, if the car makes a swerve within the lane, the person in the back seat may wonder, "What the heck just happened?" We want to make sure we can explain why the car did what it did—say, that it swerved to avoid an obstacle in the road, like an animal. From a rider's perspective, having that understanding, and the confidence in the technology that comes with it, makes a big difference.
Human-centric AI, to me, is also about personalization. Automotive technology is no different from any other form of technology in that it's becoming much more personalized. It used to be that you bought a Chevy, your neighbor bought a Chevy, and it was the same car—you didn't have a choice. It did the same thing no matter who the passenger or the driver was.
The expectation for the future will be that your car adapts to you and learns how you drive. Your preferences for the cabin, for comfort, and for entertainment will be table stakes. So being able to capture that information efficiently and create a truly personal experience will be one of the next frontiers in automotive.
Last year we had you speak on a future-of-mobility panel at our Emotion AI Summit. This year's theme is human-centric AI, and we'll have a number of automotive talks around road safety, the in-cabin experience, and the vehicle becoming a health and wellness pod. Can you share with our listeners what your experience at the Summit was like? And would you recommend that people attend?
I can definitely recommend it. Automotive technology and AI are coming together at a really fast rate. What was great about the Emotion AI Summit was getting domain experts from both sides in the room and bringing those two communities together. It's exciting because I think we're just scratching the surface of understanding what's possible when we apply AI, and human-centric AI in particular, to the passenger vehicle.
One of the really exciting things is exploring what's possible around personalization of the ride experience; I think that's going to be incredibly powerful. Affectiva's technology is a great example of how we can use AI to understand emotion, and I think there are many applications for that in the vehicle. We're going to start seeing AI technology coming into automotive features very, very soon, so these conferences are coming just in time.