How can we create trust in mobility? That is the question Ola Boström, Vice President of Research at Veoneer, wrestles with. Veoneer designs, compiles and sells software, hardware and systems for active safety, autonomous driving, occupant protection and brake control.
We recently caught up with Ola, after a component of Veoneer’s CES 2019 demo featured Affectiva’s Emotion AI technology, to learn more about his work in automotive safety electronics. You can also listen to the full podcast here.
Tell us a little bit more about Veoneer.
Headquartered in Stockholm, Sweden, Veoneer is a spin-off of Autoliv, which has a long history in automotive safety. We design, compile and sell state-of-the-art software, hardware and systems for active safety, autonomous driving, occupant protection and brake control, working with cutting-edge technologies like vision systems, radar, night vision, electronic controls, and human-machine interfaces. Read more about Veoneer.
What has your career path been and how did you get to where you are today?
I started at Autoliv 24 years ago as a Senior Research Engineer. Throughout my time here, my research has focused on bringing new technologies into cars to make them safer. Today, I am the Vice President of Research, Innovation and IPR.
Last year, we had you speak at our Emotion AI Summit at a panel session on building trust in next generation vehicles. Could you share some thoughts on how OEMs and tier ones could go about building that trust between consumers and their vehicles?
Yes, that was a great event, and it coincided with Veoneer forming. Our purpose at Veoneer is actually to create trust in mobility, so it was very much in line with that theme. We have understood from research and from our customers that the big challenge in implementing technology that makes mobility safer is creating trust in mobility.
With new automated driving technologies slated to become available to consumers in 2019, most notably enabling vehicles to operate in a greater range of conditions with limited human involvement, the need will emerge to not just use, but trust, automated assistance in quickly evolving and complex situations.
Together with Annika Larsson, you recently wrote a LinkedIn Pulse article on Trust in Mobility, where you discussed how we can better build trust with vehicles and what systems we can put in place to enable that. Can you speak a little bit more about those thoughts?
Yes, it's something we've been vocal about lately, as we see the need to focus on and take up the challenge of collaborative driving. Autonomous driving has been hyped for some time now, and we are on a journey towards it. But right now, we must address the 1.4 million traffic deaths that occur annually, as the number of people on the roads will more than double in the coming decades.
We believe in collaborative driving. We refer to collaboration as being between the human and the car, and between cars and the humans outside them. Trust comes in because we must make sure people don't under-trust and switch off functions, or never ask for them in the first place. Conversely, it’s important not to over-trust, which we see so many examples of today with distracted driving, where people believe they can do all sorts of things on their phones while driving. The challenge is to find true trust, where you understand the limitations of the technology but still want to use it.
Also, earlier this year Veoneer had a demo at CES, so I was wondering if you could tell us a little bit about it for those who didn't attend, and what the technology was trying to accomplish. I know Affectiva was part of that as well.
Yes, they certainly were. What we showed this year was our Learning and Intelligent Vehicle, LIV, for the third time in a row. For us, learning is key to creating trust in mobility and enabling safe driving. The technology needs to learn. We have to implement new technologies, learn from what is happening on the roads, and move forward.
LIV was a user experience for invited guests; I think we had almost 1,000 people, from OEMs and investors to media, enjoying it. How it worked was you drive on a track and experience use cases, which we think of as corner cases. In one example, the car comes into a tunnel; it's dark, it's foggy. All of a sudden the car moves itself to the left, and you realize that a pedestrian has actually walked just in front of the car. The car has made sure that you don't run over that pedestrian.
Another example we showcased featured our 5G network with an edge cloud (thanks to Veoneer’s partnership with Ericsson). We had road workers connected via 5G, so the car knew exactly where they were on the road. We also had a control tower interacting with the driver when the car was blocked from moving forward in automated mode, and the control tower was able to reroute the car along a path that it would not be legal for the car to take on its own.
Then when it came to Affectiva, we were using their Emotion AI technology to record the emotions of drivers throughout the experience and show their expressions after they had completed the course. The driver could then fast-forward, or go through and look into what really happened (for example, “you were surprised when you went into the foggy, dark tunnel”). The whole experiment was a good way to show that there are corner cases and that it's important to understand the driver.
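To make that post-drive review a little more concrete, here is a minimal, purely illustrative sketch of the kind of logic such a playback feature could use. The record format, metric names, and threshold below are assumptions for illustration only, not Affectiva's or Veoneer's actual APIs or data: per-frame emotion estimates are logged during the drive, and spikes in surprise are indexed so the driver can jump straight to those moments afterward.

```python
# Illustrative sketch only: a hypothetical post-drive review of per-frame
# emotion estimates, loosely modeled on the LIV demo described above.
# The data format, metric names, and threshold are assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class EmotionSample:
    timestamp_s: float   # seconds since the drive started
    surprise: float      # 0.0 - 1.0, estimated intensity
    joy: float           # 0.0 - 1.0, estimated intensity

def moments_of_truth(samples: List[EmotionSample],
                     surprise_threshold: float = 0.7) -> List[float]:
    """Return timestamps where estimated surprise spikes above a threshold,
    so the driver can jump straight to them during post-drive playback."""
    moments = []
    previously_high = False
    for s in samples:
        is_high = s.surprise >= surprise_threshold
        if is_high and not previously_high:
            moments.append(s.timestamp_s)   # start of a new spike
        previously_high = is_high
    return moments

# Example: a driver entering the dark, foggy tunnel around t = 42 s.
drive = [
    EmotionSample(40.0, 0.10, 0.30),
    EmotionSample(41.0, 0.25, 0.20),
    EmotionSample(42.0, 0.85, 0.05),   # surprise spikes in the tunnel
    EmotionSample(43.0, 0.90, 0.05),
    EmotionSample(44.0, 0.30, 0.15),
]
print(moments_of_truth(drive))  # [42.0]
```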
Now, when you were able to detect moments of surprise or happiness, are there any ideas you have based on detecting those reactions? Have you thought about how the car would react to that emotional response in context?
Yes and no. These moments, actually, we call them “Moments of Truth.” We realize that creating trust in mobility and introducing new technology, going to a high level of collaboration, means that we are entering new ground, and we don't know how the human will react. We don't know how the human will cope with the fact that the car will take over. Similarly, the car needs to be better technology-wise in order to understand when it's appropriate to take over.
In these “moments of truth,” it is important to understand the emotions of the driver. We understand that it's important to track what is happening and learn from that, and make sure that the drivers trust the car, that the car trusts the driver—and this doesn't come for free. We must approach this from a supplier point of view, and need to do a lot of joint research in order to understand what to do.
So different people react in different ways. Have you thought at all about customization for different people over time and creating that personalized experience based on what occurs within a vehicle and what that looks like?
Definitely. Making the technology adaptive and personalized is, I believe, key to creating this trust. We know from experience that there is a huge difference between a driver who uses the car once a month in Boston and a professional who drives all day in Bangalore. So yes, in introducing these new technologies, we really need to understand how to adapt to the circumstances and to the person, and again, we need to do research. I think what we started at CES gives us a base for collecting data to do such research.
I'm curious if you can talk about anything that you're working on now, or in the near future?
Of course I can't reveal exactly what we're coming up with, but the next step is to implement the technologies we have been experimenting with (such as what we have done in the LIV vehicle) into customer products.
So at Affectiva, we're working with OEMs and tier ones on human perception AI, which is detecting all things human within the car. So from where you're sitting, how might this be most helpful to car passengers and drivers, and what would a successful implementation of this technology look like to you?
Understanding the driver and the passenger is key; trust applies to both drivers and passengers at any level of automation. I believe there is a lot of research to do, and we have to set a lot of very thoughtful, well-considered limits.
It's also important to make sure that privacy is guaranteed for drivers and passengers. We have the GDPR in Europe, which is helpful, but beyond that, all this new technology should not be a threat to ethics and privacy. I mention that because I think there could be a backlash if one is not thinking about it.
I think there is a lot of room for innovation here. How much detailed information is needed from occupants, and what kind of emotion, cognitive load, and engagement data, is the key question for really understanding drivers and passengers. We have come a long way already: driver monitoring is becoming a Euro NCAP requirement, which means there will be driver monitoring in five-star cars very soon. The same thing is happening in North America and Asia. So the expansion of this requirement will drive a lot of innovation, and as a result we will learn what is most important and what is needed.
How can we help improve road safety?
I use the term collaborative driving, and I think collaboration means both within the industry and across society. We are in an era where the automotive industry is changing so rapidly that it is increasingly important to collaborate via open source and through transparent, open discussions, with projects that include industry, universities, authorities, towns, and regions.