
How Affectiva is Building a Responsive and Adaptive In-Cabin Environment

09.19.19


Cars and the way we use them are changing. With the ubiquity of mobile devices and information at our fingertips, people expect their cars to provide a similar experience: in fact, 52% of consumers believe connected car solutions will add significant value to the transportation experience. They want an in-cabin environment that’s adaptive and tuned to their needs in the moment. To fulfill that, automakers and mobility service providers need a deep understanding of what’s happening with people in the vehicle: how are they reacting to the in-cabin environment during the trip? How can the car adapt to provide the best possible experience? This is crucial not only for ridesharing and the cars we drive today, but also for the future of mobility.

In our latest Affectiva Asks podcast, we interview Affectiva Senior Product Manager Abdelrahman Mahmoud. During the interview, he talks to us a bit about his background in software engineering, the challenges he sees within the automotive industry, and how Affectiva’s Human Perception AI technology aims to enhance the occupant experience in next-generation vehicles.

Abdo Mahmoud on occupant experience monitoring in vehicles

Let's start with your background. Can you speak to your career trajectory and how you arrived at your role at Affectiva today?

I grew up in Cairo, Egypt, and was passionate about machines and human-machine interaction in general. During college, I was lucky enough to get involved in a project with the Affective Computing Group at the MIT Media Lab, where we were detecting facial expressions. Affectiva then spun out of the Affective Computing Group to commercialize the facial expression detection technology. The main application at first was market research; since then, we’ve been working on automotive as well.

I joined Affectiva as a software engineer: I wanted to learn more about what it would take to actually build cloud software and also help productize a technology that I worked on in the Media Lab. Eventually, I wanted to play a more critical role in defining use cases, defining what human-machine interaction should look like and how this emotion recognition technology should be used. That's how I ended up being a product manager.

You're currently focused on the in-cabin occupant experience of next-generation vehicles. Can you talk a little bit more about that: what it means, and maybe what you're working on?

Generally speaking, I work with OEMs on defining use cases for how AI technology could be used in smart cabins. That ranges from telling the car's smart AI, virtual assistant, or entertainment system where people are sitting, all the way through to how they're feeling and what objects they've interacted with or left behind in the car. I focus on the holistic view of the cabin: I typically work more with cameras than with microphones (although we have both technologies), using cameras that see the whole cabin and analyzing the scene to understand information about everyone in it.
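As a rough illustration of what that holistic cabin state could look like downstream, here is a minimal sketch in Python. The types and field names (OccupantState, objects_left_behind, and so on) are hypothetical assumptions for illustration, not Affectiva's actual API; they just show the kind of per-occupant and whole-cabin signals a camera-based system might expose to a virtual assistant.

```python
from dataclasses import dataclass, field

@dataclass
class OccupantState:
    # Hypothetical per-person signals a cabin camera system could report.
    seat: str                 # e.g. "driver", "front_passenger", "rear_left"
    face_detected: bool
    dominant_emotion: str     # e.g. "joy", "neutral", "drowsy"
    attention_on_road: bool

@dataclass
class CabinState:
    # Whole-cabin view: everyone in the vehicle plus scene-level observations.
    occupants: list = field(default_factory=list)
    objects_left_behind: list = field(default_factory=list)  # e.g. ["phone", "bag"]

# A downstream virtual assistant could consume a frame-by-frame CabinState:
state = CabinState(
    occupants=[OccupantState("driver", True, "neutral", True),
               OccupantState("rear_left", True, "joy", False)],
    objects_left_behind=["phone"],
)
if state.objects_left_behind:
    print(f"Reminder: you left {', '.join(state.objects_left_behind)} in the car.")
```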

What are some of the key challenges the automotive industry faces in trying to improve these in-cabin occupant experiences within the car?

I think, like any new technology, AI comes with its own challenges around productization. One big challenge is running on in-car platforms that mostly weren't designed to run AI systems to begin with: the designers of those systems and chips didn't have complex AI pipelines in mind for their hardware from the get-go.

Another challenge is how users are going to receive these interactions, and how to make the interactions learn from the user's behavior. A third challenge is how to design HMI systems that compensate for failures in an AI model, because AI models are never perfect. There's always a trade-off between the accuracy of a model and the speed of inference, for instance. How do you compensate for that with smart HMI systems?
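To make that third challenge concrete, here is a minimal sketch of one common pattern for compensating for an imperfect model: grade the HMI response by the model's confidence, so a low-confidence prediction triggers a gentle nudge rather than a hard intervention. The drowsiness scenario, function name, and threshold values are illustrative assumptions, not a specific Affectiva implementation.

```python
def respond_to_drowsiness(confidence: float) -> str:
    """Map a model's drowsiness confidence to a graded HMI response.

    The thresholds are illustrative; a real system would tune them against
    the model's measured precision/recall and the cost of a false alarm.
    """
    if confidence >= 0.9:
        return "Strong intervention: audible alert and seat vibration."
    elif confidence >= 0.6:
        return "Soft intervention: suggest a coffee break on the display."
    else:
        return "No action: confidence too low to risk a false alarm."

# Example: the same model output produces very different HMI behavior.
for c in (0.95, 0.7, 0.3):
    print(f"confidence={c}: {respond_to_drowsiness(c)}")
```

The point of the graded response is that a false alarm from a soft suggestion costs the user very little, while a hard intervention is reserved for cases where the model is confident enough to justify interrupting them.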

Understanding these challenges, what are the opportunities for shaping the new in-cabin sensing solutions of the future? And what role do you see Affectiva technology playing in this?

At Affectiva, we're not just focused on emotion. We want to build an AI system that can analyze and understand different aspects of the cabin state, and that is critical for the smart cabins of the future. With the push toward autonomous cars (Level 3 to Level 4 vehicles), the next challenge in automotive is going to be using the in-car HMI to differentiate experiences across brands. This also applies to ride-sharing use cases, where you have a complete robo-taxi: how can you differentiate one ride-sharing company from another?

Another challenge is monetization: for instance, how to capture people's attention during the time they spend inside the cabin, and how to monetize that. For all of these challenges around the smart cabin of the future, the systems need to understand what is happening inside the cabin. They need data on where people are sitting, what objects are in the scene, whether people are paying attention, and so on. That's essentially the next frontier after self-driving cars: how do you customize or personalize the driving experience?

How do you see the role of OEMs and Tier 1s transforming with all of this industry change? Are you seeing any trends in your conversations with them and our partners?

I think the industry is changing: traditionally, requirements were handed down from the OEMs to the Tier 1 suppliers to the Tier 2s. Generally speaking, defining requirements is becoming more of a collaborative process among these three parties. Because these systems are becoming more complex, everyone has to get involved in the design from day one.

Abdo Mahmoud speaking at M:bility California

You just spoke at Drive World 2019 and M:bility California, and will host a workshop with Mike Gionfriddo at our Emotion AI Summit. Could you speak a little bit about what you will be covering in that presentation?

I'll talk a little bit about the technical challenges of deploying an AI-based product, specifically for in-cabin sensing in the car: what we have learned so far from our experience, and what vehicle considerations you should take into account when designing such a product.

This would also be your third Emotion AI Summit, but the first one where you're a speaker. Can you tell us about your experiences with the Summit, what the event is like, and what you thought of prior years?

Over the last couple of years, the Summit has been an excellent opportunity to interact with people across functions and organizations who are working on AI, building AI systems, or building HMI systems that use AI. It's a great forum for talking about things like designing these systems in an ethical way and avoiding bias; that was one of the main themes of last year's Emotion AI Summit. This year's theme is even closer to my heart: it's about how to design human-centric systems and HMIs. It's going to be an excellent opportunity to talk with people in different industries, especially the automotive industry, about that topic.

Anything else you'd like to touch on, or if you have any asks of people listening, or recommendations where people can go to learn more?

Definitely come to the Emotion AI Summit! We would love to have you and to talk. Check out our website to learn more about our automotive products, whether it's driver monitoring systems or in-cabin sensing systems. Feel free to reach out to me on Twitter, or through Affectiva's Twitter, to talk more about in-cabin sensing. We'd love to hear more about what data you're looking for inside the cabin, how you envision these systems, and the interaction between users and these systems in the car.
