
BMW: How In-Cabin Sensing Helps Build the Ultimate In-Vehicle Experience

07.24.20

The automotive industry is undergoing a lot of disruption, with exciting developments emerging every day and automation driving much of the change. Traditionally, the focus has been on what's going on outside of the vehicle. At Affectiva, we're passionate about turning it inward and really trying to understand what is happening inside the vehicle with our In-Cabin Sensing (ICS) technology: what's happening with the driver? What's happening with the other occupants in the car? What is the state of the cabin? And how can we use that information to re-imagine what the future of our driving and riding experiences looks like?

Affectiva CEO Rana el Kaliouby speaks with BMW

In a recent Girl Decoded Book Tour event, Affectiva CEO Dr. Rana el Kaliouby hosted Sean Batir, Senior Machine Learning Engineer at BMW. Sean has been in the automotive industry for a while, is passionate about the future of transportation, and has been a huge champion of the work we do at Affectiva. About two years ago, he worked with Affectiva on a BMW project examining the emotional in-cabin experience. In particular, he wanted to understand how, when people are driving, you can tell whether they're falling asleep or fully attentive.

Let’s take a look at some of the broad themes from their discussion:

Technical Capabilities, Pitfalls and the Shared Mental Model in the Automotive Industry

Sean and Rana kicked off their discussion with how, when thinking about the future of automotive technology, it can be challenging as an entrepreneur to paint a vision of a world that doesn't exist yet and bring people on board with that vision. When asked how he thinks about human-machine interfaces in the context of a car and where he sees the future going, Sean pointed out that there's a lot of active debate within the automotive industry about how each company wants to frame the individual inside the context of the car.

In general, humans convey a rich canvas of emotions through the face, and one possible approach is to use this rich facial data to augment our own abilities in getting from point A to point B. He highlighted the example of driving into his office early in the morning and getting drowsy: his vehicle picks up on the drowsiness and enters a collaborative mode, where the car intervenes and indicates that its systems will be more vigilant.
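
To make that collaborative mode a bit more concrete, here is a minimal, hypothetical sketch in Python. The class, threshold, and mode names are illustrative assumptions, not Affectiva's or BMW's actual systems:

```python
# Hypothetical sketch of a drowsiness-triggered "collaborative mode".
# Class names, thresholds, and signals are illustrative assumptions only,
# not Affectiva's or BMW's actual APIs.
from dataclasses import dataclass


@dataclass
class DriverState:
    drowsiness: float  # 0.0 (alert) to 1.0 (asleep), e.g. from a camera-based model
    attention: float   # 0.0 (distracted) to 1.0 (eyes on road)


DROWSY_THRESHOLD = 0.6


def update_vehicle_mode(state: DriverState) -> str:
    """Decide how vigilant the vehicle's assistance systems should be."""
    if state.drowsiness >= DROWSY_THRESHOLD:
        # The car "enters a collaborative mode": alert the driver and tighten
        # its own safety envelope (earlier warnings, stronger assists).
        return "collaborative"
    if state.attention < 0.4:
        return "warn"  # nudge the driver to look back at the road
    return "normal"


# Example frame from an in-cabin sensing model
print(update_vehicle_mode(DriverState(drowsiness=0.72, attention=0.9)))  # -> collaborative
```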

Rana also chimed in to explain how, at Affectiva, we talk about the autonomous vehicle continuum: in today's L1/L2 vehicles, the driver is pretty much in control with a little bit of assistance. In L3, there are some semi-autonomous capabilities and a transfer of control between driver and car. Then L5 is the fully autonomous world.

So what is the role of Emotion AI throughout this continuum, beyond driver state monitoring? 

Sean stressed that, in his opinion, fully autonomous vehicles with zero human intervention are not technically impossible. Rather, what's limiting is city and municipal infrastructure and, depending on the country, governmental support. Achieving this requires real-time updates of the highway, the ability to gather the information and context of all other drivers, and nearly complete information about each vehicle's context. The difficulty is that this requires integrating sensors with systems beyond the space within the car. What's blocking it is more of an administrative, human issue than a purely technical one. We have seen these latent capabilities in some military and defense applications, so the money is there, and a good amount of the technology exists as well. It's really about coordinating people to make it happen.

About two years ago, Sean also met with Bryan Reimer from MIT's AgeLab, who told him that the more you automate, the more you need to educate about where, when and how. Yes, existing GPS technology today can, given full information, create a very simple path navigation from point A to point B, but the issue and the responsibility fall to the human. If the driver is not paying attention behind the wheel, the entire communication system the car relies on shuts off, and the driver hasn't been educated on the risks, a potentially fatal accident could occur.

In the nearer-term L3 universe, there will need to be a seamless transfer of control between the human co-pilot or driver and the car, and these two systems must be in sync. From what he's seen at BMW, humans have a tendency to take a back seat and not be as vigilant in a semi-autonomous vehicle. The CDC reports that about 94% of all serious auto accidents are due to human error, yet human beings won't be completely out of the driver's seat any time soon.

Another use case is bringing the vehicle to a stop when a pedestrian wants to cross the street. Typically, you make eye contact and wave them along: but how would a fully autonomous vehicle do this? Sean said there could be an entire data fabric, including an integrated sensor notification system between the pedestrian's phone and the car. Replicating even the simpler emotional intelligence and awareness developed over 200,000 years of evolution could involve developing a shared mental model: the naturalistic, non-taxing model we all have of what it's like to drive or be on the street with other vehicles and pedestrians, ideally built into the vehicle of the future.
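
As a rough illustration of that data-fabric idea, here is a hypothetical sketch of a crossing-intent message and a vehicle's response to it. The schema and field names are assumptions made for illustration, not a real standard or product interface:

```python
# Hypothetical sketch of the "data fabric" idea: a crossing-intent message a
# pedestrian's phone might broadcast and a vehicle might act on. The schema
# and field names are illustrative assumptions, not a real standard.
from dataclasses import dataclass


@dataclass
class CrossingIntent:
    pedestrian_id: str  # anonymized, rotating identifier
    lat: float
    lon: float
    wants_to_cross: bool


def vehicle_response(intent: CrossingIntent, distance_m: float) -> str:
    """Stands in for the eye-contact-and-wave exchange between driver and pedestrian."""
    if intent.wants_to_cross and distance_m < 30.0:
        return "yielding"  # slow down and signal the pedestrian to go ahead
    return "proceeding"


print(vehicle_response(CrossingIntent("anon-42", 42.36, -71.06, True), 12.5))  # -> yielding
```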

Another interesting opportunity around shared mental models is with Unity, who are working on virtual and augmented reality. Using simulation for data synthesis makes Unity a strong platform for iterating on a prototype of what this model could look like, since real-world data around these examples is sparse. Many companies like Affectiva, Unity, and NVIDIA (with their affordable GPUs) are finding innovative ways to bring emotional intelligence to computer science using neural networks, Region Proposal Networks, RNNs, and LSTMs. Like machine "swarm intelligence," humans are much more intelligent together, and we have the capacity to link all of this together and bring it into the vehicle of the future.


The Convergence of Well-Being and Automotive

Let’s imagine that Emotion AI is already integrated into our vehicles: our cars have amazing in-cabin sensing AI, which can understand the driver's state. It can sense how many occupants are in the vehicle and what their activity is (are they eating, drinking, or sleeping?).

Sean spoke about how to unlock the convergence of health and well-being with the automotive space. Gathering in-cabin data (with user opt-in and consent, of course) opens up the potential to provide recommendations or help improve your condition. He listed some examples, such as understanding if someone is going through depression, especially if we have historical data on them: a deviation from their normal, big, broad postures to more reticent displays over a few days could be an early warning indicator that could connect to clinical data sources.
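
To illustrate the baseline-deviation idea Sean describes, here is a hypothetical sketch of flagging a sustained shift away from a person's own expressive baseline. The "expressiveness" score, thresholds, and window are assumptions for illustration, not a clinical model:

```python
# Hypothetical illustration of flagging a sustained deviation from a person's
# own expressive baseline. Not a clinical model; the "expressiveness" score,
# thresholds, and window are illustrative assumptions only.
from statistics import mean, pstdev


def flag_sustained_deviation(history, recent, z_threshold=1.5, min_days=3):
    """Return True if recent daily expressiveness scores sit well below the
    person's historical baseline on at least `min_days` days."""
    baseline_mean = mean(history)
    baseline_std = pstdev(history) or 1e-6  # avoid division by zero
    low_days = [d for d in recent if (d - baseline_mean) / baseline_std <= -z_threshold]
    return len(low_days) >= min_days


# 30 days of typical daily "expressiveness" scores vs. a noticeably quieter recent stretch
history = [0.62, 0.58, 0.65, 0.60, 0.63] * 6
recent = [0.31, 0.28, 0.35, 0.30]
print(flag_sustained_deviation(history, recent))  # -> True: worth a gentle check-in
```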

Sean also highlighted that since the COVID pandemic has kept him from commuting to work, he has gained 2-4 hours a day from not driving. He has chosen to invest that time in productivity, which points to an opportunity for fully autonomous vehicles to increase how efficient people are at their jobs. He also sees how this could backfire, as the car today often acts as an instrument for breaking up our routine, helping us gear up for the day or wind down from it.

Rana also asked about his experience in the healthcare industry, and how in the future we could leave our cars in a better mental and emotional state than when we entered them. Sean is a big meditator, and while he doesn’t recommend meditating while driving (yet), he can see a future where cars enabled with Emotion AI deploy nudging behavior to encourage mindfulness, for both passengers and drivers.

The Bottom Line

Rana el Kaliouby and Sean Batir’s discussion had some great points on what the in-cabin experience will look like in the future and the role that Emotion AI plays in this. They spoke about the technical capabilities of the technology today, and how the automotive industry can strive towards a shared mental model between cars on the road for a fully autonomous future. Most importantly, they covered the intersection of well-being and automotive, and how vehicles can help us arrive in a better mental and emotional state than when we first buckled in.

To see more, watch the full Virtual Book Tour video here.

Sean Batir also delivered a technical workshop at our 2019 Emotion AI Summit. Watch his session to see more about his work at BMW, and visit his YouTube channel and Medium to learn more about his science and engineering outreach.

