
5 Cool things we are doing in Machine Learning and Computer Vision to Humanize Technology

05.10.22


For several decades now I have been on a mission to humanize technology – bridging the gap between human and machine. In 2009, I co-founded Affectiva where we invented the exciting new tech category of Emotion AI and evangelized its many applications. 

Affectiva is now a Smart Eye company. For over 20 years Smart Eye has developed AI-based eye tracking technology that is widely used in both academic and commercial sectors, including automotive, aerospace, aviation, assistive technology, behavioral science and more. Together we have a lofty vision: to build Human Insight AI, technology that understands, supports and predicts human behavior in complex environments. 

Today, our focus areas are Automotive and Behavioral Research. There is a lot of momentum in automotive, where regulation and legislation are accelerating demand for Driver Monitoring Systems and where we see an evolution to Interior Sensing. This new area combines driver monitoring with occupant monitoring to understand what’s happening in a vehicle: the state of the driver, the cabin, and the passengers in it.

We develop our technology using deep learning, computer vision, and massive amounts of real-world data. In order to build systems that can detect nuanced human emotions, complex cognitive states, activities, interactions and objects people use, we need an amazing team of talented scientists and engineers to make this mission a reality. 

If you are checking out our open job positions or know someone who might be, we thought it would be interesting to offer a look behind the curtain at some of the inner workings of Smart Eye. Here are 5 cool things that we are doing in machine learning and computer vision today: 

 
1 - We Work with Synthetic Data 

Machine learning algorithms are very data-hungry. Smart Eye has an internal synthesis tool that lets us simulate the inside of a car cabin: we can switch drivers, replay in-car behavior measured by our SDK, and analyze how features perform with new, unseen cameras, lenses, placements, car cabins, and more. This synthesis tool generates not only car cabin images but also pixel-perfect labels, without requiring any human annotation. It also helps us experiment with different camera placements within a vehicle: if we were to move the camera up or down, we can check in the simulated environment whether we still have visibility of the driver. This virtual environment is especially useful when the vehicle you are designing for doesn’t exist yet.
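To make the camera-placement experiment concrete, here is a minimal sketch of the underlying idea (our actual synthesis tool is far more sophisticated; all coordinates and names below are hypothetical): project 3D landmarks on a simulated driver through a pinhole camera model and check whether they still land inside the image for a candidate mount position.

```python
import numpy as np

def project_points(points_3d, cam_pos, focal_px, img_w, img_h):
    """Pinhole projection of 3D points for a camera at cam_pos looking
    along +Z (no rotation, for simplicity); returns pixel coordinates
    and a per-point visibility mask."""
    rel = points_3d - cam_pos                 # points relative to the camera
    z = rel[:, 2]
    u = focal_px * rel[:, 0] / z + img_w / 2  # horizontal pixel coordinate
    v = focal_px * rel[:, 1] / z + img_h / 2  # vertical pixel coordinate
    visible = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return np.stack([u, v], axis=1), visible

# Hypothetical driver face landmarks in cabin coordinates (meters).
face = np.array([[0.0, 0.0, 1.2], [0.05, -0.03, 1.2], [-0.05, -0.03, 1.2]])

# Compare two candidate camera mounts: one near the instrument cluster,
# and one moved far up, where the driver's face leaves the field of view.
for mount in (np.array([0.0, -0.3, 0.0]), np.array([0.0, 1.0, 0.0])):
    _, visible = project_points(face, mount, focal_px=800, img_w=1280, img_h=960)
    print(mount, "-> sees driver" if visible.all() else "-> loses driver")
```

The real tool renders full images and labels for each placement, but the pass/fail visibility question it answers is essentially this one.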

We’re also investing in extending these capabilities to a wider set of features, simulating multiple car cabins in service of rapid development of new features. Car interior segmentation currently leverages synthetic data (not yet public, as it is very early research). Our team is also always submitting their work for publication: check out this recent paper on an innovative approach to combating driver distraction with gaze estimation, written in collaboration with MIT AgeLab’s Advanced Vehicle Technology Consortium (AVT).

 

2 - We do real work on embedded engineering

We also work with a best-of-breed team that has a track record of bringing AI features to the edge, building computer vision and machine learning into embedded platforms. While it’s common for machine learning practitioners to work exclusively on nameless features, models, or use cases that live on a cloud server somewhere and never interface with the consumer at all, at Smart Eye we deploy our technology into real vehicles that are driving on roads today.

Our Driver Monitoring technology is already on the road, with 93 makes and models shipping over the next several years.

 

3 - We work on next-generation sensors and Systems on Chips (SoCs)

Getting our tech to run offline on bleeding-edge embedded platforms means optimizing computer vision algorithms for the latest state-of-the-art cameras and optical sensors, some not yet available on the market, as well as for automotive-grade hardware.

We are already working with the newly announced OmniVision OX05B1S 5MP RGB-IR sensor, which represents the next generation of interior sensing cameras. Similarly, we are working with the latest next-generation automotive embedded SoCs designed for in-vehicle edge AI computing. These powerful SoCs, such as the Texas Instruments TDA4 and the Qualcomm 8155, have built-in AI accelerators that handle workloads in excess of 1 tera operations per second (TOPS). Having the in-house expertise to run our deep learning algorithms with a minimal footprint is essential, as it represents a big jump forward in enabling Interior Sensing AI.
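To give a flavor of what minimal-footprint work involves (a generic sketch using TensorFlow Lite with a toy stand-in model, not our production toolchain), post-training full-integer quantization converts a network to the int8 arithmetic these accelerators are designed for, while shrinking it roughly fourfold:

```python
import numpy as np
import tensorflow as tf

# A stand-in model; in practice this would be a trained vision network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples; real cabin images would be used here.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # int8 end to end for the accelerator
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
print(f"{len(tflite_model) / 1024:.1f} KiB after full-integer quantization")
```

The quantized model would then be compiled and deployed with the vendor’s own toolchain for the specific SoC.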

 

4 - We are improving future road safety by building innovative Interior Sensing AI

Every 24 seconds a human life is lost in traffic. The majority of these accidents are caused by human error: drivers who are distracted or fatigued. Driven by regulation and legislation, car manufacturers are now deploying Driver Monitoring Systems that can detect driver impairment and enable appropriate interventions.

Today, we are leveraging AI to improve upon, and in some cases replace, existing in-vehicle sensors. While sensors are typically cheap, they are additional parts that must be built into a vehicle, and each does only one specific thing. For example, a seat belt buckle sensor can tell when a buckle is physically connected, but an AI-based vision system can also tell when a belt is improperly positioned. And vision extends to the second or third row, where AI can still determine seat belt positioning but dedicated sensors are either not cost-effective or technically challenging to integrate.

Another example is the Occupant Detection System, which today relies on an air bladder sensor that classifies anything over 50 kg as a person sitting in the seat. In-vehicle AI can detect that a human being is actually in a seat, and how they are positioned. This augments the existing weight sensors installed in driver and passenger seats, improving safety outcomes by determining when it is appropriate and safe to engage safety systems like airbags.
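As a deliberately simplified sketch of that fusion logic (the 50 kg threshold comes from the description above; the names and states are illustrative, not our production interface):

```python
from dataclasses import dataclass

WEIGHT_THRESHOLD_KG = 50.0  # bladder sensor classifies >50 kg as an occupant

@dataclass
class SeatState:
    weight_kg: float          # from the air bladder / weight sensor
    vision_occupied: bool     # camera-based occupant detection
    vision_in_position: bool  # occupant seated within the airbag's safe zone

def airbag_enabled(seat: SeatState) -> bool:
    """Arm the airbag only when both the legacy weight sensor and the
    vision system agree there is a properly positioned occupant."""
    weight_says_person = seat.weight_kg > WEIGHT_THRESHOLD_KG
    return weight_says_person and seat.vision_occupied and seat.vision_in_position

# A 60 kg bag of luggage: the weight sensor alone would arm the airbag,
# but vision sees no person, so the fused decision is to keep it disabled.
print(airbag_enabled(SeatState(60.0, vision_occupied=False, vision_in_position=False)))
```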

Additionally, for future vehicles to earn the highest Euro NCAP safety rating, they must be able to detect children left unattended in the vehicle. Using interior sensing AI to detect a child’s presence helps prevent the avoidable tragedy of a child being unintentionally left behind.

All of these examples show that with a single camera placed in the vehicle, AI can deliver these additional safety features without the need to install multiple dedicated sensors all over the vehicle. 

 

5 - We are building the next generation Human-Machine Interface (HMI)

It’s exciting to work on any number of these projects, but why stop at the present? With a high-quality sensor and a powerful automotive SoC in the cabin, car makers are actively exploring what else is possible inside the vehicle, including the next generation of the human-machine interface (HMI). 

These features include things like gesture control, where occupants can perform gestures to interact with the vehicle’s infotainment system without needing to avert their eyes from the road: a safety feature, but also a very cool, futuristic one! Our team is also exploring things like emotion recognition for adaptive interactive voice response (IVR) and in-vehicle virtual agents. And our project pipeline, which constantly evolves to meet the needs of our automotive customers, includes additional in-vehicle use cases such as driver recognition in service of in-vehicle personalization, detection of objects left behind, animal presence detection, and much more. 
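As a toy illustration of the thin layer that turns recognized gestures into infotainment commands (both the gesture labels and the commands here are hypothetical; a real classifier’s output is far richer):

```python
# Minimal sketch of mapping recognized gestures to infotainment commands.
# Gesture labels would come from a vision classifier; both the labels and
# the commands here are hypothetical.
GESTURE_COMMANDS = {
    "swipe_left": "previous_track",
    "swipe_right": "next_track",
    "palm_open": "pause_media",
    "thumbs_up": "accept_call",
}

def handle_gesture(label: str, confidence: float, threshold: float = 0.8) -> str | None:
    """Dispatch a classifier output to a command, ignoring low-confidence
    detections so spurious hand movements don't trigger anything."""
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(label)

print(handle_gesture("swipe_right", 0.93))  # -> "next_track"
print(handle_gesture("swipe_right", 0.40))  # -> None (too uncertain)
```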

 


 
The Bottom Line: We’re just getting started, come join us! 

From working with the latest and greatest hardware to creating the mobility experiences of the future, our team has the exciting opportunity to shape how we get from point A to point B in the cars of tomorrow. We are hiring for critical positions across the AI pipeline: in data engineering, to build and scale that pipeline; in computer vision and machine learning, to research and develop these classifiers; and in software and embedded engineering, to bring these features to automotive-grade chips. It’s an incredible opportunity to build technology that will be deployed in millions of vehicles worldwide and could save and improve people’s lives. If any of this sounds exciting, we hope you’ll consider joining us to help make a difference in how the world experiences mobility. Apply now.

 
