By: Affectiva staff
Affectiva just hit a major milestone: we have collected and processed 1 MILLION face videos!
This global dataset, spanning over 58 countries, is the largest of its kind to date, representing spontaneous emotional responses of viewers while they watch media content (e.g., ads, movie trailers, television shows and online viral campaigns)!
HOW FAR WE’VE COME
Just two years ago, Affectiva had about 25,000 face videos. Last year that number grew tenfold to 250,000! Now, hitting the one-million mark is a sign of the unprecedented growth and adoption of Affdex, and it confirms that facial coding technology is going mainstream!
PUTTING THINGS IN PERSPECTIVE
Collecting a lot of natural (spontaneously occurring) face data is extremely hard to do. When I first started building facial coding technology 10 years ago, I had about 4,000 face videos at my disposal, and even that was ahead of what was typically used in research. Even today, benchmark datasets remain limited: many contain only a few hundred videos, consist solely of posed data, or draw from a single population. So being able to use our platform to leverage any device that has a camera is crucial for collecting natural face data globally.
CROSS-CULTURAL INSIGHTS AND NORMS
This dataset has allowed us to build what is by far the world's largest facial expression normative database: a benchmark of what responses to expect in each region of the world. Now we're mining the data to understand how emotion is expressed across cultures, and we're seeing fascinating differences, for example in how Americans emote versus viewers in Southeast Asia. The data also lets us examine how the collection setting (e.g., people's homes versus in-venue) and the type of content people are watching (e.g., ads, movie trailers, TV shows) affect the expression of emotions.
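To make "normative database" concrete, here is a minimal sketch of how per-region, per-content norms could be aggregated. The schema, the "smile" metric name, and the numbers are illustrative assumptions, not Affectiva's actual pipeline:

```python
import pandas as pd

# Hypothetical per-viewer records: region, content type, and a
# smile score in [0, 1] (placeholder values, not real data).
responses = pd.DataFrame({
    "region":  ["US", "US", "SE Asia", "SE Asia", "US", "SE Asia"],
    "content": ["ad", "trailer", "ad", "ad", "ad", "trailer"],
    "smile":   [0.62, 0.48, 0.35, 0.41, 0.71, 0.30],
})

# A norm here is just the expected response per region and content type;
# a new test result can then be compared against these baselines.
norms = responses.groupby(["region", "content"])["smile"].agg(["mean", "std", "count"])
print(norms)
```

With enough viewers behind each cell, a brand can ask not just "did people smile at my ad?" but "did they smile more than is typical for an ad in that region?"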
IMPROVING ACCURACY AND ROBUSTNESS OF AFFDEX METRICS
We leverage our facial video repository to train and re-train the Affdex facial expression classifiers. In effect, the technology works as a positive feedback loop, growing more intelligent every day by looking at more of its own data. To enable this, we've built the first version of something called Active Learning: a software system that automatically decides which data will help the classifiers improve most rapidly. This is machine learning with big data as we might often imagine it.
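As a rough illustration of the idea (not the Affdex implementation), here is a minimal uncertainty-sampling loop in Python. The classifier, feature dimensions, and batch size are stand-in assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(model, X_labeled, y_labeled, X_pool, batch_size=100):
    """One round of uncertainty sampling: pick the pool frames the
    current model is least sure about, so human labeling effort goes
    where it improves the classifier fastest."""
    model.fit(X_labeled, y_labeled)
    # Predicted probability of the positive class (e.g., "smile present").
    probs = model.predict_proba(X_pool)[:, 1]
    # Uncertainty is highest when the probability is closest to 0.5.
    uncertainty = -np.abs(probs - 0.5)
    # Indices of the most uncertain frames, to send for annotation.
    return np.argsort(uncertainty)[-batch_size:]

# Illustrative usage with random stand-in features:
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(500, 16)), rng.integers(0, 2, 500)
X_pool = rng.normal(size=(10_000, 16))
to_label = active_learning_round(LogisticRegression(max_iter=1000), X_lab, y_lab, X_pool)
```

Each round, the newly labeled frames join the training set and the model is retrained, which is what makes the system grow more accurate as the video repository grows.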
WHAT’S NEXT?
We believe this is only the beginning. Right now, we are sitting on a gold mine of rich stories about how the world responds to media from their laptops, tablets and smartphones, objects that are now the lifeblood of the digital world we live in. Today we capture a small snapshot of a person's response to content. Our vision is to digitize emotion to improve our daily lives. To do this, we have to capture longitudinal data about the emotional experiences of people around the globe and integrate it with contextual data. This opens up the possibility of new applications that capture emotion data and enrich our everyday digital experiences.