
Face Video Milestone!

By: Affectiva staff

Affectiva just hit a major milestone: we have collected and processed 1 MILLION face videos!

This global dataset, spanning more than 58 countries, is the largest of its kind to date, representing the spontaneous emotional responses of viewers as they watch media content (e.g., ads, movie trailers, television shows, and online viral campaigns)!

HOW FAR WE’VE COME

Just two years ago, Affectiva had about 25,000 face videos. Last year that number grew tenfold to 250,000. And now, hitting the one-million mark is a sign of the unprecedented growth and adoption of Affdex, and confirms that facial coding technology is going mainstream!

PUTTING THINGS IN PERSPECTIVE

Collecting a large amount of natural (spontaneously occurring) face data is extremely hard to do. When I first started building facial coding technology 10 years ago, I had about 4,000 face videos at my disposal – and that was already ahead of what was typically used in research. Even today, benchmark datasets remain limited, often containing only a few hundred videos and comprising only posed data or data from a single population. So being able to use our platform to leverage any camera-equipped device is crucial for collecting natural face data globally.

CROSS-CULTURAL INSIGHTS AND NORMS

This dataset has allowed us to build what is by far the world's largest facial expression normative database, a benchmark of what responses to expect in each region of the world. Now, we're mining the data to understand how emotion is expressed across cultures, and we're seeing fascinating differences – for example, how Americans emote versus viewers in Southeast Asia. The data is also necessary for examining how factors such as the collection setting (e.g., people's homes versus an in-venue environment) and the type of content people are watching (e.g., ads, movie trailers, TV shows) affect the expression of emotion.
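To make the idea of a normative benchmark concrete, here is a minimal sketch of how a new result might be scored against a regional norm. This is a hypothetical illustration, not Affectiva's actual methodology: the metric name, the sample values, and the z-score comparison are all assumptions for the example.

```python
import statistics

# Hypothetical regional norm: peak smile scores observed for past ads in one region
region_norm = [0.12, 0.30, 0.25, 0.18, 0.22, 0.35, 0.28]

def z_score(value, norm):
    """Standardized distance of a new ad's response from the regional benchmark.

    A z-score near 0 means a typical response for the region; a large positive
    value means the ad drew unusually strong expressions relative to the norm.
    """
    mean = statistics.mean(norm)
    stdev = statistics.stdev(norm)
    return (value - mean) / stdev

# A new ad whose peak smile score was 0.40 scores well above this region's norm
print(round(z_score(0.40, region_norm), 2))
```

Because norms differ by region, the same raw response can be "typical" in one market and "exceptional" in another, which is why a per-region baseline matters.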

IMPROVING ACCURACY AND ROBUSTNESS OF AFFDEX METRICS

We leverage our facial video repository to train and retrain the Affdex facial expression classifiers. In effect, our technology works as a positive feedback loop, growing more intelligent every day by learning from more of its own data. To enable this, we've built the first version of an Active Learning system: software that automatically decides which data will help the classifiers improve most rapidly – machine learning with big data as we often imagine it.
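The post doesn't describe how Affdex's Active Learning selects data, but a common approach is uncertainty sampling: prioritize for human labeling the examples the current classifier is least sure about. The sketch below is a generic illustration of that idea under assumed inputs (entropy-based scores over hypothetical per-class probabilities), not Affectiva's implementation.

```python
import math

def uncertainty_scores(probs):
    """Entropy of each predicted class distribution; higher means less certain."""
    return [-sum(p * math.log(p) for p in dist if p > 0) for dist in probs]

def select_for_labeling(probs, k):
    """Indices of the k most uncertain samples, to be sent for human coding."""
    scores = uncertainty_scores(probs)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Hypothetical classifier outputs over three expression classes for four face videos
probs = [
    [0.98, 0.01, 0.01],  # confident prediction -> low labeling priority
    [0.40, 0.35, 0.25],  # uncertain -> high priority
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],  # near-uniform -> most uncertain of all
]
print(select_for_labeling(probs, 2))  # -> [3, 1]
```

Labeling the selected examples and retraining closes the feedback loop the paragraph describes: each pass through the repository spends annotation effort where it moves the classifiers the most.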

WHAT’S NEXT?

We believe this is only the beginning. Right now, we are sitting on a gold mine of rich stories about how the world responds to media on laptops, tablets, and smartphones – objects that are now the lifeblood of the digital world we live in. Today we capture a small snapshot of a person's response to content. Our vision is to digitize emotion to improve our daily lives. To do this, we have to capture longitudinal data about the emotional experiences of people around the globe and integrate it with contextual data. This opens up the possibility of new applications that capture emotion data and enrich our everyday digital experiences.