By: Ben Anding, Integration Engineer
As artificial intelligence matures, it is exciting to see machine learning become more accessible and portable. Soon, the fundamental way we interact with our machines is going to change.
Picture, if you will, a future in which your kitchen is also a chef. After grilling your vegetables and burger, it may grill you with questions about how you enjoyed your meal. The computer will collect emotional data from you alongside your verbal response and base future decisions on how you truly felt about its culinary skill.
With the advent of machine learning on mobile devices, the initial setup ritual of a new computer will look less like the process we know today and more like training a pet how to behave and respond to your commands.
Underlying emotion recognition capabilities will enhance the interaction between you and your device. This emotional layer, powered by machine learning algorithms, will allow your device to recognize your mood and react appropriately. Over time, the relationship you and your device share will make for an increasingly positive experience.
Speculation about the future is often rife with exaggerated optimism (or pessimism). Many who are reading this may be thinking that these ideas are the stuff of science fiction, or what one might call "pipe dreams." If so, they may be surprised to learn that technologies like the ones described above are possible now. In fact, scientists, engineers, and makers are working around the clock to ensure that this seemingly distant tomorrow is accessible today.
We here at Affectiva provide easy-to-integrate SDKs and APIs that empower developers, doers, and dreamers to create products and experiences that are emotion-aware. We recently released the third iteration of our SDKs (Version 3.0), which gives machines access to accurately trained classifiers that recognize 7 high-level emotions, 15 nuanced facial expressions, and 14 emoji expressions, and can detect the presence or absence of glasses and identify gender, simply from imagery of a person's face.
The availability of the technology means nothing, though, without talented, creative minds using these tools to build apps, systems, and products that deliver an experience to an end user. To that end, we at Affectiva are extending an open invitation to the maker community to give our software a spin by downloading our SDK free for forty-five days. We will also host our #EmotionLab16 Hackathon March 4-6, 2016 at the Microsoft NERD Center in Cambridge, Massachusetts, for which seats have already sold out.
At Emotion Lab ’16 we will provide a variety of Internet of Things hardware for use during our hackathon, including Arduino development boards, Philips Hue light bulbs, a Nest thermostat, and more. We are excited that Beyond Verbal, the voice-driven emotion analytics company, is making its API available at our hackathon. Together with our Affdex SDK, this opens up limitless possibilities for emotion-enabling these devices. For inspiration, please read our previous blog posts Emotions as Feedback Versus Control and The Mood-Aware Internet of Things.
If you are participating, we look forward to seeing you at our hackathon. If you were unable to secure a spot, don’t let that stop you! If you know of any hackathons happening in your area, don’t hesitate to try out Affectiva’s software at your local event. You can begin developing with our emotion recognition software by signing up for a 45-day free evaluation license of our SDK. Simply fill out this short form to start right away. We are always interested in hearing feedback on our product, as well as about the interesting and exciting use cases you imagine and develop, so please contact us to tell us about your ideas and finished applications.