
The Rise of the Machines: Why the Research Industry’s Obsession with AI May Be Clouding Our Thinking

10.02.24


It wasn’t surprising that AI dominated the discussions at the ESOMAR Congress in Athens a few weeks ago. What was surprising, however, was the extent to which it seemed to overshadow almost everything else. By my count, at least half of the presentations focused explicitly on AI, and even those that didn’t still managed to mention it.

Without a doubt, how to effectively use AI is the hottest topic in the research industry—and beyond. But it’s not the only important topic. For instance, how many sessions discussed consumers’ views on climate change? Perhaps five out of more than 100. Responses to global conflict? Maybe a couple. That imbalance points to a growing obsession with AI—along with a lot of confusion about its place in the broader context of insights.

You might be wondering: Why isn’t someone who works for an AI company celebrating this? After all, Affectiva is an AI business and has been a pioneer in this space for over a decade. However, our real mission is delivering insight—and this is where my concerns lie.


Generative AI and Large Language Models (LLMs) undoubtedly offer huge benefits, such as delivering more data, faster, and at lower cost. Affectiva’s Emotion AI, for instance, reveals instinctive reactions that can’t be uncovered through surveys. But does more data automatically lead to better insights? I’m not so sure. Here’s why:

Generative AI: increasing depth of data

Generative AI was understandably a hot topic (we have written our own eBook on the subject here) with presentations showcasing its ability to enhance automated qualitative research. Phil Sutcliffe from Inca demonstrated how Gen AI can be used to generate more realistic, nuanced follow-up questions in open-ended surveys. In an era of participant disengagement, this holds real promise for adding depth to both qualitative and quantitative research.

Several presentations highlighted the use of AI to draft questionnaires and discussion guides. While some of this was promising and could streamline the early stages of research design, I would still question the extent to which AI can, in isolation, write a questionnaire that meets the nuanced and changeable briefs that research buyers write.

The synthetic data debate

More controversial was the use of AI to create synthetic data. Tools like agent-based modeling have long been used to fill in gaps in quantitative datasets. In certain contexts, this approach makes sense, especially when it's unfeasible to ask respondents every question. However, using LLMs to generate synthetic responses in qualitative or quantitative research was divisive—and for good reason.

While LLMs can mimic real responses, we should remember that these responses aren’t real. Imagine you’re about to spend millions on launching a new product. Would you base your decision on a group of students reading social media posts and then writing a fictional post in the style of a consumer? That’s essentially what we’re doing when we use synthetic responses.

It’s interesting, yes—but does it truly reflect the views of your target market? Maybe. But it’s just as likely to reflect the quirks of the AI model. As Vivienne Ming, a 25-year AI veteran who closed the conference, aptly said, “I wouldn’t let one of those models write a single word of a tweet for me.” Why? Because LLMs rely on known commonalities, not novelty or creativity. And, as several presenters acknowledged, synthetic responses tend toward generalization, hyperbole, and cliché.

Synthetic data has its place in certain industries, but in market research—an industry built on understanding real people—data from real people should remain the most valuable. As all AI businesses know, models only work if the ground truth data is strong. So I agree with ESOMAR President Ray Poynter’s suggestion that synthetic data should always be clearly identified as such. We don’t want buyers left unsure whether the data they’re getting is authentic.


Drawing out insights

Most of the papers at ESOMAR focused on how AI can generate more data, faster, from harder-to-reach audiences. What was striking, however, was the lack of focus on how AI can actually interpret that data to generate meaningful insights.

Sure, LLMs can summarize text, but summarizing is not the same as insight. Human researchers bring a deep understanding of cultural and social context—something that AI, by its nature, cannot fully grasp. AI models are trained to mimic understanding, not to actually understand.

Perhaps this is why we see few examples of LLMs writing compelling quantitative research reports or generating novel ideas from data. AI can accelerate data collection, but it’s humans who must remain central to interpreting that data and drawing out the insights that truly matter.

Conclusion

AI is already transforming the research process, making it faster and unlocking data that was previously hard to access. Affectiva’s Emotion AI technology is a great example of this progress. However, we keep humans in the loop to provide the critical contextual understanding that AI alone cannot deliver. While some aspects of research will inevitably be automated, it’s essential that humans remain at the heart of generating real insights from data, and acting on them.
