AI is changing the way we work, play, interact, drive and ultimately live our lives. It’s stepping in to fill roles traditionally held by people, and it holds enormous potential to make us happier, healthier and more productive. But amidst the promise of AI, there are significant implications for privacy and ethics. AI that is designed to closely interact with people will “know” a lot about us -- and we, as consumers, may not be okay with that in all instances. Where should the tech world draw the line? And how can the ecosystem ensure that AI benefits people without aggravating inequality or replicating societal biases?
AI ethics will be one of the major themes explored at Affectiva’s 2019 Emotion AI Summit. During one of the panels, titled “How Do We Realize Ethical AI? The Ethical Development + Deployment of AI,” we’ll have an open dialogue on actionable steps that organizations and individuals must take to ensure that AI is transparent, ethical and equitable. Panelists will include Terah Lyons, the Executive Director of the Partnership on AI, Heather Patterson, Ph.D., J.D., Senior Researcher at Intel Labs, and Deena Shakir, Partner at Lux Capital, with WIRED senior writer Will Knight moderating the discussion.
We caught up with one of these panelists, Terah Lyons, to get her take on this year’s Summit theme, human-centric AI, and the implications as the technology plays an increasingly large role in our lives. Her organization, the Partnership on AI, brings together academics, researchers, civil society organizations and technology companies to advance the public understanding of AI and develop best practices for how AI should be developed and deployed with fairness and transparency.
Read on for her thoughts:
What have been the biggest changes you’ve witnessed in AI?
That’s a big question. The field has obviously been through some significant transitions in the last several years. One of the largest, I think, is the scale of attention now being paid to interdisciplinary ways of understanding AI-based technologies and their impact. It has been really heartening to see the traditional AI research field evolve beyond the technical research domain and embrace other ways of understanding how humans and technology interact, drawing on work done by scholars and experts in science and technology studies, humanities and social studies, and other fields and methods. And this transition has obviously informed the way the technology sector itself has evolved; AI can’t be divorced from the contexts in which it is built and into which it is deployed. Hopefully we continue to meaningfully grapple with that.
Where do you see Human-Centric AI in five years? Ten?
AI is already integrated into so many products and services in daily use by a significant number of people. I think we’ll see AI increasingly make decisions for and with us -- that could either be a good thing, or it could be very bad. I’m hopeful that it will increase individual productivity and enhance the quality of people’s lives, including in specific, high-potential domains like healthcare and other areas ripe for scientific and structural advancement. We’ll have to work carefully to ensure that these gains don’t create disparate impacts, however, and that the benefits of AI are distributed widely instead of just to a privileged few.
As Human-Centric AI continues to evolve, what do you believe will be the biggest impact on consumer behavior?
I don’t work in consumer technology, but I have to assume, based on what we’re seeing in the field, that more and more people will allow companies to collect fairly intimate data about them -- their lives and daily patterns -- and that the data will increasingly be used to “assist” us in making decisions about what to buy, what to eat, where to go, and more. Algorithms, for some, may help alleviate some of the decision fatigue of daily life, even as the world around us gets more and more complicated. Hopefully these technologies also help us navigate the world more effectively -- whether that’s the healthcare system, our daily commute to work, or otherwise.
There are some pretty significant concerns about ethics and privacy violations involved with AI. How do you see these concerns being addressed in the future?
Again, I am optimistic about the attention being paid to these issues in this moment. I am hopeful that considering these issues carefully as a regular part of the product development lifecycle or scientific process starts to become the norm. Practically, we need to build the right processes, systems, and decision frameworks within organizations in order to do that -- and also to ensure that those systems reflect a long history of work and lessons that came before AI and can be applied again in these circumstances.
What are you most excited to see from Human-Centric AI in the future?
I’m looking forward to AI that serves all of us -- though we have a long way to go. I’m also really excited about the potential AI has to assist with scientific advancement. This type of technology won’t necessarily be consumer-facing, but it will result in breakthroughs at a scale and pace that we haven’t been able to achieve before, thanks to an increasingly advanced capacity to process and analyze data. There are some organizations doing groundbreaking work on using machine learning to understand protein folding, for example, which will fuel drug discovery for diseases that we haven’t yet been able to cure. AI will bring some of those grand challenges within our grasp.
Interested in learning more? Visit www.partnershiponai.org, and don’t miss Terah’s panel at the 2019 Emotion AI Summit!