Rana el Kaliouby is CEO and co-founder of Affectiva, an AI-based company that aims to bring emotional intelligence to the digital world – to create machines that understand human emotion.

El Kaliouby is a former research scientist at the MIT Media Lab, holds a Ph.D. from the University of Cambridge and will be a featured speaker at True North, Communitech’s “tech for good” conference taking place May 29-31.

Affectiva’s product, Emotion AI, aims to “bring emotions back to our digital experience.” El Kaliouby took a few minutes recently to answer some questions for Communitech News about her company’s mission, and about her plans for True North.

Q – Your focus, and your company’s focus, has been to bring “emotion to the digital experience,” to quote from a talk you gave several years ago. I wonder if the absence of emotion, empathy, and other uniquely human attributes has played a role in the fall from grace of so many tech companies and tech executives in the past year or so. Is tech’s problem tech itself – that it’s too algorithm-based rather than emotion- or people-based?

A – Today, technology is very smart, and has a lot of IQ. But a number of years ago, I realized that it was lacking emotional intelligence, or EQ. I don’t think the problems in the industry can be solved entirely by Emotion AI, but I do think that technology that helps us reconnect with ourselves, and gain empathy – including Emotion AI – can help to rehumanize the interactions we have with technology, and in turn, the interactions between tech companies and consumers, and people as a whole.

Q – For those who aren’t familiar with your company and what it does, can you briefly update us about Affectiva and Emotion AI?

A – Affectiva is a category-defining company and the pioneer of artificial emotional intelligence, or Emotion AI, as we call it. We spun out of the MIT Media Lab in 2009, and it all started with a vision to re-humanize our interactions with technology and devices. How much better would our interactions with devices be if they could tell how we were feeling, and react accordingly? For example, what if my fridge sensed that I was stressed and suggested my favorite dessert? Or if my phone saw I was happy and told me a joke?

We started with Emotion AI for facial expressions of emotion. We’ve compiled the world’s largest emotion data repository, having analyzed more than six million faces in 87 countries around the world. Last year, we also launched an API to detect emotion in speech, getting us one step closer to multi-modal Emotion AI – or AI that can detect emotions based on multiple channels. This is incredibly important, as only seven per cent of what we communicate is conveyed through actual words.
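
To make “multi-modal” concrete, here is a minimal sketch of one common approach, late fusion, where per-channel emotion scores are combined into a single estimate. The classifier functions, emotion labels, and weights below are hypothetical placeholders for illustration, not Affectiva’s actual SDK:

```python
# Hypothetical sketch of late-fusion multi-modal Emotion AI.
# face_scores and speech_scores stand in for per-channel classifiers;
# they are illustrative placeholders, not Affectiva's actual API.

EMOTIONS = ["joy", "anger", "surprise", "sadness"]

def face_scores(frame):
    """Placeholder: return emotion probabilities from a video frame."""
    return {"joy": 0.70, "anger": 0.05, "surprise": 0.20, "sadness": 0.05}

def speech_scores(audio):
    """Placeholder: return emotion probabilities from a speech segment."""
    return {"joy": 0.50, "anger": 0.10, "surprise": 0.30, "sadness": 0.10}

def fuse(frame, audio, w_face=0.6, w_speech=0.4):
    """Late fusion: weighted average of the per-channel scores."""
    face, speech = face_scores(frame), speech_scores(audio)
    return {e: w_face * face[e] + w_speech * speech[e] for e in EMOTIONS}

fused = fuse(frame=None, audio=None)
print(max(fused, key=fused.get))  # -> "joy" with the stubbed scores
```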

The applications of Emotion AI span industries including healthcare, market research, advertising, social robotics, conversational interfaces, and many more. At Affectiva, we’re increasingly focusing on the automotive space as well, as we see the potential for Emotion AI to help make driving safer and more personalized, especially as semi-autonomous and autonomous vehicles come to the fore. To that end, we recently launched Affectiva Automotive AI: the first multi-modal, in-cabin AI sensing solution that can identify complex and nuanced emotional and cognitive states of a vehicle’s occupants from face and voice in real time.

This will allow original equipment manufacturers (OEMs) and Tier 1 suppliers to build advanced driver monitoring systems that can detect signs of driver distraction to potentially save lives, and eventually allow the travel experience to adapt to occupants’ moods and reactions.
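
As a rough illustration of how a driver monitoring system might act on in-cabin signals like these, here is a small sketch; the sensing call, state names, and thresholds are invented for the example and are not Affectiva Automotive AI’s actual interface:

```python
# Illustrative driver-monitoring loop. The sensing call, state names, and
# thresholds are invented for this sketch, not Affectiva Automotive AI's
# actual interface.

DROWSINESS_THRESHOLD = 0.8   # hypothetical alert cutoffs
DISTRACTION_THRESHOLD = 0.7

def read_driver_state():
    """Placeholder for an in-cabin sensing call returning state scores."""
    return {"drowsiness": 0.85, "distraction": 0.30}

def monitor_once() -> str:
    state = read_driver_state()
    if state["drowsiness"] > DROWSINESS_THRESHOLD:
        return "ALERT: drowsiness detected, suggest taking a break"
    if state["distraction"] > DISTRACTION_THRESHOLD:
        return "ALERT: eyes off the road, sound a chime"
    return "ok"

print(monitor_once())  # -> drowsiness alert with the stubbed scores
```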

Q – Can you envision a machine’s “emotional intelligence” playing a role in tamping down Twitter’s trolls or Facebook’s fake news? Can teaching emotion to a machine play a role in improving our experience with technology in general?

A – I believe that injecting our devices with the ability to tell how we’re feeling will improve not just the way we interact with technology but, ultimately, the way we interact with each other. For example, if you’re talking to Amazon Alexa and she doesn’t understand you, today she might just say “I’m sorry, I don’t understand.” But with Emotion AI, Alexa would be able to sense your frustration, and perhaps adapt her behavior accordingly. When it comes to interacting with one another, Emotion AI can help deepen communication as we increasingly talk to one another through screens. So much of what we’re saying, how we say it, and the feeling behind it is lost in today’s digital experiences. I believe Emotion AI will bring some of that humanity back – to the internet, social media, and other places where our communication is breaking down.
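
A toy sketch of what such an emotion-aware fallback might look like for a voice assistant; the frustration score and the replies are invented for illustration and don’t reflect Alexa’s or Affectiva’s actual behavior:

```python
# Toy sketch of an emotion-aware fallback response for a voice assistant.
# The frustration score and replies are invented for illustration.

def fallback_reply(frustration: float) -> str:
    """Pick a reprompt based on a detected frustration score in [0, 1]."""
    if frustration > 0.7:
        return "Sorry, I'm clearly not getting this. Want me to get you a person?"
    if frustration > 0.3:
        return "Sorry about that. Could you try rephrasing?"
    return "I'm sorry, I don't understand."

print(fallback_reply(0.8))  # -> the most apologetic reprompt
```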

Q – You mention above the benefit of having artificial intelligence assist people in tasks like driving vehicles – that AI can recognize when we’re drowsy or distracted or not paying attention to the road, and provide an alert before an accident takes place. Everyone would recognize that as a benefit. At the same time, many people would resist the intrusive nature of the technology – being “watched” or monitored by an app. How do you get people comfortable with the intrusion?

A – Privacy is very important to us at Affectiva. My view is that there has to be value in it for the rider or driver. In the semi-autonomous world, turning off driver monitoring won’t be an option, because safety depends on it; it will be mission critical if you want autonomous driving. In collecting data so far, we’ve been clear that the recording covers the driver and occupants; we’ve required consent, and made it clear that a camera is present. In deployment mode, the data isn’t recorded: the technology runs on-device, in real time, so there’s no concern about the data being stored or collected elsewhere. This is an issue we’ll continue to consider proactively as the technology reaches production, to ensure that people’s right to privacy comes first and foremost.
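
A minimal sketch of the on-device pattern she describes, assuming a hypothetical per-frame model: each frame is analyzed in memory and discarded, and only the derived scores survive, with no raw video written to disk or sent over a network:

```python
# Sketch of privacy-preserving on-device inference: each frame is analyzed
# in memory and immediately discarded; only derived emotion scores survive.
# analyze_frame is a hypothetical stand-in for an on-device model.

def analyze_frame(frame) -> dict:
    """Placeholder for an on-device emotion model; returns derived scores only."""
    return {"distraction": 0.2, "drowsiness": 0.1}

def process_stream(camera_frames):
    latest_scores = None
    for frame in camera_frames:
        latest_scores = analyze_frame(frame)  # raw frame stays in memory only
        del frame                             # nothing written to disk or network
    return latest_scores

print(process_stream([object(), object()]))  # -> only the derived scores remain
```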

Q – The question above ultimately leads to the debate about AI as a force capable of improving human life versus a force to be feared – that machines will inevitably usurp their human masters. Where are you on that question?

A – It’s no secret that the notion of increasing automation and AI is of concern for many, who wonder how these technologies will impact jobs, relationships and the like. But if harnessed for good, these technologies can significantly enhance our lives, giving way to new kinds of jobs and opportunities, and more personalized connections between people, our devices and the world around us.

We believe that regulation and legislation on the ethical and transparent use of data are inevitably needed, in technology generally and in AI specifically. The industry needs to come together to set guidelines and best practices for what ethical and transparent AI looks like. To that end, Affectiva has joined the Partnership on AI to help advocate for this.

Q – What are your plans for True North? Can you give us a sneak preview of what you’ll talk about?

A – I am excited to speak at True North. True North is all about the intersection of humanity and technology in today’s world and in the future, and what it means to be human in a tech-driven world. This is in essence what my work is about: how do we engineer empathy in a way that connects us rather than divides us? My talk will explain why we need artificial emotional intelligence, or Emotion AI, and I plan to demo the latest AI technology that reads your facial expressions of emotion. This “emotion engine,” or Emotion AI, has big implications, and could change not just how we interact with machines, but with each other. Today, Emotion AI is disrupting a number of industries, from media and advertising to healthcare, education and automotive. My talk will showcase some of these disruptions. I will also cover ethics in Emotion AI, as that is a very important topic.