Soon, AI assistants will recognize and respond to the emotion in your voice

Seen the movie “Why Him”? Funny movie, right? Well, that could be the future of AI in no distant time. I liked the fact that I saw Elon Musk in it, though… could that really be him? 😀

Previously, emotion-tracking AI company Affectiva developed facial recognition technology for identifying how you’re feeling based on pictures. The company has now moved a step further and is doing the same thing for your voice.

We all know it’s easy to tell when someone is angry from the way they speak or the sound of their voice, right? Well, very soon smart assistants such as Amazon’s Alexa or Apple’s Siri will be able to tell that too, that is, if these companies decide to adopt the new technology developed by emotion-tracking artificial intelligence company Affectiva.

As mentioned earlier, Affectiva’s work has focused on identifying emotion in images by observing how a person’s face changes when they express particular sentiments. The AI company’s latest technology detects emotion in speech instead. Built using deep learning, it observes changes in tone, voice quality, volume and speed, and uses these cues to recognize emotions and events such as anger, laughter and arousal in recorded speech.
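To make those cues concrete, here is a minimal sketch (assuming nothing about Affectiva’s actual pipeline) of two crude per-frame proxies for the signals mentioned above: volume, measured as RMS energy, and voice quality, approximated by the zero-crossing rate. Both frame length and the synthetic test signal are illustrative choices, not anything from the product.

```python
import numpy as np

def prosodic_features(signal, frame_len=1024):
    """Per-frame proxies for vocal cues: RMS energy (loudness)
    and zero-crossing rate (a rough noisiness/voice-quality cue)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))                    # loudness proxy
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)   # noisiness proxy
    return np.stack([rms, zcr], axis=1)

# Synthetic "speech": a quiet tone followed by a loud noisy burst
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 220 * t)
loud = 0.8 * np.random.default_rng(0).standard_normal(sr)
feats = prosodic_features(np.concatenate([quiet, loud]))
# The loud, noisy half shows higher energy and higher zero-crossing rate
```

A real system would feed features like these (and many more) into a deep network rather than reading them off directly.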

According to Rana el Kaliouby, co-founder and CEO of Affectiva, “The addition of Emotion AI for speech builds on Affectiva’s existing emotion recognition technology for facial expressions, making us the first AI company to allow for a person’s emotions to be measured across face and speech.” She went on to say, “This is all part of a larger vision that we have. People sense and express emotion in many different ways: through facial expressions, voice, and gestures. We’ve set out to develop multi-modal Emotion AI that can detect emotion the way humans do, from multiple communication channels. The launch of Emotion AI for speech takes us one step closer.”

The data collected by Affectiva was labeled by human experts for occurrences of what the company calls “emotion events.” These human-generated labels were used to train and validate the team’s deep learning models, which over time learned how certain shifts in a person’s voice indicate a particular emotion.
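The train-on-labeled-events idea can be sketched in a few lines. Everything below is hypothetical: the feature names, label set and values are invented for illustration, and a nearest-centroid classifier stands in for Affectiva’s deep learning models just to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented toy features: [volume, speech rate], one row per labeled clip
anger    = rng.normal([0.8, 0.9], 0.05, size=(50, 2))   # loud, fast
laughter = rng.normal([0.6, 0.4], 0.05, size=(50, 2))
neutral  = rng.normal([0.3, 0.5], 0.05, size=(50, 2))

X = np.vstack([anger, laughter, neutral])
y = np.array([0] * 50 + [1] * 50 + [2] * 50)   # the human-expert labels
labels = ["anger", "laughter", "neutral"]

# "Training": learn one centroid per labeled emotion event class
centroids = np.stack([X[y == k].mean(axis=0) for k in range(3)])

def predict(features):
    """Assign a new clip to the nearest learned centroid."""
    dists = ((centroids - np.asarray(features)) ** 2).sum(axis=1)
    return labels[int(np.argmin(dists))]

print(predict([0.82, 0.88]))  # a loud, fast utterance → "anger"
```

The point of the sketch is the workflow, not the model: human labels define the targets, and the model generalizes from them to new voices.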

This seems like a promising idea: it could allow automated assistants to change their approach when they hear anger or frustration from a user, and to learn which responses trigger the best reactions and repeat those strategies.
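That feedback loop could look something like the following sketch. This is purely speculative, not any shipping assistant’s behavior: the strategy names and scoring scheme are made up, and the assistant simply keeps a running score per (emotion, strategy) pair and repeats whichever strategy has worked best so far.

```python
from collections import defaultdict

class AdaptiveAssistant:
    """Toy assistant that learns which response strategy gets the
    best reaction for each detected emotion, and repeats the winner."""

    def __init__(self, strategies):
        self.strategies = strategies
        # scores[emotion][strategy] -> cumulative reaction score
        self.scores = defaultdict(lambda: defaultdict(float))

    def pick(self, emotion):
        tried = self.scores[emotion]
        if not tried:
            return self.strategies[0]          # no data yet: default
        return max(self.strategies, key=lambda s: tried[s])

    def feedback(self, emotion, strategy, reaction):
        self.scores[emotion][strategy] += reaction  # +1 good, -1 bad

bot = AdaptiveAssistant(["calm", "cheerful"])
bot.feedback("anger", "calm", +1)      # calm reply defused the user
bot.feedback("anger", "cheerful", -1)  # cheerfulness made it worse
print(bot.pick("anger"))  # prints "calm"
```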

Affectiva is an emotion measurement technology company that grew out of MIT’s Media Lab and has developed a way for computers to recognize human emotions based on facial cues or physiological responses. Among its commercial applications, this emotion recognition technology is used to help brands improve their advertising and marketing messages. Another major application has been in political polling. In 2011, the company partnered with Millward Brown, which is itself part of the Kantar Group, the market research, insight and consultancy division of WPP plc, a London-based public company.
