Affectiva has made its new cloud-based API for measuring emotion in recorded speech available to beta users, part of its aim to build a multi-modal Emotion AI platform that can distinguish emotions across multiple communication channels. The API will help speech classifiers identify emotions in real time and in conversation.
The new API was developed using an existing deep-learning framework alongside expert data-collection and labelling methodologies. Combined with Affectiva's existing emotion-recognition technology for analysing facial expressions, it allows a person's emotions to be measured across both face and speech.
Dr. Rana el Kaliouby, co-founder and CEO, Affectiva, said: “More often than not, humans’ interactions with technology are transactional and rigid. Conversational interfaces like chatbots, social robots or virtual assistants could be so much more effective if they were able to sense a user’s frustration or confusion and then alter how they interact with that person. By learning to distinguish emotions in facial expressions, and now speech, technology will become more relatable, and eventually, more human.”
In a similar vein, Chinese AI-based education company Liulishuo has developed an automatic assessment engine for spoken and written English, drawing on a vast collection of recordings of Chinese people speaking English. The company also recently raised $100 million in a Series C funding round to develop its existing smart teaching platform.