EMOS is infusing conversational artificial intelligence with emotional intelligence, providing a social interface between consumers and software.
Our technology is the product of three decades of world-class development in multilingual speech, text, and affect (emotion, sentiment, personality) understanding.
Deep learning frameworks provide real-time multimodal affect processing for your devices, revolutionizing the concept of user experience.
As we interact more frequently with voice- and gesture-controlled machines, we will expect them to recognize emotion and understand high-level communication features such as humor, sarcasm, and intent. To make such communication possible, we need to endow machines with an empathy module: a software system that can extract emotional cues from human speech, text, and behaviour, and guide a robot's response accordingly.
Our Virtual Agents
Embed your product with emotionally responsive virtual agents to manage the entire interactive experience.
A virtual interviewer equipped with in-depth natural language understanding for effective user interaction.
Through dialogue, Zara works to understand you. Extracting user insight over time, Zara combines personality recognition, spoken language understanding, and expressive text-to-speech to provide a custom experience for each user.
A virtual assistant that can provide music recommendations based on users’ preferences and mood.
By interacting with you, Emi is able to understand your moods, likes, and dislikes. With this understanding, Emi will share music and ambient lighting to enhance any mood, situation or feeling. Emi will also set alarms, report on weather conditions, and much more to help you start your day on the right foot with the music you love.
Spoken Language Understanding
Access state-of-the-art natural language understanding techniques to extract entities, identify intents, and track context over time.
Enable your device to understand the affective state of each user and craft unique interactions for every one of them.
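As a minimal sketch of what intent and entity extraction with context tracking might look like from an application's point of view — the function names, patterns, and fields below are illustrative assumptions, not a published EMOS API — consider this toy rule-based version (a production system would use trained deep-learning models instead):

```python
import re

# Hypothetical intent patterns; a real system would use a trained classifier.
INTENT_PATTERNS = {
    "play_music": re.compile(r"\b(play|put on)\b", re.I),
    "set_alarm": re.compile(r"\b(wake me|set an? alarm)\b", re.I),
    "get_weather": re.compile(r"\b(weather|forecast)\b", re.I),
}

# Toy entity vocabulary for illustration.
GENRES = {"jazz", "rock", "classical", "pop"}

def understand(utterance, context=None):
    """Return the intent and entities for an utterance.

    If no intent matches, fall back to the intent tracked in the
    dialogue context — a simple form of context tracking over time.
    """
    context = context or {}
    intent = next(
        (name for name, pat in INTENT_PATTERNS.items() if pat.search(utterance)),
        context.get("last_intent", "unknown"),
    )
    entities = {"genre": g for g in GENRES if g in utterance.lower()}
    return {"intent": intent, "entities": entities}

print(understand("Play some jazz, please"))
# {'intent': 'play_music', 'entities': {'genre': 'jazz'}}
```

A follow-up like "louder, please" carries no intent keyword, so the extractor falls back to the last tracked intent — the kind of contextual carry-over a dialogue system needs.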
Automatic Speech Recognition
Employ a model that has been trained with a massive dataset of varied inflections and intonations to obtain the most accurate representation of user input.
Generate responses that are not just semantically appropriate, but also convey emotional understanding through volume, pitch, and intonation.
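A toy illustration of how a detected user emotion might shape the prosody of a spoken response — the emotion labels, field names, and values are assumptions made for this sketch, not a documented interface:

```python
# Hypothetical mapping from detected user emotion to prosody settings
# for expressive text-to-speech; the numbers are illustrative only.
PROSODY = {
    "sad":     {"volume": 0.6, "pitch": -2, "rate": 0.9},  # softer, lower, slower
    "happy":   {"volume": 0.9, "pitch": 2,  "rate": 1.1},  # brighter, livelier
    "neutral": {"volume": 0.8, "pitch": 0,  "rate": 1.0},
}

def expressive_reply(text, emotion):
    """Attach prosody cues to a semantically chosen response.

    Unknown emotions fall back to a neutral delivery.
    """
    style = PROSODY.get(emotion, PROSODY["neutral"])
    return {"text": text, **style}

print(expressive_reply("Here's something calming.", "sad"))
```

The design point is the separation of concerns: one component chooses *what* to say, while the affect module adjusts *how* it is said through volume, pitch, and speaking rate.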
At EMOS we have set a new standard for human-computer interaction. Our approach is designed to provide a unique, context-driven interaction between the user and the software. In doing so, we have brought a completely new dimension to user experience. We provide you with powerful deep-learning APIs and SDKs to enrich your devices with personality and emotional intelligence.
We're masters of design and user experience, software engineers, and entrepreneurs. We believe that the ability to understand and emulate the complexity of social interaction is the cornerstone of intelligent systems. If you're interested in joining our incredibly diverse team, please reach us at:
Founder, Chief Scientist
Pascale is a Professor in the Department of Electronic & Computer Engineering at The Hong Kong University of Science & Technology. She is an elected Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions”, and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. She has co-founded several startups, launching the first Chinese natural language search engine (2001), the Best Input Method (Internet World), and the first Chinese smartphone virtual assistant (2010).
Co-Founder, Head of Product
Anik is the Head of Product at EMOS. He holds BEng and MPhil degrees in Electronic and Computer Engineering from HKUST and is focused on translating his extensive research in deep learning for speech and emotion recognition into immersive and engaging human-computer interactive systems. Anik served as project lead for Moodbox, the world's first smart speaker to employ emotionally intelligent design for a highly customized user experience, and for Zara the Supergirl, an interactive dialogue system that detects a user's emotion and personality from natural conversation, which debuted at the "Robots in Action" exhibition at the World Economic Forum in 2015.
Ricky Liu Zhixiong
Ricky is the Chairman and Founder of 3NOD Digital Group in Shenzhen, which he founded in 1996. With more than 20 years of experience in company management, he is an accomplished entrepreneur who has been recognized among the “Top 10 Businessmen of the Year”, “China’s Most Socially Responsible Entrepreneurs”, and “Top 10 Outstanding Youths of Shenzhen City”. Ricky received his MBA from Royal Roads University in Canada.