Researchers at the Massachusetts Institute of Technology (MIT) have developed a system that uses a wearable device and artificial intelligence to detect whether the tone of a conversation is happy, sad, or neutral based on a person's speech patterns and physiological activity.
The prototype uses a Samsung Simband to collect physiological data, such as movement, heart rate, blood pressure, and skin temperature, while audio data is captured to analyze the speaker's tone, pitch, energy, and vocabulary during conversation. A neural network processes the mood of a conversation across five-second intervals, providing a "sentiment score."
The team trained two algorithms using data gathered from 31 conversations of several minutes each. One algorithm classified the overall mood of a conversation as either happy or sad, and the other labeled each five-second increment as positive, negative, or neutral.
The algorithms correctly identified several indicators of mood. For example, long pauses and monotonous vocal tones were associated with sad stories, while energetic and varied speech patterns correlated with happier stories. On average, the model labeled the mood of each five-second interval with an accuracy 18% above chance and 7.5% better than existing approaches.
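The indicators described above can be illustrated with a toy sketch. The feature names, thresholds, and scoring rule here are invented for illustration only; the actual MIT system uses a neural network trained on far richer audio and physiological features.

```python
# Toy illustration of per-window mood labeling, loosely modeled on the
# indicators reported above. All thresholds and feature names are
# hypothetical, not taken from the MIT system.

def label_window(pause_ratio: float, pitch_variance: float, energy: float) -> str:
    """Label one five-second window as positive, negative, or neutral.

    pause_ratio:    fraction of the window that is silence (0..1)
    pitch_variance: variance of the pitch track (monotone speech -> low)
    energy:         mean vocal energy in the window (0..1)
    """
    # Long pauses and monotone delivery were associated with sad stories.
    if pause_ratio > 0.5 and pitch_variance < 0.2:
        return "negative"
    # Energetic, varied speech correlated with happier stories.
    if energy > 0.6 and pitch_variance > 0.5:
        return "positive"
    return "neutral"

def sentiment_score(windows):
    """Aggregate per-window labels into one conversation-level score in [-1, 1]."""
    values = {"positive": 1, "neutral": 0, "negative": -1}
    return sum(values[label_window(*w)] for w in windows) / len(windows)
```

For example, a conversation whose windows are mostly long-paused and monotone would score near -1, while one with energetic, varied speech would score near +1.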
From MIT News
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA