Researchers at Brown University used a brain-computer interface to reconstruct English words from neural signals recorded in the brains of rhesus macaque monkeys.
The researchers recorded neural activity while the primates listened to recordings of individual one- or two-syllable English words and of macaque calls.
The team processed the recordings with algorithms designed to recognize neural patterns associated with particular words, then translated the decoded neural data into computer-generated speech.
Of the algorithms tested, recurrent neural networks produced the highest-fidelity reconstructions.
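The article does not detail the decoding pipeline, but the general approach it describes, training a recurrent network to map recorded neural activity to an acoustic representation that can then be synthesized into audio, can be illustrated with a minimal Python (PyTorch) sketch. Every detail below, including the channel count, the mel-spectrogram target, and the class and variable names, is an assumption for illustration only, not the study's actual method.

# Illustrative sketch only (not the authors' code): a recurrent network
# mapping windows of multi-channel spike counts to spectrogram frames
# that a vocoder could turn into audio. All dimensions are assumptions.
import torch
import torch.nn as nn

class RNNDecoder(nn.Module):
    def __init__(self, n_channels=96, hidden=256, n_mel_bins=80):
        super().__init__()
        # GRU reads the time series of binned spike counts per electrode.
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        # Linear readout predicts one spectrogram frame per time step.
        self.readout = nn.Linear(hidden, n_mel_bins)

    def forward(self, spikes):            # spikes: (batch, time, channels)
        features, _ = self.rnn(spikes)    # (batch, time, hidden)
        return self.readout(features)     # (batch, time, mel bins)

# Toy training step on random data, just to show the fitting loop.
model = RNNDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spikes = torch.randn(8, 100, 96)          # 8 trials, 100 time bins, 96 channels
target = torch.randn(8, 100, 80)          # matching mel-spectrogram frames
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(spikes), target)
loss.backward()
optimizer.step()

In practice, the predicted spectrogram frames would be passed to a separate vocoder to produce the computer-generated speech mentioned above; that step is omitted here.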
Brown's Arto Nurmikko said, “The same microelectrodes we used to record neural activity in this study may one day be used to deliver small amounts of electrical current in patterns that give people the perception of having heard specific sounds.”
From Brown University