New York, Jan 30 (IANS) US engineers have developed an Artificial Intelligence (AI)-enabled
system that can translate brain signals into intelligible speech, a breakthrough that may help those who cannot speak to communicate with the outside world.
The study, led by Columbia University researchers, showed that by monitoring one’s brain activity, an AI-enabled technology can reconstruct words a person hears with unprecedented clarity, Xinhua news agency reported.
A team of neuroscientists from the varsity trained a voice synthesiser, or vocoder, on the brain activity patterns of epilepsy patients already undergoing brain surgery while those patients listened to sentences spoken by different people.
The patients also listened to speakers reciting digits from zero to nine while their brain signals were recorded and fed through the vocoder.
The researchers then used a neural network, a type of artificial intelligence, to analyse those signals, producing robotic-sounding voices, according to the study published in the journal Scientific Reports.
“We found that people could understand and repeat the sounds about 75 per cent of the time, which is well above and beyond any previous attempts,” said Nima Mesgarani from the varsity.
Previous research had shown that when people speak, or even imagine speaking, distinct patterns of activity appear in their brains, and that those patterns of signals also emerge when people listen to someone speak or imagine listening.
Mesgarani and his team now plan to test more complicated words and to run the same tests on brain signals recorded when a person speaks or imagines speaking.
Mesgarani called it a “game changer” that may give anyone who has lost their ability to speak a new chance to connect to the outside world.