As detailed in Frontiers in Neuroengineering, a team of American and German neuroscientists led by Stéphanie Martin at UC Berkeley and Peter Brunner at the New York State Department of Health has made significant progress in reading imagined speech from brain activity.
To achieve this remarkable feat, the researchers first instructed participants to read text from a computer screen while recording electrical activity from their brains. The recordings were made using a technique called electrocorticography (ECoG), in which electrode arrays are implanted directly on the surface of the brain in patients awaiting brain surgery. These ECoG recordings were matched with simultaneous audio recordings of the patients reading the text out loud.
By identifying patterns of brain activity that correlated with particular speech sounds, the researchers were then able to predict what new voice recordings would sound like from brain activity alone. This allowed them to reconstruct the imagined inner voice of participants instructed to read the text silently. (Listen here for an example of speech reconstructed from brain activity, from earlier research.)
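For readers curious about what this kind of decoding looks like in practice, the sketch below illustrates the general idea in Python: fit a regularized linear model that maps ECoG features to audio spectrogram frames during overt speech, then apply it to activity recorded during silent reading. This is only an illustration under assumed inputs (the arrays ecog_overt, audio_spec, and ecog_covert are hypothetical placeholders), not the authors' actual pipeline.

```python
# Minimal sketch of the decoding idea (not the authors' code): learn a linear
# mapping from ECoG features to audio spectrogram frames during overt speech,
# then apply it to brain activity recorded during silent reading.
# The data arrays here are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder data: time frames x electrodes, and time frames x spectrogram bins.
ecog_overt = rng.standard_normal((1000, 64))   # high-gamma power per electrode, spoken condition
audio_spec = rng.standard_normal((1000, 32))   # log spectrogram of the spoken audio
ecog_covert = rng.standard_normal((200, 64))   # activity recorded during silent reading

# Fit a regularized linear decoder on the overt-speech data.
decoder = Ridge(alpha=1.0)
decoder.fit(ecog_overt, audio_spec)

# Predict spectrogram frames from imagined speech; a vocoder could then
# turn these frames back into audible sound.
predicted_spec = decoder.predict(ecog_covert)
print(predicted_spec.shape)  # (200, 32)
```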
This builds significantly on Martin and Brunner’s previous research, which had already shown that speech could be reconstructed from the brain activity of participants who were talking out loud.
“A major inspiration [for this research] was the success that has been seen in developing neural prosthetics for restoring movement function—for example, operating a prosthetic limb by decoding brain signals from the motor cortex,” said Brian Pasley, a UC Berkeley postdoctoral fellow and corresponding author on the study. “The long-term goal would be to develop neural prosthetics that can restore communication by decoding signals from speech cortex.”
Progress in motor neural prosthetics has also been propelled in part by UC Berkeley researchers, who most recently developed a cursor-controlling device that reciprocally learns from the brain as the brain learns to control it.
Of course, the question remains: is this technology capable of decoding thoughts more generally? Pasley is hesitant to say yes. “A ‘thought’ may or may not be associated with an auditory imagery component—for example some people silently repeat words to themselves as they read, while others do not,” he says. “Our approach is only capable of picking up signals associated with auditory imagery. So we might pick up vivid sound imagery associated with a thought, but likely not the brain signals associated with the thought itself.”
Still, when put together with other recent neural decoding efforts by researchers at UC Berkeley and elsewhere—demonstrating, for example, that brain activity alone can be used to generate rough semblances of face images and film clips people are looking at—it seems only a matter of time before brain recording devices can be used to peer more holistically into the mind.
Engineers have already begun predicting what brain reading devices might be capable of in the future. A group of UC Berkeley engineers recently envisioned a future neural interface system that would rapidly accelerate progress in neural decoding—they call it neural dust, an array of thousands of microscopic sensors that could be sprinkled onto the surface of the brain. In addition to its prosthetic capabilities, a neural recording device with such resolution might make it possible to reconstruct memories, imagination, and even dreams.
Although this technology is in its infancy, scholars can already begin thinking more about the broader scientific and societal implications. Just be aware that some of those thoughts may be audible to others.