Scientists played Pink Floyd for people, then used their brain activity to recreate the song
Findings seen as crucial step for people who use speech-assistance technology to convey emotion and tone
Ludovic Bellier was thrilled when he first heard the muffled, barely decipherable version of Pink Floyd's Another Brick in the Wall (Part 1) generated from the brain activity of epilepsy patients.
Neuroscientists played the song for patients who had electrodes implanted in their brains. Then Bellier, a computational research scientist, trained a computer to analyze their recorded brain activity and recreate the tune — lyrics and all.
"When I heard the metallic quality of the sound, I was like, OK we're on to something," Bellier told As It Happens guest host Paul Hunter. "And then I heard the words."
It's far from a perfect copy. It has an almost eerie quality, as if it were recorded underwater. But if you're familiar with the song, you can pick it out.
"It's not like the clearest you could imagine, for sure," Bellier said. "But it proves that it's feasible to do such an approach, to reconstruct music from the neural activity."
The findings — published this week in the journal PLOS Biology — are being touted as a crucial step in understanding how our brains interact with music. And scientists say they could be used to create far more expressive speech devices for people who have difficulty communicating verbally.
'Song information is incredibly rich'
"This study is exciting," Jessica Grahn, a cognitive neuroscientist who was not involved in the study, told CBC in an email. "It's a real technical achievement to acquire these data, and to be able to reproduce such a complex auditory signal."
Grahn is a professor in the psychology department of Western University in London, Ont., where she studies the neuroscience of music.
"Song information is incredibly rich," she said. "There's a reason you can't just sing a tune into Google and have it identify the song for you, despite years of research into analyzing auditory signals."
The study was more than a decade in the making.
Neuroscientists at the Albany Medical Center in New York recorded data from the brains of 29 epilepsy patients between 2009 and 2015.
As part of their treatment, the patients had electrodes implanted in their brains to pinpoint the origin points of seizures.
Because the implants allowed a more in-depth look at brain activity than scientists can get from more common, less invasive methods, the participants also volunteered for other brain studies — including one mapping which regions of the brain respond to music.
So why Pink Floyd?
"It's just because everyone loves Pink Floyd," Bellier said. "The experimenters, and coders who recorded the data at the medical centre were fans of Pink Floyd. So that's why they chose this song."
Bellier joined the research more than a decade later while he was a post-doctoral fellow at the University of California, Berkeley.
He identified which parts of the brain were stimulated during the song and which frequencies they were reacting to, and trained computer models to analyze that data to recreate those frequencies.
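The approach Bellier describes — mapping recorded neural activity to the frequency content of the audio, then reconstructing the sound from that mapping — can be illustrated with a toy regression model. The sketch below uses simulated data and a simple ridge regression; the actual study worked with intracranial recordings and more sophisticated decoding models, so everything here (electrode counts, noise levels, the linear mapping itself) is an illustrative assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 electrodes, 32 spectrogram frequency bins, 2000 time steps.
n_elec, n_freq, n_time = 100, 32, 2000

# Hypothetical linear relationship between neural activity and the spectrogram
# of the song the patients heard (purely simulated for illustration).
true_weights = rng.normal(size=(n_elec, n_freq))
neural = rng.normal(size=(n_time, n_elec))               # recorded brain activity
spectrogram = neural @ true_weights                      # audio frequency content
spectrogram += 0.1 * rng.normal(size=spectrogram.shape)  # measurement noise

# Train a ridge regression (closed form) on the first half of the recording,
# then decode the spectrogram from held-out brain activity in the second half.
train, test = slice(0, n_time // 2), slice(n_time // 2, n_time)
X, Y = neural[train], spectrogram[train]
lam = 1.0  # regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(n_elec), X.T @ Y)

decoded = neural[test] @ W  # reconstructed spectrogram from unseen brain data

# Score the reconstruction: correlation per frequency bin, averaged.
r = [np.corrcoef(spectrogram[test][:, f], decoded[:, f])[0, 1]
     for f in range(n_freq)]
print(f"mean decoding correlation: {np.mean(r):.2f}")
```

In the real study, a decoded spectrogram like this would then be inverted back into an audio waveform — a lossy final step, which is part of why the reconstructed song has that muffled, underwater quality.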
'Expressive freedom' for those who need it
Robert Knight, a co-author of the study and a UC Berkeley neuroscientist, called the results "wonderful."
In 2012, Knight and his colleagues became the first to reconstruct the words a person was hearing from recordings of brain activity alone.
These new findings move that work forward. Knight says the advance will have major implications for people who have difficulty communicating verbally because of brain damage from neurological conditions, stroke or paralysis.
"As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who's got ALS or some other disabling neurological or developmental disorder compromising speech output," Knight said in a press release.
Current speech assistance devices have a robotic quality. Think of the late physicist Stephen Hawking.
That's because the technology can decode words, but not prosody — the tone, rhythm and intonation of speech. In other words, it captures what people want to say, but not how they say it.
Bellier calls this "expressive freedom."
This study is just one of a myriad of ways the neuroscience of music is helping to paint a fuller picture of how the brain works.
Grahn currently studies the links between music and movement, with the goal of using music to help people with movement disorders like Parkinson's disease.
"I am personally excited about how the techniques in this study will help us understand how all the different brain areas work together to produce our responses to music, from dancing to emotions to memories," she said.
Interview with Ludovic Bellier produced by Sarah Jackson