Musical mind reading: Brain scans and EEG data alone are enough to decipher which music a test subject is listening to, as an experiment shows. An appropriately trained AI selected the matching piece of music from recorded non-invasive neural signals with a hit rate of 71.8 percent. The findings could be a first step on the road to reading speech non-invasively from brain waves.
Music is deeply rooted in our nature. When we hear familiar sounds, our brain recognizes them within milliseconds. Brainwave measurements show that music can set off real fireworks of signals, accompanied by strong emotions and chills. Different research teams have already looked at what brain waves can reveal when listening to music – for example about test subjects’ emotions or about the music itself.
A combination of EEG and fMRI
Ian Daly of the School of Computer Science and Electronic Engineering at the University of Essex in Great Britain has now shown that brain waves can reveal what music a person is listening to. While previous studies on reading speech from brain activity often used invasive methods such as electrocorticography (ECoG), in which electrodes are placed inside the skull, Daly used data from non-invasive electroencephalography (EEG) measurements.
To increase the accuracy of predictions, Daly combined the EEG data with functional magnetic resonance imaging (fMRI) measurements, which show blood flow in the brain and thus reveal which brain regions are particularly active when a given person listens to music. The researcher used this information to select for further analysis precisely the EEG data corresponding to these regions.
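The channel-selection idea can be sketched in simplified form: fMRI yields an activation score per brain region, each EEG channel is associated with the region it sits over, and only channels above music-responsive regions are kept for further analysis. The region names, channel labels, scores, and threshold below are made-up illustrations, not values from the study:

```python
# Hypothetical fMRI activation scores (higher = more active while
# the person listens to music) for a handful of brain regions.
fmri_activation = {
    "auditory_cortex": 0.92,
    "motor_cortex": 0.15,
    "visual_cortex": 0.08,
    "prefrontal": 0.41,
}

# Hypothetical assignment of EEG channels to the region beneath them.
channel_region = {
    "T7": "auditory_cortex",
    "T8": "auditory_cortex",
    "Cz": "motor_cortex",
    "Oz": "visual_cortex",
    "Fz": "prefrontal",
}

def select_channels(channel_region, fmri_activation, threshold=0.4):
    """Keep EEG channels whose region's fMRI activation exceeds the threshold."""
    return [ch for ch, region in channel_region.items()
            if fmri_activation[region] > threshold]

print(select_channels(channel_region, fmri_activation))
# → ['T7', 'T8', 'Fz']
```

The actual mapping from scalp electrodes to cortical sources is far less direct than this lookup table suggests; the sketch only conveys the filtering principle.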
Reconstructing music from brainwaves
The data came from a previous study that originally focused on the emotions of music listeners. The 18 subjects included in the analysis listened to 36 short pieces of piano music while their brain activity was recorded by fMRI and EEG. Daly then trained a deep learning model to decode the patterns in the EEG in such a way that it could reconstruct the piece of music the test subject heard during the measurement.
Indeed, the model was able to partially reproduce the tempo, rhythm, and amplitude of the music. The similarity to the original pieces was high enough for the algorithm to predict which of the 36 pieces of music a person had heard with a hit rate of 71.8 percent.
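The identification step can be pictured as a nearest-neighbour search: the audio features reconstructed from the EEG are compared against all 36 candidate pieces, and the best-matching one is chosen. The following is a minimal sketch with synthetic data and a simple correlation score, not the study's deep network or its actual features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 36 candidate pieces, each summarized as an
# amplitude-envelope vector over 200 time steps.
n_pieces, n_steps = 36, 200
candidates = rng.standard_normal((n_pieces, n_steps))

# Pretend the model reconstructed a noisy version of piece 17 from EEG.
true_idx = 17
reconstruction = candidates[true_idx] + 0.5 * rng.standard_normal(n_steps)

def identify(reconstruction, candidates):
    """Return the index of the candidate best correlated with the reconstruction."""
    corrs = [np.corrcoef(reconstruction, c)[0, 1] for c in candidates]
    return int(np.argmax(corrs))

print(identify(reconstruction, candidates))  # → 17
```

With a noisy but faithful reconstruction, the true piece wins almost every comparison; the reported 71.8 percent reflects how often the real reconstructions were good enough for this kind of matching to succeed.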
To validate the results, Daly used an independent sample of 19 other people who had also heard the same pieces of music. Since only EEG data and no fMRI data were available for these individuals, Daly used the information from the first sample to select the relevant EEG data.
“Even in the absence of the subject’s fMRI data, we were able to identify the music they were listening to from the EEG data alone,” says Daly. However, he notes, the localization of music-related brain responses varies from person to person. Accordingly, when the model cannot be fed with a person’s own fMRI data, it is less accurate, achieving a hit rate of only 59.2 percent.
The long-term goal: speech recognition
Daly sees his model as a first step towards larger goals. “This method has many potential applications,” he says. “We have shown that we can decode music, which suggests that we may one day be able to decode speech from the brain.” Experiments have shown that to some extent this is indeed possible. However, this has so far only been successful with invasive technology such as brain electrodes.
For people with locked-in syndrome, who are unable to communicate with others due to paralysis, this could open a gateway to the outside world. “Obviously, there’s still a long way to go, but we hope that one day if we can successfully decode language, we can use that to build communication tools,” says Daly. (Scientific Reports, 2023, doi: 10.1038/s41598-022-27361-x)
Source: University of Essex