Google introduces an AI model to translate brain activity into music
Google describes how it used AI to decode brain activity into melodic songs, in a research study that has drawn interest from both the scientific and music communities.

Highlights
- Researchers developed ‘Brain2Music,’ which turns thoughts into music
- AI analyses brain activity to recreate music that individuals were listening to
- The study spans diverse genres, including blues, classical, country, disco, hip-hop, jazz, and pop
In a groundbreaking leap in the world of artificial intelligence, Google has unveiled its latest research endeavour, named ‘Brain2Music.’ Scientists have created an AI-driven system that uses brain-imaging data to generate music mirroring brief segments of songs individuals were hearing while their brains were scanned. The researchers described the method in a paper posted to the preprint server arXiv on 20 July 2023; the study has not yet been peer-reviewed.
Decoding brain activity into songs
The researchers utilised brain scans previously gathered using a method known as functional magnetic resonance imaging (fMRI). This technique monitors the movement of oxygen-rich blood within the brain, revealing its most active areas.
The brain scans were taken from five volunteers while they listened to 15-second music clips spanning genres such as blues, classical, country, disco, hip-hop, jazz, and pop.
From brain patterns to musical notes
At the heart of this groundbreaking innovation is the training of a deep neural network. Google's researchers trained the network to uncover the subtle relationships between patterns of brain activity and the many components that make up music, such as rhythm and emotion.
Using MusicLM, Google's technology for generating music from text, the system then produced music that was semantically similar to the original pieces, as sketched below.
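The core idea of decoding can be illustrated with a small, hedged sketch: a regularised linear model maps fMRI responses to music-embedding vectors, which a generative model such as MusicLM could then use as conditioning. The array shapes, variable names, and use of ridge regression below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of decoding brain activity into music embeddings.
# All data here is synthetic; real inputs would be preprocessed fMRI
# responses and embeddings of the heard clips from a pretrained audio model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_clips = 480        # number of 15-second music clips (assumed count)
n_voxels = 20000     # fMRI voxels per scan (assumed count)
embed_dim = 128      # dimensionality of the music embedding (assumed)

fmri_responses = rng.standard_normal((n_clips, n_voxels))
music_embeddings = rng.standard_normal((n_clips, embed_dim))

# Fit a regularised linear map from brain activity to the embedding space.
decoder = Ridge(alpha=1000.0)
decoder.fit(fmri_responses[:400], music_embeddings[:400])

# Predict embeddings for held-out scans; a generative model such as
# MusicLM would then synthesise audio conditioned on these vectors.
predicted = decoder.predict(fmri_responses[400:])
print("Predicted embedding shape:", predicted.shape)  # (80, 128)
```

In this framing, the quality of the reconstruction depends on how well the predicted embeddings capture genre, mood, and instrumentation rather than on reproducing the waveform itself.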
Remarkable results
The project's results impressed researchers and enthusiasts alike. Drawing on MusicLM to reconstruct the original musical stimuli, the AI model was able to synthesise music from the fMRI data. It is worth noting that the focus was on musical qualities rather than lyrics, even though the AI-generated music resembled the original pieces in characteristics such as genre, mood, and instrumentation.
The generated audio, although not a perfect copy of the originals, bore striking similarities, demonstrating AI's capacity to interpret and reproduce subtle features of human musical perception.
Opening new vistas in music & neuroscience
The Brain2Music project by Google represents a significant breakthrough at the intersection of AI, music, and neuroscience. The accomplishment not only demonstrates the creative potential of AI but also holds promise for deepening our understanding of how the brain perceives and responds to music.