
Brain activity can be recorded using functional magnetic resonance imaging (fMRI), and these recordings can in turn be used to reconstruct music. On this basis, scientists from Google and Osaka University are working on an artificial intelligence that can generate music from what is going on in your head. They call the model Brain2Music.

The researchers describe how the AI works in a paper that is available via arXiv. It states that “reconstructing experiences or events from the brain provides a unique way of understanding how the brain interprets and represents the world.” Music is, of course, only one aspect of current research aimed at reproducing brain activity. The scientists use music retrieval, i.e. a specific way of finding and aggregating matching audio files. Alternatively, the AI model MusicLM is used, conditioned on embeddings derived from the fMRI data; embeddings represent relationships within the data as vectors. fMRI measures the oxygen content of the blood in the brain, which shows which areas are active.
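
What such a decoding step from fMRI data to music embeddings could look like in principle is sketched below – a minimal Python illustration with made-up dimensions and a simple ridge regression, not the authors' actual implementation:

# Illustrative sketch only: predicting music embeddings from fMRI voxel
# responses with a linear (ridge) regression. Sizes and data are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

n_clips, n_voxels, embed_dim = 480, 2000, 128   # assumed dimensions

# X: one fMRI response vector per 15-second music clip
# Y: the corresponding music embedding of that clip
X = np.random.randn(n_clips, n_voxels)          # stand-in for real fMRI data
Y = np.random.randn(n_clips, embed_dim)         # stand-in for real embeddings

decoder = Ridge(alpha=1.0)
decoder.fit(X[:400], Y[:400])                   # fit on training clips

predicted_embeddings = decoder.predict(X[400:]) # decode held-out brain scans
print(predicted_embeddings.shape)               # (80, 128)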

For the experiment, music was repeatedly played to five people – 15-second excerpts from several genres ranging from classical to blues to reggae. A deep neural network was then trained on the recorded brain activity. This made it possible to capture connections between elements of the music and what was happening in the brain. The scientists categorized the moods and described the music.
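
Based on such decoded embeddings, the retrieval variant mentioned above could then, purely schematically, pick the best-matching clip from a library of candidates – again only an illustration with random stand-in data:

# Illustrative sketch of music retrieval: choose the candidate clip whose
# embedding is closest to the embedding decoded from the brain scan.
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

library = np.random.randn(1000, 128)   # embeddings of candidate music clips
decoded = np.random.randn(128)         # embedding predicted from fMRI

scores = np.array([cosine_similarity(decoded, clip) for clip in library])
best_clip = int(np.argmax(scores))     # index of the most similar clip
print(best_clip, scores[best_clip])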

The paper shows what these descriptions look like. An example for blues reads: “It’s lazy blues with a relaxed tempo and atmosphere. The band structure is simple, with the background rhythm broken by bass and guitar cutting. The lead guitar’s haunting phrasing gives the piece a nostalgic feel.”

Brain2Music is the part that was created individually for each test subject and is meant to convert the brain activity into music. Google’s MusicLM is a generative AI model that can generate music from text. The embeddings were passed on to it precisely for this purpose. “The music produced resembles the musical stimuli experienced by human subjects in terms of semantic properties such as genre, instrumentation, and mood.”

(Image: Paper Google)
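
Since MusicLM itself is not publicly available, the following sketch only indicates how a decoded embedding could be handed to a MusicLM-like generator as a conditioning signal; the MusicGenerator class and its generate method are hypothetical placeholders, not a real API:

# Purely hypothetical interface for a MusicLM-like generator.
import numpy as np

class MusicGenerator:
    """Stand-in for a generative music model conditioned on an embedding."""
    def generate(self, conditioning: np.ndarray, seconds: int = 15) -> np.ndarray:
        # A real model would return an audio waveform; here we return silence.
        sample_rate = 24_000
        return np.zeros(seconds * sample_rate, dtype=np.float32)

model = MusicGenerator()
embedding = np.random.randn(128).astype(np.float32)  # decoded from fMRI (see above)
audio = model.generate(embedding, seconds=15)         # 15 s, matching the stimuli
print(audio.shape)                                    # (360000,)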

The researchers note, however, that each brain works so individually that it will not be possible to transfer the model created for one person to someone else. A large amount of fMRI data is also required, which in turn means hours of recordings for each test subject.
