
Riffusion’s AI generates music from text using visual sonograms

posted on December 20, 2022
by l33tdawg

Credit: Ars Technica

On Thursday, a pair of tech hobbyists released Riffusion, an AI model that generates music from text prompts by creating a visual representation of sound and converting it to audio for playback. It uses a fine-tuned version of the Stable Diffusion 1.5 image synthesis model, applying visual latent diffusion to sound processing in a novel way.

Created as a hobby project by Seth Forsgren and Hayk Martiros, Riffusion works by generating sonograms, which store audio in a two-dimensional image. In a sonogram, the X-axis represents time (the order in which the frequencies get played, from left to right), and the Y-axis represents the frequency of the sounds. The color of each pixel represents the amplitude of the sound at that frequency and moment in time.
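
The sketch below illustrates the sonogram-as-image idea in a minimal way: it turns a short audio clip into a grayscale picture where columns are time, rows are frequency bands, and pixel brightness is amplitude. The file name and the STFT/mel parameters are illustrative assumptions, not Riffusion's actual settings.

```python
# Minimal sketch: encode audio as a sonogram image.
# File name and mel/STFT parameters are illustrative assumptions.
import numpy as np
import librosa
from PIL import Image

# Load a mono clip (assumed file name).
y, sr = librosa.load("clip.wav", sr=44100, mono=True)

# Mel power spectrogram: shape (n_mels, n_frames) = (frequency, time).
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=8192, hop_length=512, n_mels=512
)
db = librosa.power_to_db(mel, ref=np.max)  # amplitudes in decibels, peak at 0 dB

# Map roughly -80..0 dB onto 8-bit pixel brightness and flip the rows so that
# low frequencies sit at the bottom of the image.
pixels = np.clip((db + 80.0) / 80.0, 0.0, 1.0) * 255.0
Image.fromarray(pixels[::-1, :].astype(np.uint8), mode="L").save("sonogram.png")
```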

Since a sonogram is a type of picture, Stable Diffusion can process it. Forsgren and Martiros trained a custom Stable Diffusion model on example sonograms paired with descriptions of the sounds or musical genres they represented. With that training, Riffusion can generate new music on the fly from text prompts describing the type of music or sound you want to hear, such as "jazz," "rock," or even the sound of typing on a keyboard.
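
Here is a hedged sketch of that generate-then-play loop: prompt a Stable Diffusion pipeline for a sonogram image, reinterpret its pixels as a mel spectrogram, and invert it to a waveform with Griffin-Lim. The checkpoint name "riffusion/riffusion-model-v1" refers to the publicly released weights, but the dB scaling and STFT parameters below are assumptions, not the exact conversion Riffusion uses for playback.

```python
# Hedged sketch: text prompt -> sonogram image -> audio.
# Scaling and STFT parameters are assumptions, not Riffusion's exact settings.
import numpy as np
import torch
import librosa
import soundfile as sf
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")

# Text prompt describing the desired music or sound.
image = pipe("smooth jazz saxophone", num_inference_steps=50).images[0]

# Rows = frequency, columns = time, brightness = amplitude (assumed -80..0 dB).
gray = np.asarray(image.convert("L"), dtype=np.float32)   # 512 x 512, values 0..255
db = gray[::-1, :] / 255.0 * 80.0 - 80.0                  # flip so low freqs are at row 0
power = librosa.db_to_power(db)

# Reconstruct a waveform from the mel-scaled power spectrogram (Griffin-Lim phase).
audio = librosa.feature.inverse.mel_to_audio(
    power, sr=44100, n_fft=8192, hop_length=512
)
sf.write("riff.wav", audio, 44100)
```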


