Turn any sound into an instrument with Google’s Tone Transfer Magenta project
Google’s latest Magenta project uses machine learning to take any input sound and convincingly re-render it as a chosen instrument.
Google’s Magenta projects use machine learning to create art and music, while providing open-source tools and data for anyone to use and learn from. Tone Transfer is no different, using DDSP (Differentiable Digital Signal Processing) to convert sounds into musical instruments. A DDSP model learns the sonic characteristics of a musical instrument and maps them onto whatever sound you feed it, guided by the pitch and loudness of that sound.
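To give a feel for what that mapping involves, here is a hand-rolled sketch of the core DDSP idea in Python: extract a continuous pitch contour and a loudness envelope from any recording, then use those curves to drive a harmonic synthesizer whose overtone balance stands in for an instrument’s timbre. The file names and the fixed harmonic weights are illustrative only; the real Tone Transfer models are trained neural networks that predict time-varying harmonic and filtered-noise parameters rather than a static recipe.

```python
# A toy, hand-rolled version of the DDSP idea (not the trained Tone Transfer models):
# analyze a recording into pitch and loudness curves, then resynthesize those curves
# through a harmonic "instrument" defined by a fixed set of overtone weights.
import numpy as np
import librosa
import soundfile as sf

SR = 16000
HOP = 512  # analysis hop in samples

# 1. Analyze the input: a continuous f0 contour and a loudness envelope.
audio, _ = librosa.load("input_recording.wav", sr=SR)  # illustrative file name
f0, voiced, _ = librosa.pyin(
    audio,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=SR,
    hop_length=HOP,
)
loudness = librosa.feature.rms(y=audio, hop_length=HOP)[0]

# 2. "Timbre" here is just a fixed overtone recipe; a real DDSP model learns a
#    time-varying harmonic distribution plus a filtered-noise component from
#    recordings of the target instrument.
harmonic_weights = np.array([1.0, 0.5, 0.3, 0.2, 0.1, 0.05])  # made-up values

# 3. Resynthesize: additive sinusoids that follow the extracted pitch and loudness.
f0 = np.nan_to_num(f0, nan=0.0)               # unvoiced frames contribute no pitch
f0_samples = np.repeat(f0, HOP)[: len(audio)]
amp_samples = np.repeat(loudness, HOP)[: len(audio)]
phase = 2 * np.pi * np.cumsum(f0_samples) / SR
out = sum(w * np.sin((k + 1) * phase) for k, w in enumerate(harmonic_weights))
out = out * amp_samples / (np.max(amp_samples) + 1e-8)

sf.write("toy_timbre_transfer.wav", out.astype(np.float32), SR)
```

None of this is exposed on Tone Transfer itself, where you simply record or upload audio and pick an instrument, but pitch and loudness curves like the ones above are essentially the controls the trained models consume.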
Play with the sample sounds, or record your own musical instruments, vocal performances or sound effects to hear how they would sound performed on flute, saxophone, trumpet or violin. For example, turn your a cappella into a saxophone solo or a dog barking into a trumpet performance. You can then blend the original recording with the machine-learned output.
Tone Transfer could be a great way to add new instruments to your mix without sequencing, while still retaining a real “recorded” element. Explore, create, and drop your new musical instrument into your next big project.
Google worked with five artists (Gabriel Garzon-Montano, Andrew Huang, Mija, Adam Taylor and Tems) to turn their instrumental performances into machine learning models.
This development is important because it enables music technologies to become more inclusive. Machine learning models inherit biases from the datasets they are trained on, and music models are no different. Many are trained on the structure of western musical scores, which excludes much of the music from the rest of the world. Rather than following the formal rules of western music, like the 12 notes on a piano, DDSP transforms sound by modeling frequencies in the audio itself. This opens up machine learning technologies to a wider range of musical cultures.
Nida Zada – UX Lead, Google Research
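To make the distinction in the quote concrete, the toy comparison below (the file name is illustrative and the code is not from the DDSP codebase) extracts a continuous pitch contour from a recording and then snaps it to the nearest equal-tempered semitone, the way a piano-roll or score-based representation would. The difference, measured in cents, is the vibrato, slides and microtonal inflection that a frequency-based model like DDSP keeps and a 12-note grid discards.

```python
# Compare DDSP-style conditioning (a continuous frequency curve) with a
# score-style representation (frequencies snapped to the 12-tone piano grid).
import numpy as np
import librosa

audio, sr = librosa.load("vocal_phrase.wav", sr=16000)  # illustrative file name
f0, voiced, _ = librosa.pyin(
    audio,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)
f0 = f0[voiced]  # keep only frames where a pitch was actually detected

# Continuous contour: what a frequency-based model sees. Vibrato, slides and
# microtonal inflections all survive, whatever scale (if any) the music uses.
continuous_hz = f0

# Score-style contour: round every frame to the nearest equal-tempered semitone,
# the way a MIDI transcription or piano roll would.
quantized_hz = librosa.midi_to_hz(np.round(librosa.hz_to_midi(f0)))

cents_lost = 1200 * np.abs(np.log2(continuous_hz / quantized_hz))
print(f"mean detail discarded by 12-note quantization: {cents_lost.mean():.1f} cents")
```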