Suno 5.5 adds “Voices” to make AI music creation more personal
Suno has added new personalisation and recommendation features to make AI music sound more like you.
Suno is back with another update, and this time it is leaning even harder into the idea that AI music should feel personal. Version 5.5 of its model introduces a range of new features aimed at making generated music sound more like you, or at least something close to it.
As Digital Music News reports, the company is calling this release “our best and most expressive model yet”. That is a familiar claim in AI circles, but the features themselves do show a bigger step towards less generic output and more identity-driven creation.
The headline addition is ‘Voices’. This allows users to upload or record their own singing or rapping, which can then be used to generate new songs. A built-in verification step requires users to match their voice to a prompted phrase. It is a safeguard, but also a reminder of how seriously voice ownership is now being treated.
Suno is keen to stress control here. “And your Voices on Suno are private, meaning only you can use them to create new songs,” the company says. Voice sharing may come later, but for now, the focus is on keeping things locked down to the creator.
Alongside this is ‘Custom Models’, which lets users train Suno on their own music. In simple terms, the AI learns your style and starts to generate music that mirrors it. Pro and Premier subscribers can currently create up to three of these models. It is a logical next step, although it does raise the quiet question of how much of “your sound” can really be replicated by a system trained on patterns.
There is also a more familiar feature called ‘My Taste’, which, as Digital Music News notes, works in a similar way to recommendation systems on streaming platforms. It learns your preferences across genres, moods, and listening habits, and adjusts outputs accordingly. Unlike the other tools, this one is available to all users.
If all of this sounds like a push towards human-centred AI (if that isn’t too much of an oxymoron), that is very much the point. Suno CEO Mikey Shulman framed the update as part of a bigger shift, saying, “We’re opening a new chapter in music creation, in partnership with artists and the music business.”
He also made it clear that this is just the beginning. “The capabilities we’re putting in place today — voice fidelity, personalized sound, custom models — lay the foundation for the next generation of music models we’re launching with the music industry later this year.”
That mention of “partnership” is doing a lot of work. Suno’s relationship with the music industry has not always been smooth, and while the tone has softened, some major licensing questions are still unresolved.
The updates demonstrate how AI music tools are moving away from one-click generation and towards something that feels more tailored and collaborative. Features like Voices and Custom Models suggest a future where the technology is shaped around the creator, not just the prompt.
Whether that balance holds is another question. For now, the idea of AI reflecting personal style is interesting, even if it still relies heavily on patterns and data under the surface.
Clearly, AI music creation is not going anywhere any time soon. It is becoming more embedded, more personalised, and perhaps (somewhat scarily) a little more convincing.