Vocal AI deepfakes of major artists are cropping up everywhere – should artists be worried?

Drake, Grimes, The Weeknd and Holly Herndon are all part of the AI vocalist boom – and they all seem to have different perspectives.


Drake at Lollapalooza Chile. Image by Marcelo Hernandez / Getty

Ever wished you could hear Freddie Mercury sing over a Daft Punk track? How about a collaboration between Public Enemy and DJ Shadow? Maybe not, but the past few weeks have turned these once far-fetched fantasies into an audible, eerie reality.


Fan-made tracks using AI-trained – or ‘deepfaked’ – vocals from major recording artists have been steadily cropping up around the internet. Most recently, UK band Breezers created an entire fake Oasis album, while Heart on My Sleeve, a track featuring vocal clones of Drake and The Weeknd, drew some 15 million views on TikTok and hundreds of thousands of plays across streaming platforms in less than 48 hours.

These aren’t the first songs to emulate an artist’s voice using AI. But the increasing quality of the audio, and the speed with which these tracks are going viral, has left many wondering if the world’s most popular recording artists will soon be drowning in a monsoon of mimicry.

“This is the final straw, AI,” Drake wrote in a post addressing the recent surge in music deepfakes. Fellow artists have joined the fray, with producer Young Guru stating: “People should not be able to take your name, image and likeness without permission.” On this point, major labels and streaming platforms seem to be in agreement.

Universal Music Group, which represents both Drake and The Weeknd, promptly sent an email blast to the relevant parties stating that the company “will not hesitate to take steps to protect our rights and those of our artists.” Overnight, the song vanished from platforms such as Spotify and YouTube.

Of course, the appeal of these deepfakes isn’t hard to comprehend: fan-made musical supergroups might be ethically questionable, and legally perilous, but they probably won’t be boring. Given time, the AI tech might make them sound amazing. And, if the audience response to Heart on My Sleeve is anything to go by, fans may well want to hear them.

It’s not impossible to imagine new, thriving music communities that view AI imitation as a form of creative flattery. Since 2021, Holly Herndon has been encouraging her fans to make new content using an approved model of her own voice and to ethically share any royalties generated by their creations. In the wake of the Heart on My Sleeve controversy, electronic music auteur Grimes was quick to take a stance against copyright and open-source her own vocal tone. Shortly after encouraging fans to make use of her voice “without penalty”, the Canadian artist released Elf.Tech, an AI-powered vocal clone that allows fans to mimic and monetise Grimes’ voice.

The only caveat was the artist’s polite request for users to “please be tasteful.” If fans honour such wishes, Grimes and Herndon’s approaches could demonstrate that fan imitations can flourish without harming the original creator. Twilight fanfiction certainly hasn’t lowered people’s interest in Stephenie Meyer’s original books. If anything, the genre has become part of a community-powered ecosystem that keeps readers deeply engaged in that literary world.

Grimes. Image: Theo Wargo / Getty

However, what’s currently happening with music deepfakes feels different. We’re dealing with real people, not fictional characters, and the goal of these deepfakes is not to create a new song ‘in the style of’, but to achieve a one-to-one imitation of an artist’s voice. Musical style can at times be a nebulous concept, but your vocal tone is something you inherit; it’s an intimate part of your physiology, and to have it mapped and replicated in code, without your consent, raises alarm bells. Nor will possible misuse of this technology be limited to fans – the prospect of predatory companies pushing exploitative contracts that literally strip artists of their voices is a very real concern. Herndon recently took to Twitter to caution artists against signing any contracts related to the usage of their voice in AI contexts – at least until the legal dust has settled.

In its worst application, this technology appropriates a singer’s voice and turns it into a toy. This toy can be picked up and played with by people the artist doesn’t know and may never meet. It can sing words the artist never wrote and express emotions the artist never felt; it can be mixed and matched with other stolen voices in perpetuity. Something uniquely personal becomes as special as a build-your-own-pizza franchise.

An alternative perspective is to view all this as the next phase of audio sampling. Unauthorised samples have been used to create myriad beloved and influential songs. It’s easy to forget that, even as it ignited an explosion of creativity and birthed whole new musical genres, sampling in its early days drew legal and artistic criticism in equal measure.

Eventually, a series of landmark copyright infringement cases provided a legal basis for musical sampling, but in the case of music deepfakes, labels and streaming platforms seem eager to avoid messy legal fights if they can. The speed with which Heart on My Sleeve was taken down by Spotify and Apple Music – even though no settled law forced them to do so – is a clue to how these companies see the legal landscape shaping up. Most recently, it’s been reported that the industry’s largest players are trying to establish a voluntary system for taking down AI-generated voice imitations.

If the issue does reach a courtroom, then the main barrier to establishing a new legal framework that covers music deepfakes likely won’t come from fans demanding the right to generate musical mashups but from AI software developers refusing to disclose how the sausage gets made. The world of generative AI is worryingly opaque, and we currently have little or no insight into the training data used by the biggest companies to create these awesomely powerful musical tools.

It’s widely assumed that many of these models are only able to function as they do by sucking up vast quantities of copyrighted material. UMG certainly seems to think so, having recently stated that a number of AI services have been trained on copyrighted music “without obtaining the required consents.”

Ultimately, that’s what this all comes down to: consent. Some artists won’t be bothered by fan-made vocal imitations – Liam Gallagher simply responded “I sound mega” upon hearing the recently posted Oasis deepfake – while others will want to engage with the technology on their own terms. Still others will want no part in any of this, and that is a decision we should respect.

All signs indicate that AI will affect the music industry at practically every level, and it’s imperative that this change is guided by those most impacted. The recently launched Human Artistry Campaign has offered a broad set of principles for ethical AI in music, and while it’s unlikely to please the most ardent AI enthusiasts, it is a step towards finding a middle ground.

Artists aren’t running from AI; they just want to steer the growth of this technology in ways that feel mutually beneficial and empowering, rather than dystopian and exploitative. Put simply, imitation is the sincerest form of flattery; identity theft is just plain creepy.
