Music labels have been scrambling to establish a legal path for removing AI-generated songs that imitate popular artists from major streaming services.

These “AI artist clones” are spreading quickly and attracting considerable attention. In mid-April, a track titled “Heart on My Sleeve” surfaced on streaming platforms, claiming to use AI to replicate the vocal style and tone of Drake and The Weeknd, convincingly enough that an uninformed listener might believe it was them. The song was promptly removed. It did not credit either artist, though social media posts alluded to their involvement.

The proposed approach to removing these songs could resemble the Digital Millennium Copyright Act (DMCA) but would focus on violations of rights of publicity rather than copyright infringement. Notably, this arrangement appears to be voluntary, unlike the obligatory nature of the DMCA.

The DMCA, established in 1998, offers online services a “safe harbor” against secondary liability for copyright infringement. This protection applies as long as the services comply with a notice-and-takedown system, allowing copyright holders to request the removal of infringing content. However, most AI-generated soundalike tracks would not fall under the purview of the DMCA since they do not infringe on protected aspects of copyrighted recordings or compositions. Instead, they potentially encroach upon trademarks or rights of publicity, which safeguard celebrities from unauthorized commercial exploitation of their names and likenesses.

Addressing violations of rights of publicity is more intricate than dealing with copyright issues due to variations in state laws in the United States and limited legal precedents. The rights bestowed by these laws vary from one state to another, with even wider disparities regarding protection for deceased artists. Additionally, using soundalike vocals for creative purposes may, in some cases, be defended as an exercise of free speech. It is worth noting that these rights predominantly belong to artists, rather than labels, implying that the labels would require authorization to file notices on their behalf. Presently, utilizing rights of publicity serves as the most apparent legal argument to prevent major streaming platforms from hosting AI-generated soundalike tracks.

During an earnings call in April, Lucian Grainge, the CEO and chairman of Universal Music Group (UMG), hinted at this approach when addressing investors. He acknowledged that the rapid advancements in generative AI technology present challenges with respect to existing copyright laws, as well as laws governing trademark, name and likeness, voice impersonation, and rights of publicity, not only in the United States but also in other countries. Grainge mentioned that UMG’s commercial contracts include provisions to offer additional protection. However, it remains unclear whether takedown requests by the major labels would rely on these provisions, state law, goodwill, or a combination thereof.

Some industry executives have expressed concerns that AI-generated soundalikes emulating popular artists’ voices may confuse consumers.

Sony Music Group chairman Rob Stringer recently spoke on AI, saying:

“We’re in the early stages of AI in terms of how it can be developed for the music business,” Stringer said. “We are particularly interested in the tech that can protect our content so that when musical content goes through the generative AI process we basically know if it’s our content or not. We are particularly interested in how we tag our content” for this reason.

Stringer also said AI will help uncover greater levels of insight and potentially new licensing channels and “avenues for commercial exploitation.”

“There is a lot of opportunity in this area to be excited about throughout our company,” he said. “We are greatly aware of the challenges ahead too. We will protect our creators on every level possible whether it be creative, financial or legal in basis. Infringement and unauthorized usage of their rights should be the basis for a unique new set of artist and songwriter protections industry-wide. Tech does not simply overrule art.”

While the major labels and streaming platforms seek ways to address potential legal and consumer-confusion issues, companies like Audius (a blockchain-based music platform) are embracing the technology and giving artists the option to incorporate AI-generated works into their platforms.

AI-generated music raises complex legal questions. The focus on rights of publicity rather than copyright infringement reflects the unique challenges posed by AI-generated soundalike tracks. Varying state laws and the limited legal precedent surrounding rights of publicity make this a nuanced area to navigate, especially when it comes to deceased artists and artistic expression protected as free speech.

As the technology continues to advance, artists and companies are both exploring and capitalizing on AI-generated music. Some artists see it as an opportunity to experiment with new sounds and creative possibilities, training their own AI voice models and sharing the results with their audience. At the same time, companies specializing in AI voice replication are entering the market, catering to the demand for synthetic AI voices.

The discussions between major label groups and streaming services are ongoing, and it remains to be seen how they will ultimately address the issue of AI-generated soundalike tracks. While concerns about consumer confusion and legal complexities persist, the future of AI-generated music continues to unfold, presenting a landscape where technology and creativity intersect in intriguing and sometimes contentious ways.

We’re eager to see where this lands.