
Spotify Rolls Out New Rules Against AI Spam, Impersonation, And Fraud

  • Michele
  • 29 September 2025, Monday

Spotify has announced the introduction of a new set of AI protections to safeguard artists, songwriters, and listeners. The changes include stricter rules against impersonation, a powerful music spam filter, and clear disclosures for tracks created with AI tools. These steps aim to protect creativity and royalty distribution, and bring greater transparency to music streaming.

Streaming Platforms and AI in 2025

Music has always evolved alongside technology. While technology has made music production more accessible, the recent, rapid advances in artificial intelligence have felt particularly unsettling for many artists. The rise of AI has forced companies to balance innovation with accountability, and many have been slow to act. Music streaming platforms have found themselves in a delicate position, having to decide whether to welcome the new technology or hold it at arm’s length.

Over the past several months, some platforms have cautiously started allowing artists to upload tracks that incorporate AI. Reactions from both artists and listeners have been mixed. Many remain uneasy about the growing normalization of AI in music, while others welcome it as a chance to experiment and expand creative boundaries. There are also voices arguing that restricting AI too heavily could ultimately do more harm than good, since it is quite clear that the technology is here to stay.

Nevertheless, platforms are under pressure to protect artists from AI, prevent copyright abuse, filter out “AI slop”, and reassure both creators and audiences that, if permitted, AI music gets labeled as such. Deezer is one example: the platform has put detection tools in place to clearly tag content when AI has been involved, excludes those tracks from its recommendation algorithms, and limits their inclusion in editorial playlists.

Spotify’s AI Policy Update: Concrete Measures to Protect Artists

While Deezer reacted quickly, Spotify’s stance on the use of AI in uploaded tracks had been rather vague, at least until now. The platform generally allowed artists to upload AI-generated or AI-assisted music (as long as it didn't violate any of its policies or copyrights), but its rules focused on moderation, preventing royalty manipulation, and spam controls rather than on AI-specific measures.

However, a recent announcement suggests that Spotify has started taking AI more seriously. In a newsroom article, the platform revealed that it will be taking concrete measures to protect artists from the harmful consequences of AI by introducing new rules around impersonation, spam, and deception.

Although Spotify affirms its commitment to giving artists the freedom to use AI, it warns that the technology can be misused by “bad actors and content farms to confuse or deceive listeners, push ‘slop’ into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors.”

But what exact measures does Spotify intend to take? Let’s look at the details.

Protecting Artists Against Impersonation and Deepfakes

One of Spotify’s top concerns is impersonation. With AI-powered voice cloning becoming more advanced, musicians face growing risks of having their voices used in deepfakes or fake songs. Unauthorized vocal cloning exploits an artist’s identity, undermines their artistry, and threatens the integrity of their work.

To address this concern, Spotify has introduced a new impersonation policy outlining how such cases will be handled. Under the updated rules, vocal impersonation is only allowed when the impersonated artist has given explicit permission. At the same time, the company acknowledges that some artists may choose to license their voices to AI projects, emphasizing that this decision should remain in the artist’s control.

Spotify is also stepping up protections against fraudulent uploads that place AI-generated or stolen tracks onto legitimate artists’ profiles. Internally, Spotify is expanding its content mismatch system, reducing review wait times, and giving artists the ability to flag mismatched content even before release.
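
For illustration only, here is a minimal Python sketch of what a pre-release “content mismatch” check could look like: an incoming delivery is compared against a verified artist profile, and anything that does not line up gets flagged for review. The data model and field names are assumptions made for this example, not Spotify’s actual system.

```python
from dataclasses import dataclass

# Illustrative data model (assumed, not Spotify's): a verified artist profile
# and an incoming delivery from a distributor, checked before release.
@dataclass
class ArtistProfile:
    artist_id: str
    name: str
    authorized_distributors: set[str]

@dataclass
class Delivery:
    target_artist_id: str
    claimed_artist_name: str
    distributor: str

def flag_mismatch(delivery: Delivery, profile: ArtistProfile) -> list[str]:
    """Return human-readable reasons why a delivery looks mismatched."""
    reasons = []
    if delivery.target_artist_id != profile.artist_id:
        reasons.append("delivery targets a different artist profile")
    if delivery.claimed_artist_name.casefold() != profile.name.casefold():
        reasons.append("claimed artist name does not match the profile")
    if delivery.distributor not in profile.authorized_distributors:
        reasons.append("distributor is not authorized by this artist")
    return reasons

# Example: a track delivered to Jane Doe's profile by an unknown uploader.
profile = ArtistProfile("ART123", "Jane Doe", {"TrustedDistro"})
delivery = Delivery("ART123", "Jane Doe", "UnknownUploader")
print(flag_mismatch(delivery, profile))
# ['distributor is not authorized by this artist']
```

An artist-facing flag like the one Spotify describes would let an artist or their team request this kind of review before a mismatched track ever goes live.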

Tackling AI-Driven Spam and Royalty Fraud

Spotify is also ramping up its efforts against spam and fraud. This is not new territory for the platform, which has already invested heavily in systems to detect fraudulent activity.

The main issue is that AI makes it much easier to churn out low-quality music at scale. For this reason, Spotify is going a step further and rolling out a music spam filter that identifies and tags suspicious uploaders and tracks. In this way, it aims to stop bad actors from generating royalties that should flow to professional artists and songwriters.
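
To make the idea concrete, here is a toy sketch of how an uploader’s batch of tracks might be scored against a few simple spam signals (upload volume, duplicated titles, very short runtimes). Both the signals and the thresholds are invented for illustration; Spotify has not published how its filter actually works.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    duration_sec: int

def spam_score(tracks: list[Track],
               max_uploads: int = 50,
               min_duration_sec: int = 60) -> float:
    """Toy heuristic: fraction of spam signals present in one uploader's batch.

    The signals are illustrative assumptions, not Spotify's real criteria:
      1. Unusually high upload volume in a single batch.
      2. Many duplicate or near-duplicate titles.
      3. A large share of very short tracks.
    """
    if not tracks:
        return 0.0
    signals = []
    # 1. Volume: far more uploads than a plausible release schedule.
    signals.append(len(tracks) > max_uploads)
    # 2. Duplicate titles: the most common title covers over 30% of the batch.
    top_count = Counter(t.title.casefold() for t in tracks).most_common(1)[0][1]
    signals.append(top_count / len(tracks) > 0.3)
    # 3. Short tracks: more than half fall under the minimum duration.
    short = sum(1 for t in tracks if t.duration_sec < min_duration_sec)
    signals.append(short / len(tracks) > 0.5)
    return sum(signals) / len(signals)

# Example: 200 near-identical 40-second uploads trip every signal.
batch = [Track(f"Lofi Beat {i % 3}", 40) for i in range(200)]
print(spam_score(batch))  # 1.0
```

A production system would presumably rely on richer signals such as audio fingerprints, account metadata, and streaming patterns, but the idea of combining several weak signals into a flag is the same.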

The change is urgent: in the past 12 months alone, amid the generative AI boom, Spotify removed more than 75 million spam tracks. That said, the rollout will be slow and cautious, since Spotify wants to avoid penalizing legitimate artists.

Building Transparency Around AI in Music

Finally, Spotify decided to move toward greater transparency around AI, responding to growing demand from listeners. The main challenge is that AI use often falls on a spectrum, from subtle assistance in production to fully generated songs, making a simple “AI or not AI” label too limited. To address this, Spotify is backing a new disclosure standard developed by DDEX, giving labels, distributors, and partners a way to indicate where and how AI was used in a track.

These disclosures will appear directly in the app, covering everything from AI-generated vocals and instrumentation to post-production work. The aim is to strengthen trust across the platform and to work towards a future where artists remain in control of their creative choices, with safeguards in place to protect against abuse.
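
As a rough illustration of what per-component AI disclosures could look like once surfaced in an app, here is a hypothetical record in Python covering the categories mentioned above (vocals, instrumentation, post-production). The field names and values are assumptions made for this sketch, not the actual DDEX schema.

```python
from dataclasses import dataclass
from enum import Enum

class AIUsage(Enum):
    NONE = "none"            # no AI involvement
    ASSISTED = "assisted"    # AI used as a tool (e.g. mixing suggestions)
    GENERATED = "generated"  # element fully generated by AI

# Hypothetical disclosure record; not the real DDEX message format.
@dataclass
class AIDisclosure:
    vocals: AIUsage = AIUsage.NONE
    instrumentation: AIUsage = AIUsage.NONE
    post_production: AIUsage = AIUsage.NONE

    def summary(self) -> str:
        """Compact label text, listing only the components that involve AI."""
        parts = [f"{name}={usage.value}"
                 for name, usage in vars(self).items()
                 if usage is not AIUsage.NONE]
        return "; ".join(parts) or "no AI involvement disclosed"

# Example: AI-generated vocals, human instrumentation, AI-assisted mastering.
track_disclosure = AIDisclosure(vocals=AIUsage.GENERATED,
                                post_production=AIUsage.ASSISTED)
print(track_disclosure.summary())
# vocals=generated; post_production=assisted
```

Because the disclosure is structured per component rather than as a single yes/no flag, it can express the spectrum the article describes, from lightly AI-assisted production to fully generated tracks.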

Looking Ahead: Spotify’s AI Policy in 2025

Spotify’s new policies mark a turning point in how the music industry addresses the rise of generative AI. With these measures, Spotify is signaling that the future of AI in music will be shaped not just by technology, but by the choices and protections put in place to support the people behind the music. For artists and listeners alike, the path forward will depend on protecting creative integrity while embracing the possibilities AI can offer. Or, as the company puts it, “aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers.”

Ready to get your music out there?

Distribute your music to the widest range of streaming platforms and shops worldwide.
