Deezer Slashes Payouts for AI Music, Says 85% of Streams Are Fraud


According to TechRepublic, Deezer announced on January 29 that it is demonetizing the roughly 85% of streams of fully AI-generated music it has linked to fraud, after identifying widespread manipulation in 2025. The Paris-based service detected over 13.4 million AI tracks on its platform last year alone, with uploads now averaging more than 60,000 AI songs per day, roughly 39% of all daily music deliveries. Despite this flood, AI music accounts for only 1-3% of total listening, and Deezer's analysis shows the vast majority of that activity is manipulated. By comparison, overall streaming fraud on the platform runs around 8%. In a major strategic shift, Deezer also plans to commercialize its proprietary AI-detection technology, which it already uses internally and has licensed to Billboard, by selling it to the broader music industry.


The sheer scale of the problem

Here’s the thing that really jumps out: 60,000 AI-generated songs are being uploaded to just one platform every single day. That’s an insane volume. It basically means the pipes are getting clogged with content nobody really asked for. But the real kicker is the fraud rate. Finding that 85% of streams for these tracks are fake, compared to an 8% average across all music, tells you everything. This isn’t about artists experimenting with new tools or fans discovering weird AI jams. This is a straight-up, industrial-scale attempt to game the royalty system. It’s spam, but it’s musical spam that’s now good enough to sometimes slip past a casual listener.
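To put those percentages side by side, here is a rough back-of-the-envelope using only the figures reported above. The 2% listening share is an assumed midpoint of the stated 1-3% range, and the exact way Deezer counts "streams" and "fraud" isn't public, so treat this as illustration, not analysis.

```python
# Back-of-the-envelope on the reported figures (illustrative only; the exact
# definitions Deezer uses for "streams" and "fraud" are not public).

ai_listening_share = 0.02      # assumed midpoint of the reported 1-3% of listening
ai_fraud_rate = 0.85           # up to 85% of AI-track streams flagged as fraudulent
platform_fraud_rate = 0.08     # ~8% fraud across all streams on the platform

# Share of ALL streams that are fraudulent AI streams
fraudulent_ai_share = ai_listening_share * ai_fraud_rate
print(f"Fraudulent AI streams as a share of all streams: {fraudulent_ai_share:.1%}")

# Rough share of overall platform fraud that would represent
share_of_all_fraud = fraudulent_ai_share / platform_fraud_rate
print(f"Share of total platform fraud attributable to AI tracks: {share_of_all_fraud:.1%}")
```

Even under these rough assumptions, a category that makes up a sliver of listening would account for a disproportionate chunk of the fraud, which is exactly the point Deezer is making.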

How do you even spot a fake?

Deezer’s whole play here hinges on its detection tech, which it’s now turning into a product. They’ve filed patents on methods to find “unique signatures” that distinguish synthetic content. I think the interesting challenge isn’t just identifying a purely AI-generated track from scratch—that’s probably getting harder, as they admit the music is nearly indistinguishable. The bigger task is linking those tracks to fraudulent streaming farms and behavior. That’s where the real value of their system must be: connecting the content identification to the fraud pattern analysis. And by getting Billboard on board, they’ve already scored a huge credibility win. If the charts can’t be trusted, the whole industry narrative falls apart.
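As a way to picture that two-signal idea, here is a purely hypothetical sketch: combine a content-level "is this synthetic?" score with listening-pattern red flags, and only act on the intersection. Every field name, threshold, and function below is invented for illustration and does not reflect Deezer's patented system.

```python
from dataclasses import dataclass

@dataclass
class TrackStats:
    ai_content_score: float      # 0..1 from a hypothetical audio classifier
    unique_listeners: int
    total_streams: int
    median_listen_seconds: float

def looks_like_stream_farming(t: TrackStats) -> bool:
    # Behavioral red flags: a handful of accounts generating huge stream counts,
    # with listens that barely cross a typical royalty-eligibility length.
    streams_per_listener = t.total_streams / max(t.unique_listeners, 1)
    return streams_per_listener > 50 and t.median_listen_seconds < 35

def should_demonetize(t: TrackStats) -> bool:
    # The interesting cases are the intersection: synthetic content *and*
    # manipulated listening behavior, not either signal alone.
    return t.ai_content_score > 0.9 and looks_like_stream_farming(t)

# Example: a near-certainly synthetic track streamed 12,000 times by 40 accounts
print(should_demonetize(TrackStats(ai_content_score=0.97,
                                   unique_listeners=40,
                                   total_streams=12_000,
                                   median_listen_seconds=32)))  # True
```

The design point is that an AI-generated track with organic listening might just get labeled, while the same track attached to farm-like behavior gets demonetized.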

A new business model from chaos

So Deezer went from having a massive problem to potentially creating a new revenue stream. That’s a pretty clever pivot. They’re essentially saying, “We built this tool to protect our own platform’s economics, and now we’ll sell you the antivirus software for your ecosystem.” Licensing this to rights organizations like Sacem makes perfect sense. Every label and publisher is terrified of their royalty pool being diluted by bot-driven AI streams. But it does raise a question: is there a conflict of interest? If Deezer sells its detection tech to a competitor, how does it ensure its own system remains the best? They’re betting their tech and their patents are strong enough to become an industry standard.

The bigger picture for streaming

This isn’t just a Deezer story. It’s a warning flare for every streaming service. The economic model of streaming is fragile—it’s a giant pool of money split by total streams. Dump millions of fraudulent AI streams into that pool, and the pennies that go to human artists get even smaller. Deezer is drawing a very public line in the sand: fully AI tracks get tagged, demonetized if fraudulent, and booted from playlists. That’s a stance others will have to respond to. The report they cite, warning of €4 billion in creator revenue at risk by 2028, makes this an existential fight. The weirdest outcome? Streaming platforms might end up spending more on fraud detection and content filtering than on, you know, actually promoting music. What a time to be alive.
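To make the dilution concrete, here is a toy pro-rata model, the fixed-pool-split-by-streams approach the paragraph above describes. The pool size and stream counts are invented for illustration; real payout formulas vary by service and by deal.

```python
# Toy pro-rata model of royalty dilution (numbers are invented for illustration).

royalty_pool_eur = 1_000_000          # fixed monthly pool to be split
human_streams = 100_000_000           # legitimate streams by human artists

def per_stream_rate(fraudulent_streams: int) -> float:
    # Pro-rata: the same pool is divided across every counted stream.
    return royalty_pool_eur / (human_streams + fraudulent_streams)

clean = per_stream_rate(0)
diluted = per_stream_rate(10_000_000)  # 10M bot-driven AI streams added to the count

print(f"Per-stream payout without fraud: €{clean:.5f}")
print(f"Per-stream payout with fraud:    €{diluted:.5f}")
print(f"Revenue lost by human artists:   {1 - diluted / clean:.1%}")
```

In this toy case, 10% extra fraudulent streams shaves roughly 9% off every human artist's payout, which is why demonetizing the fraudulent streams, rather than just labeling them, is the part that matters economically.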
