Music has always been important in my household. I have fond and vivid memories of mornings starting the day with my dad playing Otis Redding's "(Sittin' On) The Dock of the Bay" on the record player. Music was how we connected. So over the years, he'd send me Spotify recommendations, mostly stuck in the '60s, oftentimes some repetitive blues songs. I'd happily listen, grateful for that small bridge between generations and that connection with my dad.
Then one morning, that bridge cracked.
The fake artist incident
My dad texted me: “Have you heard of Mel McCoy from the Country Bayou Blues Band?” Attached was a Spotify link and an image of a haggard old bluesman in a cowboy hat. While my knowledge of wrinkly blues men ain’t bad, I hadn’t heard of him… because, well, he didn’t exist. The song was AI-generated, a synthetic blues track wrapped in a fake persona.
That small exchange sent a jolt through me. It made me realize how quickly the ground beneath us was shifting: Spotify was now quietly filling with music made by absolutely no one at all.

Spotify’s AI crackdown (the numbers that don’t add up)
Recently, Spotify announced that it had removed more than 75 million “spammy” AI-generated tracks from the platform. That’s hard to believe: the company purports to host just over 100 million tracks in total, which, taken at face value, would mean roughly three-quarters of its entire catalogue was removed. The company framed it as a victory for authenticity, but the scale strains belief, and if it’s true that that much of its music was AI-generated, that is a grave proposition.
If there were truly that many synthetic songs circulating, then AI-generated music isn’t some fringe experiment; it’s a massive part of the musical ecosystem. And if the number is inflated, then Spotify is trying to control the narrative rather than the problem. Either way, the message is clear: AI music is no longer a nightmare from Spotify’s future. It’s the present.
In some ways, it feels like Spotify has been preparing us for this shift for years. Its earliest “AI” features felt benign, verging on kinda fun: personalized recommendations, Discover Weekly, and shared “Blend” playlists that merged two people’s tastes into a musical mutant baby. Then came Daylists, those weirdly specific playlists with titles like “spaced out international wednesday afternoon.” Note by note, Spotify has moved from being a place to explore music to a place where music finds you.
I admit, these features are annoyingly convenient. But they raise a deeper question: what problem is Spotify really trying to solve?
Unfortunately, the answer is obvious: like every platform these days, it’s engagement. Keep you listening for as long as possible. That’s why songs automatically roll into “radio” mode after they end, and why playlists refresh endlessly.
At first, I loved it. It felt like I had a tiny DJ whose only job was to find me music to listen to. But eventually I realized that those recommendations don’t actually mirror how I used to discover music: through friends, conversations, cafés, record stores, or, god forbid, even the radio!
It might sound quaint or cheugy, but discovery used to be an act of participation. Music discovery was something you got to do, not something that was done for you.
Music is a deeply personal experience, which is very different from a deeply personalized one. And because we don’t understand how Spotify’s algorithms steer us, it’s hard to know whose interests they really serve. Maybe they’re agnostic; maybe they just wanna be like an enthusiastic record store employee (I know, it’s an oxymoron). But more likely, their purpose isn’t sharing art they care about. It’s retention!
The quiet rise of AI music
Over the past year, AI-generated music on Spotify has exploded. Despite public statements about regulation and moderation, the company’s behavior suggests a more grey relationship with synthetic sound.
In 2024, Spotify updated its User Agreement to give itself broad, perpetual rights over anything users upload or create. Many artists feared that language could allow the company to train AI systems on their recordings or generate derivative tracks, a concern Spotify dismissed but never fully clarified.
The fear is obvious: Spotify could use its vast dataset (every stream, skip, and playlist) to generate music that sounds “good enough” to replace the real thing. By leveraging artists’ work, it could create AI versions of them, cut out the original creators, and pocket the difference. It sounds dystopian, but economically, it makes sense. Spotify’s business model rewards quantity over quality. AI-generated music offers infinite supply at almost no cost.
You might think, well, I would never listen to, let alone enjoy, AI music. But I know from experience that, more and more, you won’t know. A lot of people, myself included, have become passive participants in this weird musical system. When we pick playlists “for vibes” (Chill Jazz, Study Lo-Fi, Relaxing Guitar), we rarely check who made the tracks. And those playlists are rife with artists that are aliases for production collectives or outright AI projects. The music is serviceable, mood-fitting, and free of friction. But every AI track streamed is a quiet rehearsal for a world where human artistry might become optional.
My mom, for example, listens to AI music happily. She finds it catchy and harmless. She doesn’t go to concerts, so why does it matter who made the song? My dad, meanwhile, has become paranoid. He now messages me links, asking, “Is this one real?”
Between them lies the cultural crossroads we’re all standing at: comfort on one side, paranoia on the other.
Follow the incentives
For Spotify, the math is hard to ignore: AI-generated songs mean no royalties, no scheduling conflicts, no egos. Just endless, on-brand audio that never sleeps. For musicians, that means competing not with each other but with tailor-made algorithms that can generate a million forgettable songs overnight.
We’re already seeing this battle spill into court. Universal Music Group and the RIAA have sued AI startups like Suno and Udio for training on copyrighted works without permission. Those cases will define whether training an AI on music constitutes infringement, and the outcome will determine how far platforms like Spotify can go before crossing a legal line.
If those lawsuits fail, the floodgates open.
In my opinion, Spotify isn’t resisting AI, it’s refining it. The company’s policies and incentives point toward a future where the platform doesn’t just curate music but creates it. That’s probably a bit paranoid, but in this age of AI, where everything seems to quickly get gobbled up and regurgitated by LLMs, it’s not an unreasonable paranoia. I hope for a future where we can simply filter out AI-generated music. Spotify hasn’t shown any interest in offering that so far, and its morally grey track record makes me worry. I hope things don’t go in this direction, because for me, music without a maker isn’t music at all.
