
“If music be the food of love, play on”, or so the famous line goes. But what if the music didn’t come from a human? What if Daft Punk’s Robot Rock came from an actual robot? Would that change music’s unique ability to move us, transcend boundaries and cut through to our corporeal core?
Musicians are concerned about AI, and rightly so. Popular sites such as Suno and Udio give anyone with a laptop the ability to create a song through a simple prompt; AI tools and bots can artificially inflate artists’ streaming numbers; and fake artists can channel listeners away from actual musicians.

Earlier this year, the great and the good of the music industry — from Sir Elton John to Dua Lipa — protested against the Government’s controversial Data (Use and Access) Bill, which critics argue makes it easier for big tech to access artists’ copyrighted material. In the end, the protest was in vain: the bill was passed in June, leaving musicians in a vulnerable position.
When it comes to AI in music, aesthetics and creativity are not the problem, argues Baroness Kidron, an advisor to the Institute for Ethics in AI at the University of Oxford, who led the recent charge in the House of Lords against the Data (Use and Access) Bill.
For Kidron, it’s a question of authorship, training and government regulation. “The tech sector likes to think what comes out of their AI models is the only important factor, but creators are concerned about what goes into the models,” she says. “If I was a carpenter who makes a beautiful table and you steal it in the middle of the night and chop it up to make a window frame, it does not make the table yours.”

In the US, major record labels, including Universal Music Group and Warner Records, filed a string of lawsuits against both Suno and Udio on this very matter in 2024. The labels claimed the companies were training their AI on copyrighted material, a practice known as “data-scraping”; Suno later admitted to doing so.
The who?
Fake artists on Spotify pose a different, but equally threatening, problem for the industry. By diverting streams away from working musicians, they add to the pressures on a profession already living through lean times: around half of musicians in the UK earn under £14,000 per year from music work. More than 500 fake artists have reportedly been identified on Spotify.
The UK Government’s deference towards big tech and its refusal to regulate adequately have left musicians exposed, and Spotify CEO Daniel Ek has done little to reassure them of protection from AI-generated artists. In a 2023 BBC interview, Ek said he would ban the use of artists’ music to train AI models, but refused to bar AI-made music from the platform.

AI-generated bands made headlines this month with the debacle surrounding The Velvet Sundown, an entirely artificial group on Spotify whose streams soared into the millions. Several people claimed the project as their own, including Andrew Frelon, a Canadian who said he was the group’s spokesman. He later told Rolling Stone the claim was an “art hoax”, after the anonymous band denounced him as a pretender.
Though some still maintain this is all a genius PR stunt, Deezer’s AI-detection software rated the music as 100 per cent likely to be AI-generated, and the group has never given an interview or played a live show. You can hear it in the music itself: a derivative and generic blend of 1970s psychedelia and indie rock. It’s not great, but it’s also far from terrible. The vocal production has all the hallmarks of AI, with its strange clipped quality.
But what scared people about the music was how hard it is to tell apart from the human-made kind. You could easily hear The Velvet Sundown playing in a bar and not bat an eyelid. It’s background music: the middle of the middle of the road, lacking an ineffable human soul. But it’s hardly the soundtrack to the music industry’s funeral cortege, as many might have you believe.

The future sound of AI
However, The Velvet Sundown’s soulless mediocrity offers a glimmer of hope for the music industry. AI in the arts could push artists to be more creative and less generic, in order to distinguish themselves from algorithmic slop.
It also gives people another tool to work with creatively. When I played around on Suno, I gave it the challenging prompt of “intelligent dance music, 3/4 industrial breakbeat, 101 beats per minute, layered synths” and produced a mediocre dance track with ease, though it sounded nothing like what I had intended.
If I — with my minimal musical ability and grade 1 piano — could make something in dialogue with the algorithm that was recognisably a song, imagine what an immensely talented producer could do with the same tools.
“Being open to AI as a tool in music production can bring huge creative potential,” says Matt Farrow, aka Kepler, a Leeds-based producer and DJ. “AI can also fast-track learning by breaking down music theory in a practical way. Keeping an open mind and embracing new technology has always been key in music production. I get the hesitation from purists, and I agree that music should always have a soul and human feeling.”
James Ballaró, a coder and guitarist for the London-based group Truthpaste, can also find the positives. “AI in music is commercially exploitable, but I don’t think the musical creative process is ever going to become fully redundant,” he says. “I think it’s a lot more exciting to think of it from a technological standpoint, in the sense that AI can and will unlock a wide range of creative tools, similarly to the impact MIDI had on musicians in the 1980s.”
It’s easy to catastrophise about the future of AI in music. But from the camera to the computer, technology has always influenced art, and there is a kernel of positivity to be found in its mind-bending capabilities for music production.
Whether this will push artists towards new kinds of creativity, bring a wider demographic into making music, or swallow musicians up altogether remains to be seen. For now, the Government seems to have chosen big tech over its creative sector, and ultimately, the artists will suffer.