Press Clipping
07/24/2025
Article

Mixmag.net

“AI doesn’t exploit musicians, people do”: What if artificial intelligence doesn’t have to hurt the music industry?
The many issues and misuses of AI in the music industry are well documented, but there are still artists and industry professionals who believe it can be a positive force if used correctly. Nadia Says reports, speaking to Benn Jordan, Holly Herndon and Mat Dryhurst, Imogen Heap, Portrait XO, Robert Owens, and AI music companies

Nadia Says 24 July 2025
As Western societies have grown more publicly and vehemently polarised since the advent of social media, so has the music industry. Not a day goes by without news of how corporations, big tech, and especially AI affect musical creativity and livelihoods. Often this concerns major labels or artists suing AI companies that train models on copyrighted material to (re)produce music without compensating the original creators, or AI companies funded by venture capital or private equity from possibly murky sources, with those same questionable sources also infiltrating festivals, venues, ticketing, content, music gear, and more. Music is increasingly an industry that is profitable for those at the top while sidelining those who make the scene possible, and many feel we are witnessing the top of the pyramid attempt to replace already squeezed musicians with cheaper, illegally trained bots.

All that being said, many artists and startups still see creative and legitimate income potential for AI in music, while others are finding ways to fight the downsides of lawless AI training; this Mixmag feature speaks to several of these advocates and presents initiatives where AI is part of a functional creative ecosystem.

June saw the debut edition of SXSW London, the festival and conference that has been showcasing up-and-coming talent annually in Austin for many years. SXSW is primarily a cultural festival, but also a place to discuss societal and big tech issues. While talented humans played music on diverse stages, a lot of the conference talks revolved around AI. Perhaps mirroring the tear in the fabric of our scene, SXSW London was torn between championing exciting talent and making too much space for the 1%. There is ample reason to feel uneasy about tech and AI crossing over into the music industry, the latest scandal being Spotify CEO Daniel Ek investing in military AI development, and SXSW went there too, platforming warmongers and tax-exempt royalty and making many wonder whether our industry is still about music or about accommodating misused wealth and art-washing.

Read this next: Multiple artists pull out of SXSW London line-up in protest of Tony Blair and David Cameron involvement

Still, some participating artists presented worthy segments on the intersection of music and AI. Among them were AI music pioneers Holly Herndon and Mat Dryhurst, co-founders of the data governance tool suite Spawning, who gave a “smaller presentation” of a work they first showcased at the Serpentine Gallery last year. “[We] created a new protocol for training AI, which involved writing a songbook and touring it across 15 choirs throughout the UK to create a large dataset co-owned by all the singers,” they explain. “We then trained models to produce and exhibit new songs, played in the space and through a GPU fan organ we built.”

The piece, and the way it was presented to the audience, commented on “collaboration with AI being, in an abstract sense, collaborating with the human archive and yourself,” they explain. “And as our emphasis on choirs shows, AI is no replacement for being in the room with other people. ... Ultimately, contributing to an alien public intelligence is a leap of faith.”

A lot of criticism levelled at AI relates to its environmental impact and emissions. When asked to comment on the ecological aspect of their installation and of AI in music in general, Dryhurst and Herndon respond that “the power we used to train and run the models is less than had we played a video piece for the duration of the installation. Power usage for AI is deeply misunderstood. Time spent prompting models is less energy intensive than that same time spent playing a video game.”

“Data centers use 1% of global energy and AI is a small fraction of that,” they add. While some analysts put the data centre figure higher, it is still lower than that of other relevant industries which, Dryhurst and Herndon argue, face less accountability: “In comparison, the fashion industry accounts for up to 10% of global energy use, and is far more deeply entangled with the culture industry, yet it is rarely scrutinised as much as AI.”

Holly Herndon & Mat Dryhurst: The Call | Credit: Nadia Says
Musician and technologist Imogen Heap also helmed a session at SXSW London on the topic of “Artistry in the Age of AI: Protecting Music's Integrity”, with a demonstration of Auracles.io, her all-in-one digital ID and rights management platform for creators, which can be seen as an evolution of her earlier community-building project Mycelia. Heap’s work often addresses the problematic aspects of copyright administration, especially for self-managed artists, and of attribution and permission in the context of legal AI training and other uses of art.

Read this next: The rise of AI music: A force for good or a new low for artistic creativity?

"Using music and AI for good has always been a no brainer!" says Imogen Heap, an AI native user who make her comments together with ChatGPT about fair use of AI and music. "Auracles is the latest manifestation [of my advocacy] to ensure technology genuinely supports artists — largely in reflection as to how little the current industry supports us. Ideally, companies, not just AI, but ALL companies would embrace a system of transparent, artist-approved lightweight data licensing, facilitated through digital ID verification, which would incentivise ethical retraining and ensure fair compensation for creators, as together we build the landscape in tandem, piece by piece."

She continues: "The essential shift is moving from viewing AI as a threat or a purely commercial tool to seeing it as a collaborative partner. We need leaders and creators to commit to transparency, ethical responsibility, and genuine cooperation, recognising AI’s potential to augment human creativity rather than replace it."

Imogen Heap | Credit: Fiona Garden
Some companies already build their AI businesses around supporting artists. Take Musical AI, described by co-founder and CEO Sean Power as a "rights management platform that ensures fair attribution and compensation for creators by securely tracking rights holders' contributions to AI-generated music".

If we consider that AI does not really create anything, but rather puts micro bits and pieces together according to learned patterns, we understand the importance of such a tool. “Generative AI is trained to learn the probabilities of patterns in existing data with the objective of outputting stylistic imitations,” Power explains. “First it 'atomises' audio into micro-samples and then recombines them, preserving the probabilities in the dataset. [Our] technology is able to break down AI-generated tracks to percentages of influence from original works, enabling transparent reporting and proportional royalty payments to creators.”
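To make the proportional payout logic Power describes concrete, here is a minimal sketch, assuming a hypothetical per-track influence report; the function name, figures, and data shapes are illustrative, not Musical AI's actual system or API.

```python
# Minimal sketch: turn a per-track influence report into royalty splits.
# The report format, names, and numbers are hypothetical illustrations,
# not Musical AI's real data or interface.

def split_royalties(influence: dict[str, float], pool: float) -> dict[str, float]:
    """Divide a royalty pool in proportion to each rights holder's
    measured influence on an AI-generated track."""
    total = sum(influence.values())
    if total <= 0:
        raise ValueError("influence report is empty or zero")
    return {holder: pool * share / total for holder, share in influence.items()}

if __name__ == "__main__":
    report = {"Rights holder A": 0.42, "Rights holder B": 0.35, "Rights holder C": 0.23}
    payouts = split_royalties(report, pool=1000.00)  # e.g. a £1,000 royalty pool
    for holder, amount in payouts.items():
        print(f"{holder}: £{amount:.2f}")
```

The key point of such a scheme is that the split normalises by the total measured influence, so whatever percentages the attribution model reports, the payouts always exhaust the pool.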

Björn Ulvaeus of ABBA, another guest speaker at SXSW London, revealed that “right now [he’s] writing a musical assisted by AI, but it is a misconception that AI can write a whole song, it’s lousy at that, and bad at lyrics as well.” Ulvaeus also added: “AI music generators train on copyrighted material, they train on all the world’s music, and for that we feel that they should be paying something towards the songwriters and artists.” On the same point, Power comments that “generative AI is built on the creative labour of artists, often mined without consent, to produce endless imitations that serve platforms and profits, not people. It’s a form of data laundering, where original music is stripped of its source and repackaged as ‘new’, enabling a massive transfer of wealth and credit from creators to tech companies.”

Power thinks that “AI doesn’t exploit musicians, people do. It’s the AI industry’s responsibility to design ethical systems and refuse to hide bad decisions behind the algorithm.” Similarly to Heap, he states: “What we need is a fundamental shift in power, a move from gatekeeping to stewardship, from appropriation to attribution. A future where creators own their influence, audiences know where the art comes from, and AI is used to enhance human expression, not erase it.”

Wyclef of The Fugees shared a similar view at SXSW London: “I want to tell creators, are we getting lazier because of technology, or are we gonna push the technology forward? The belief has to come from you, there is nobody greater than the creator, which is you.”

Timbaland, on the other hand, is a high-profile producer who believes that AI, by making aspects of production quicker and easier, expands his role as an artist. “What used to take me three months only takes me two days,” he said to Billboard. “I’m not just producing tracks anymore. I’m producing systems, stories, and stars from scratch.” In 2020, Timbaland co-founded his own AI entertainment company Stage Zero, which recently signed its first “artist”, TaTa, a musician he powers via Suno, an AI music company that faces multiple lawsuits from major labels and has admitted to training on copyrighted material.

Voice-Swap is a company that centres human artists in its AI x music business model. Chief Creative Officer Declan McGlynn describes it as “a platform where AI models will sing vocals for tracks you produce”, adding that it “was founded with a vision of showing that another way is possible in this new AI future. It is possible to both safely use this incredible, innovative technology without exploiting artists, and also create a brand new revenue stream: the AI voice.”

“A robust system of transparency, training data audit trails, attribution, and compensation is necessary for a stable and fruitful AI adoption for the creative industries,” he continues. “That means preventative measures to stop unauthorised data scraping, only allowing models to be built with consent, compensating artists for the use of their data, and allowing artists to monetise those models if they choose.

“Without ethical and legal frameworks for AI training and usage, we risk oversaturation and exploitation of artists. The streaming age has already decimated artist incomes; as a new tech era dawns, we mustn’t repeat the mistakes of the early days of streaming and make sure artists are at the table, and not on the menu.”

Read this next: How the #BrokenRecord campaign is fighting to fix the music industry

Not only is legal AI model training fairer and a source of new income streams for musicians, it can also yield better quality: “A lot of unauthorised AI models, especially for voice, are built using data scraped from the web and contain either low-quality MP3 rips or use stem separation technology that leaves behind artefacts. Compounded, these sonic traits significantly affect the quality of the resulting voice model,” notes McGlynn. “At Voice-Swap, we record all training data from scratch, making sure it’s done in the same room, same mic, even the same audio interface, with the highest quality possible. Consistency in the training data is key and will result in a much more professional and usable output from the model.”

McGlynn agrees that AI should support and not replace artists: “We see AI voice as a new form of IP that can be monetised. This doesn’t have to be a replacement technology. We’ve had several examples of producers using Voice-Swap to create demos, and then go on to hire that singer to re-sing their AI selves. AI is ultimately an efficiency tool to get your idea out of your speakers as quickly as possible, and to stay in the creative flow. Like synthesisers, drum machines, auto-tune, samplers, and VSTs, it will absolutely disrupt, but can ultimately become a new sonic palette for artists to tap into.”

One of the artists on the Voice-Swap roster, Chicago house legend Robert Owens, who is no stranger to battling for his copyright dues, tells Mixmag that “working with Voice-Swap was a lovely and highly interesting experience. Sending in all of the different takes from vocal dynamics of tone to expression, and hearing the ending result on the AI’s take of what it felt was a reflection of my character — not yet duplicating a full essence of my character, but still, it was fascinating.” He also says that as a musician, he thinks “AI is a tool for expanding artists’ thoughts or perspective on how they would convey or write lyrical content. I hope AI becomes a useful structure to help up-and-coming artists understand their own creative formula. AI can be an additional format to expand individuals’ views of what they can achieve creatively by, first and foremost, understanding the uniqueness of their self and what their music can give back to humanity.”

Robert Owens | Credit: Arad Boaz
Portrait XO
Speaking as a data activist and musician who has also trained their own model, Portrait XO says they have been “very intentional with every decision I’ve made in the way I incorporate AI into my workflow for composing and producing. Training AI models with my personal writings and recordings is the most exciting way I’ve found of using AI. I don’t want to use approaches like Suno’s because it involves unconsented data. I love the more intimate and unique sounds I get when working with my own material, because it becomes an extension of my personal sound. This approach gives me a better understanding of how the data impacts the AI-generated output without any other pre-trained data.”

And returning to the general use of AI in the music industry, they ask: “Why are we building upon extractive and exploitative ways of working? We can create better infrastructures, like vetted and consented datasets and every company building new businesses to remunerate fairly. At the very least, creative AI companies that use unconsented data in their AI model training should be taxed more heavily to give back to society.”

They continue: “We need accelerated care to match the accelerated burnout that’s currently happening due to the impact of AI. There isn’t enough work about how to integrate AI with care, and we desperately need it if we want to reclaim humanity in the age of AI. We still can, since not many regulations have been made, with much needed nuanced dialogue about how to facilitate transparency, trust, and protection.”

Read this next: 82% of artists are "concerned" about the use of AI in music, study reveals

On the subject of ecology, optimisation, and safeguarding “human labour of craft and theoretical work”, Portrait XO explains that “training large language models has made such a tremendous impact on the environment; and as researchers point out, these models aren’t giving better results with emotional cognition heavy inputs.” Much as taking more and more of a medicine does not improve its benefit but worsens its side effects, the rush to keep training models that are already at capacity seems dangerous and wasteful. “When we don’t know how to use AI in ways that help us, I question how it’ll work against us. If we stopped large language model training now and agreed we’ve reached a point of enough innovation with that approach, leaving a lot still to build upon, I think that would be a healthy decision.”

If you would like to try Portrait XO’s approach to AI in music, they suggest Neutone and DataMind Audio as platforms that have “very low environmental impact, and are great examples of building the ecology of AI in music to empower artists, their craft, and skills.”

Musician, technologist, and researcher Benn Jordan started encoding his music with adversarial noise as a defence against AI misuse. “All the pillaging from the tech industry made me not want to share my music anymore. What's much more important to me is that something like Poisonify prevents myself and others from feeling discouraged and gives some a feeling of control to help inspire them to continue sharing their ideas. The most damaging part of all this is the threat of a lack of ethics creating some type of cultural desert in the coming years. That's what I'm fighting, in my mind at least.”
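Jordan doesn't detail Poisonify's internals here, but the general idea behind adversarial audio perturbation can be sketched: nudge a waveform by a tiny, bounded amount in the direction that most increases a stand-in model's error, so the track sounds unchanged to listeners while becoming less useful as training data for a scraper. The sketch below is illustrative only; the surrogate classifier, epsilon value, and single signed-gradient step are assumptions, not Poisonify's or Harmony Cloak's actual methods.

```python
# Illustrative FGSM-style adversarial perturbation of a waveform.
# The surrogate model, epsilon, and labels are assumptions for this sketch;
# this is NOT Poisonify's or Harmony Cloak's published technique.
import torch
import torch.nn as nn

def adversarial_noise(waveform: torch.Tensor,
                      surrogate: nn.Module,
                      target: torch.Tensor,
                      epsilon: float = 1e-3) -> torch.Tensor:
    """Return a copy of `waveform` with a small perturbation that
    increases the surrogate model's loss while keeping the audio
    perceptually close to the original."""
    x = waveform.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x), target)
    loss.backward()
    # One signed-gradient step, clipped to keep samples in [-1, 1].
    perturbed = x + epsilon * x.grad.sign()
    return perturbed.clamp(-1.0, 1.0).detach()

# Usage sketch: `clf` is any differentiable audio classifier standing in
# for a scraper's model; `audio` is a (batch, samples) float tensor.
# poisoned = adversarial_noise(audio, clf, labels, epsilon=5e-4)
```

As the next paragraphs note, computing perturbations like this at scale is itself GPU-intensive, which is exactly the efficiency trade-off Jordan goes on to discuss.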

Jordan is not the only one fighting. Major labels are challenging AI misuse on several fronts in court, in an act of equal parts advocacy and self-interest. The boutique firm Delgado Entertainment Law is inviting artists to join a class action; the Spawning web extension helps artists opt out of AI training; streaming service Deezer de-promotes AI tracks; and the anti-generative-training tool Harmony Cloak is yet another way to protect your music.

It’s worth noting, though, that “both Poisonify and Harmony Cloak are very resource-intensive,” Jordan says. “At face value, it seems wasteful considering how much power the GPUs use encoding them, but it's still significantly less power than AI training requires. Harmony Cloak's first version is a proof of function, and they're making a newer version that uses far fewer resources. I think they both can become more efficient and accessible soon.”

“So, on one hand, using more power to possibly make AI training less efficient is extremely wasteful and sounds like a net ecological loss. On the other hand, as efficiency drops, at some point it stops making sense for generative AI companies to recklessly scrape and train on any piece of data they can get their hands on.”

Benn Jordan
Jordan does not only think of ways to stop illegal AI training; he also wonders why it happened in the first place. “Once I became more involved with macroeconomics, I began to feel like the core problems plaguing the music industry are caused by intellectual property laws and the philosophy of owning intangible things. Copyright began in the 16th century in England,” he ponders, “fast forward to the 20th century in America, and copyright is treated as a form of entrepreneurship which paved the way for a landscape of exploitative record labels and media conglomerates ... and now AI is coming for whatever is left.”

“The way I see it, the problem is how art is monetised,” he adds. “Artist exploitation has existed as long as modern copyright has. We live in a society where we create artificial scarcity so only the most fortunate can access [culture] in order to keep this industry of intangible ideas afloat.

“The good news is that, in many cases, you can bypass the commodification of information and sign up to an artist's Patreon or website to support their work. That direct-support economy is what enabled me to make Poisonify and other projects I am working on, and that ultimately have the sole purpose of supporting individualism, rights, or education in relation to artists.”

Ultimately, and despite the disproportionate efforts of certain actors in the music industry, some even stealing from the dead, artists and their supporters have the means to make the scene sustainable. It starts with each of us making sure the way we consume music is fair, and with each artist accessing the tools and information needed for their art to reach where they intend it to. Algorithms and learned patterns cannot sell us anything we do not buy; let’s re-centre the financial and creative focus on artists and art lovers.

Nadia Says is the founder of Your Mom's Agency; follow her on Twitter
