Published by Hearing Things
Words by Ryan Dombal
Illustration by Poan Pan

Lots of people will tell you that using generative AI in the creative industries is just straight up bad. But in this story for the worker-owned music and culture platform Hearing Things, editor and co-founder Ryan Dombal feeds himself into the machine to see what he can learn. Scroll down to hear his own teenage garage rock, and the AI version of it, and his feelings on the experiment as a whole.

Keeping up with new artists, new sounds, and new musical innovations is my lifeblood. As a music journalist, it’s often a requirement in order to remain relevant and earn whatever little money is left in this beleaguered profession. But beyond the practicalities, I’ve always been drawn to the thrill of the new. Even at 43, an age when some among my tribe are leaning on nostalgia or diving into longform historical projects, I’m still finding the most pleasure in surfing the now – striving to stay atop the wave, juiced off the sound of what’s next.

This tendency goes back to my formative years as a listener, around the turn of the century. My teenage brain took in the horny Björk humanoids in the 1999 music video for “All Is Full of Love” and imagined a not-so-distant future teeming with mechanized beauty. After waiting in line to buy a copy of Radiohead’s Kid A on release day a few weeks after I started college in 2000, I went back to my dorm, stuck my headphones on, and slipped into a realm where the electronic and the organic wrapped around each other like vines, heralding infinite possibilities.

Writing about music professionally, I embraced my generation’s techno-optimism. I was obsessed with how T-Pain and Kanye could twist and warp their voices with Auto-Tune, bringing out bloody emotion; I scoffed when a 40-year-old Jay-Z prematurely pronounced the ubiquitous technology dead in 2009: classic old-man behavior. To this day, I pride myself on avoiding knee-jerk rejections of musicians whose ways of working are different from the norm. But generative AI is testing my openness like never before. I can’t help but wonder: Is this just another tool, or is it the end of music as we know it? Or both?

Like many, I’m tempted to just flip my middle finger in the face of all AI-generated music. It’s taking work and money away from real musicians. It’s soulless. It’s cheating. A lawless regurgitation machine. Though it comes with a futuristic sheen, it can also be seen as purely regressive in the way it relies on models based on previously recorded music (aka the past). All last year, the ominous drumbeat of stories about AI music pushed me, almost subconsciously, toward music that wears its humanness like a badge of honor.

But no matter how hard conscientious music fans plug their ears, AI music isn’t just going to shrink away. In fact, it’s proliferating to a disconcerting degree. Last January, the music streamer Deezer estimated that around 10,000 fully AI-generated songs were being uploaded to its service each day. By December, that number had grown to 50,000, which works out to 34 percent of all new music added to the platform in a given day. (So far, Deezer is the only streamer to not only flag the songs it detects as AI-generated but also ban them from its recommendation algorithms and editorial playlists. Shout out to Deezer.) Fifty thousand songs is a lot of songs. But it’s only a small percentage of the 7 million AI-generated tracks created daily by users of the app Suno, which has quickly become the biggest player in AI music, with more than 100 million people using it to make music over the last couple of years.

Now some of those totally AI songs are becoming hits. When you hear something like “Walk My Walk,” by a stubbled entity called Breaking Rust, it’s understandable to dismiss the technology – and maybe even wish computers were never invented. It’s a formulaic stomp-and-snooze track about supposed rebellion, country pride, and standing your ground, with lyrics like, “You can kick rocks if you don’t like how I talk.” Call it automated grievance music, what your generic belligerent uncle would flip on in his truck. It’s the manifestation of everything you would assume about AI music: crass, dopey, redundant.

In November, “Walk My Walk” topped Billboard’s Country Digital Song Sales chart, a largely meaningless tally that can easily be gamed since nobody really buys digital downloads anymore. (Reports suggest a track would only need to collect a few thousand downloads to scale that particular chart.) But news of it being a No. 1 song of a sort made its own headlines, leading to more curiosity and interest. It has now racked up upwards of 15 million combined plays on Spotify and YouTube. “Walk My Walk” is one of a half-dozen AI-generated tracks to debut on Billboard’s various charts this year, though the magazine admits “that figure could be higher, as it’s become increasingly difficult to tell who or what is powered by AI – and to what extent.”

A more intriguing example of a recent AI hit is Xania Monet’s “How Was I Supposed to Know?,” which has appeared on eight Billboard charts, including ostensibly less-gameable tallies like Adult R&B Airplay. The artist and track were created by a small-town Mississippi woman named Telisha “Nikki” Jones, who has explained that she used Suno to generate the music and vocals but wrote the lyrics herself. This basically checks out: While the song is about as by-the-numbers as you’d expect from a program that’s built for normalization, the lyrics at least have some narrative momentum and exhibit actual angst. “How Was I Supposed to Know?” is essentially about generational trauma, and how Jones/Monet never learned how to love because she never had a father. “So I took every ‘I love you’ too deep/Even when they only meant it for a week,” goes one couplet. To be clear, it’s an extremely corny and manipulative tearjerker, but so is a lot of music that ends up on the radio nowadays. In September, Jones/Monet signed a multimillion-dollar record deal with Hallwood Media. The entertainment company is staking its claim as a destination for AI acts, and even contributed to Suno’s $250 million round of funding late last year.

In terms of optics, it’s a canny signing: Jones is a Black woman who created another Black woman to be her musical avatar and is reaping the benefits of her ingenuity. This makes the whole thing look less exploitative and devious than, say, the saga of digital rapper FN Meka, who was briefly signed to Capitol Records in 2022 before the entire thing blew up amid very valid accusations of insensitivity and appropriation. When Jones recently appeared on the daytime talk show Tamron Hall, she was met with amazed applause from the studio audience. During that interview, Hall asked Jones if she was concerned about the possibility of a “white guy behind a computer” making a Monet-like creation. Jones said she was not, which is fair enough, but also feels woefully shortsighted. Considering Suno’s user base is largely made up of men between the ages of 25 and 34, and, you know, the music industry’s decades of racially exploitative practices, the AI-appropriation issue feels like a ticking time bomb.

Plenty of people are rejecting Monet’s legitimacy, too. Real-life R&B stars SZA and Kehlani spoke out against the project, with the latter saying, “Nothing and no one on Earth will ever be able to justify AI to me.” (Worth mentioning: Monet’s debut album is called Unfolded and came out over the summer – just as Kehlani’s song “Folded” was scaling the pop and R&B charts.) In a weird twist that feels especially dystopian, Jones/Monet released a new song called “Say My Name With Respect” last month that takes aim at her various haters. “You keep saying I’m not a real artist, right/But somehow my songs still change somebody’s night,” it goes. “People say my lyrics saved them, that’s real art.”

Reading about all of this AI music made me want to give Suno a spin myself, if only for a laugh. I set up an account and did what any cat owner would do: With a few clicks and brain-fart phrases, I prompted it to make a free jazz song about how my 15-pound tabby loves his food. It obliged, though the song sounded more like samba than Ornette Coleman. It was awful, not even dumb-funny, and I felt hollow listening to it. (From a technical standpoint, it’s undeniable that AI music has progressed exponentially over the last couple of years, though I couldn’t help but notice that even its breakthrough hits sound extremely flat and tinny, as if they’re made to be heard coming out of a phone’s speaker.)

I was about to drop this half-assed experiment when I remembered that you can also upload music of your own to Suno and get it to add lyrics, flesh it out musically, or remix it into a different style. I broke out one of my old external hard drives and found a humble garage-rock demo my friend Grant and I made in my mom’s basement more than 20 years ago. I was playing guitar, Grant was on drums. It was our instrumental rip-off of the White Stripes, who were the coolest band on the planet (at least to me) at the time. Rather than turn this scuzzy riff into an EDM drop or whatever, I wanted to hear what it would sound like with vocals and a fuller arrangement. I uploaded the dusty MP3, asked the program to write lyrics based on a title I just thought up, “Blue Black Down,” and for the music prompt I wrote “garage rock inspired by the white stripes, punk, anarchy, disgusting.” I originally also added “sex pistols” as a reference, but Suno balked at that, citing copyright concerns; in 2024, the big three major labels sued the tech company for $500 million over flagrant copyright violations (a very valid complaint), and in November Warner Music Group reached a deal with Suno that will force it to switch to “more advanced and licensed models” this year.

After taking a few seconds to process my demo, what Suno spit out genuinely surprised me. On one hand, the AI version of “Blue Black Down” sounded more like Green Day than the White Stripes – any sense of grit was polished to a blinding gleam, fast-forwarding to the most baldly commercial permutation. On the other hand, it felt weirdly exhilarating to hear this homemade burst of creativity from a past life transform into something that wouldn’t sound out of place on some corporate modern-rock radio station. I tried to put myself in the headspace of a teenager with a cheap guitar, a second-hand drum machine, and a laptop, who now had the capability to expand on any small music idea they might have. It could build confidence and push them to keep going.

This is where the nuance comes in. There is an entire universe between someone creating a whole song, lyrics and all, using a few prompts on Suno, and someone using it as a virtual whiteboard that they can edit, add to, and generally fuck with to further the sounds in their head. Based on my limited testing, Suno generally leaned toward the most basic version of whatever weird music I plugged into it, which is disappointing. But why can’t there be a Suno for freaks that does the opposite, taking regular music ideas and flipping them into their oddest possible shapes? (Maybe a more advanced version of Suno is capable of doing this, but I refuse to give them money to find out.) According to Billboard, Suno is currently being used by many professional songwriters to help them crack this chorus or that lyric. It’s everywhere, whether you know it or not.

On the whole, will this technology make music – especially major-label music – sound more homogeneous? Probably. Will it encourage streamers like Spotify to make their own braindead AI crud to put on playlists so that they can pay real artists even less? I bet it will. But is there also a chance that some genius high school kid is using Suno as a way to make a song that’s unlike anything anyone’s ever heard before? Maybe! Is that worth all of the destruction AI can and will wreak upon working musicians’ livelihoods (never mind the environment)? Probably not. And what happens when the AI bubble bursts, like everyone is predicting? Are questions like these going to be circling my brain for the foreseeable future? Most definitely yes.

When I thought to use a still from Björk’s “All Is Full of Love” video for this essay, I wondered whether she had weighed in on AI music yet. Of course she has. In 2024, she worked on a sound installation called Nature Manifesto for Paris’ Centre Pompidou that had her combining her vocals with AI-generated sounds of extinct animals. (Though this almost sounds like a Björk project dreamed up by AI, it is real.) In an interview about the installation with Dazed, she was asked about the project’s use of AI. The incessant innovator, who’s never shied away from smashing together the ancient and the bleeding edge, gave an answer laden with experience and optimism. An answer that I can believe in – that I must believe in.

“I’ve had this discussion every time I put a drum machine on my record and people were like, ‘Oh, there’s no soul in this album,’” she said. “The computer is not supposed to put soul into music, it’s all humans. I’ve heard a lot of soulless guitar music. We have to bring soul to things made by AI. And like all the monumental things mankind has done, we can do it.”

Hearing Things is a worker-owned music and culture platform run by writers and editors with many decades of collective experience covering music and culture at Pitchfork, The Fader, Vibe, Spin, Gawker, Jezebel, and elsewhere.

Ryan Dombal is a writer, editor, and co-founder of the worker-owned music journalism publication Hearing Things. He was previously on staff at Pitchfork for 15 years.

Poan Pan is a Taiwan-based freelance illustrator known for whimsical illustrations with a soft palette and coloured pencils. His work celebrates the movement and warmth of humanity, capturing the essence of awkwardness and quirkiness in everyday life. You can see more of his work on Instagram.

Did you enjoy this story? Would you like to help us keep on searching out great storytelling from independent publishers? If you can afford it, please consider paying £5 per month to support The Mortar, so we can pay all our writers and illustrators a fair rate for their work.
