Music and Artificial Intelligence: new creative frontiers

In recent years, artificial intelligence has emerged from laboratories and academic environments to make a strong entry into territories that seemed exclusively human.
One of these is music. Once the supreme symbol of emotion, intuition, and artistic uniqueness, today music finds itself dealing with – and, increasingly, collaborating with – mathematical models, neural networks, and software capable of learning, creating, and even evoking emotion.
But what does it mean, in concrete terms, to "make music with AI"? And which artists have decided to embrace this new dimension?
Automatic and assisted composition: AI as a co-author
Software such as AIVA and Amper can compose original pieces from just a few inputs: a musical genre, a mood, a set of reference instruments. The result? Convincing, sometimes surprising instrumental music, used for soundtracks, advertisements, and online content.
AI and musical discovery
Indeed, entire musical productions have already been generated by AI, such as the track "Daddy's Car", created by Sony's research lab in the style of the Beatles. Listening to it without knowing anything, you might think of a nostalgic, talented pop group, not an algorithm.
Other projects go further, attempting to imitate the voice and style of real artists. Jukebox, developed by OpenAI, can generate songs with lyrics, music, and vocals that eerily resemble existing singers. It is a technology that fascinates and unsettles in equal measure, because it raises profound questions about authenticity, copyright, and artistic identity.
Artists who experiment with AI
But AI is not just simulation. For some artists, it is a tool for co-creation. Holly Herndon, for example, has developed a “vocal daughter” called Spawn, an artificial intelligence trained to sing and compose together with her.
The result is an album, PROTO, that explores what it means to make music with a machine intelligence. The point is not to delegate creativity but to amplify it, putting it in conversation with another intelligence.
The artist Taryn Southern likewise released an entire album, I AM AI, produced in collaboration with artificial intelligence systems. It was not a provocation but a deliberate experiment to see where the human hand ends and the algorithm begins.
Finally, the band Purple Atlas recently released a track on Spotify, "Writing Love Instead", created with AI, although the lyrics were written by the band members themselves. Perhaps this is where artistic creation takes root: AI is used as a tool for a purpose, musical composition, but always under human control.
AI and music mixing
The transformation, however, is not limited to composition. AI technologies are increasingly present in the production, mixing, and mastering phases.
Platforms like LANDR allow anyone to achieve professional mastering in just a few clicks, while intelligent plugins like those from iZotope analyze a track’s frequencies in real-time to suggest corrections and optimizations. It is a change that democratizes music production, breaking down economic and technical barriers.
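To give a rough sense of what "analyzing a track's frequencies" can mean, here is a minimal sketch in Python with NumPy. The band ranges and the "2x the average" rule are invented for illustration; real tools such as iZotope's rely on far more sophisticated psychoacoustic models.

```python
import numpy as np

def band_energies(samples, sample_rate=44100):
    """Average spectral energy in rough low/mid/high bands via FFT.
    Illustrative only -- band limits are a common textbook split."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    bands = {"low": (20, 250), "mid": (250, 4000), "high": (4000, 20000)}
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

def suggest_eq(energies):
    """Naive suggestion: flag any band much louder than the average."""
    avg = sum(energies.values()) / len(energies)
    return [f"consider cutting the {name} band"
            for name, e in energies.items() if e > 2 * avg]

# Example: one second of a bass-heavy test signal (a 60 Hz sine wave)
t = np.linspace(0, 1, 44100, endpoint=False)
signal = np.sin(2 * np.pi * 60 * t)
print(suggest_eq(band_energies(signal)))  # flags the low band
```

The sketch captures the general idea behind such plugins: measure where the energy sits in the spectrum, compare it against some target balance, and turn the difference into a correction.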
And then there is the voice: synthetic, artificial, often indistinguishable from the real one. Vocal software like Vocaloid or Synthesizer V allow the creation of virtual singers, some of whom – like the famous Hatsune Miku – have fans spread all over the world and perform in sold-out tours… as if they were real.
But the line between experiment and deception becomes thin when “unreleased” songs by Nirvana or 2Pac start circulating on TikTok or YouTube, digitally reconstructed, without consent, without context.
In parallel, streaming platforms are using predictive models to suggest songs to users with an almost uncanny effectiveness.
Algorithms and dehumanization
These algorithms analyze our tastes, our moods, even the time of day, to anticipate what we will want to listen to. And increasingly, artists and producers compose with these criteria in mind: track length, BPM, type of intro. As if, in addition to the audience, it were necessary to convince the algorithm as well.
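The core mechanism behind such predictions can be sketched in a few lines: represent each track and each listener as a vector of features and rank tracks by similarity. The features and numbers below are entirely made up; real streaming models are vastly more complex.

```python
import numpy as np

# Hypothetical feature vectors: [energy, tempo, calmness], each 0..1
tracks = {
    "calm piano":  np.array([0.1, 0.2, 0.9]),
    "club banger": np.array([0.9, 0.95, 0.05]),
    "indie pop":   np.array([0.5, 0.6, 0.4]),
}

def recommend(profile, tracks):
    """Rank tracks by cosine similarity to a listener profile."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(tracks, key=lambda name: cos(profile, tracks[name]),
                  reverse=True)

# A late-night listener profile leaning toward calm music
night_profile = np.array([0.2, 0.3, 0.8])
print(recommend(night_profile, tracks))
# → ['calm piano', 'indie pop', 'club banger']
```

Even this toy version shows why the criteria mentioned above matter: whatever features the model scores on are exactly the features a track can be engineered to match.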
Some see in all this a dehumanizing drift and fear that art will be reduced to a product, creativity compressed into a calculable output. Others look to AI as a new muse: a way to overcome creative block, explore new sounds, and collaborate with the unimaginable.
The music of the future will not only be written for human beings, but perhaps also with machines.
It’s not about choosing between human and artificial, but about understanding how to coexist. As always, what will really matter is the intention: if there is a vision, an emotion, a story to tell, it matters little whether the travel companion is made of flesh or code.