Musicians, Machines, and the AI-Powered Future of Sound

    alf9872000


    Fears that computers could replace composers are real. But some music-makers are finding ways to harness generative AI creatively. 

     

    LAST NOVEMBER, AT the Stockholm University of the Arts, a human and an AI made music together. The performance began with musician David Dolan playing a grand piano into a microphone. As he played, a computer system, designed and overseen by composer and Kingston University researcher Oded Ben-Tal, “listened” to the piece, extracting data on pitch, rhythm, and timbre. Then, it added its own accompaniment, improvising just like a person would. Some sounds were transformations of Dolan’s piano; some were new sounds synthesized on the fly. The performance was icy and ambient, eerie and textural. 
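
    To make the idea concrete: the kind of "listening" described above typically means reducing incoming audio to a handful of measurable features. The sketch below, in Python, is purely illustrative and is not Ben-Tal's system (which ran live and also generated its own accompaniment); it assumes the open-source librosa library and a hypothetical recorded excerpt saved as piano.wav.

    # Illustrative only: extract rough pitch, rhythm, and timbre descriptors
    # from a recorded piano excerpt. Assumes `pip install librosa` and a
    # hypothetical file "piano.wav"; a real-time system would analyze a live
    # audio stream instead of a file.
    import librosa
    import numpy as np

    audio, sr = librosa.load("piano.wav", sr=None)

    # Pitch: track the fundamental frequency over time.
    f0, voiced, _ = librosa.pyin(audio,
                                 fmin=librosa.note_to_hz("A0"),
                                 fmax=librosa.note_to_hz("C8"),
                                 sr=sr)

    # Rhythm: estimate a global tempo and note-onset times.
    tempo, _beats = librosa.beat.beat_track(y=audio, sr=sr)
    onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="time")

    # Timbre: summarize spectral shape with MFCCs.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

    print("median pitch (Hz):", float(np.nanmedian(f0)))
    print("tempo (BPM):", float(np.atleast_1d(tempo)[0]))
    print("first onsets (s):", onsets[:5])
    print("timbre profile (mean MFCCs):", mfcc.mean(axis=1))

    Features along these lines are what an accompaniment system could then map onto its own transformed or synthesized responses.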

     

    This scene, of a machine and human peacefully collaborating, seems irreconcilable with the current artists-versus-machines discourse. You will have heard that AI is replacing journalists, churning out error-riddled SEO copy. Or that AI is stealing from illustrators, who are suing Stability AI, DeviantArt, and Midjourney for copyright infringement. Or that computers are rapping, or at least trying to: the “robot rapper” FN Meka was dropped by Capitol Records following criticism that the character was “an amalgamation of gross stereotypes.” In the most recent intervention, none other than Noam Chomsky claimed that ChatGPT exhibits the “banality of evil.”

     

    These anxieties slot neatly among concerns about automation, that machines will displace people—or, rather, that the people in control of these machines will use them to displace everyone else. Yet some artists, musicians prominent among them, are quietly interested in how these models might supplement human creativity, and not just in a “hey, this AI plays Nirvana” way. They are exploring how AI and humans might collaborate rather than compete. 

     

    “Creativity is not a unified thing,” says Ben-Tal, speaking over Zoom. “It includes a lot of different aspects. It includes inspiration and innovation and craft and technique and graft. And there is no reason why computers cannot be involved in that situation in a way that is helpful.”

     

    SPECULATION THAT COMPUTERS might compose music has been around as long as the computer itself. Mathematician and writer Ada Lovelace once theorized that Charles Babbage's steam-powered Analytical Engine, widely regarded as the first design for a general-purpose computer, could be used for something other than numbers. In her mind, if the “science of harmony and of musical composition” could be adapted for use with Babbage’s machine, “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

     

    The first book on the subject, Experimental Music: Composition with an Electronic Computer, written by American composer and professor Lejaren Hiller Jr. and mathematician Leonard Isaacson, appeared in 1959. In popular music, artists like Ash Koosha, Arca, and, most prominently, Holly Herndon have drawn on AI to enrich their work. When Herndon spoke to WIRED last year about her free-to-use, “AI-powered vocal clone,” Holly+, she explained the tension between tech and music succinctly. “There’s a narrative around a lot of this stuff, that it’s scary dystopian,” she said. “I’m trying to present another side: This is an opportunity.”

     

    Musicians have also reacted to the general unease generated by ChatGPT and Bing’s AI chatbot. Bogdan Raczynski, reading transcripts of the chatbots’ viral discussions with humans, says over email that he detected “fright, confusion, regret, guardedness, backtracking, and so on” in the model’s responses. It isn’t that he thinks the chatbot has feelings, but that “the emotions it evokes in humans are very real,” he says. “And for me those feelings have been concern and sympathy.” In response, he has released a “series of comforting live performances for AI” (emphasis mine).

     

    BEN-TAL SAYS HIS work presents an alternative to “the human-versus-machine narrative.” He admits that generative AI can be unsettling because, on a superficial level at least, it exhibits a kind of creativity normally ascribed to humans, but he adds that it is also just another technology, another instrument, in a lineage that goes back to the bone flute. For him, generative AI isn’t unlike turntables: When artists discovered they could use them to scratch records and sample their sounds, they created whole new genres.

     

    In this vein, copyright may need a substantial rethink: Google has refrained from releasing its MusicLM model, which turns text into music, because of “the risks associated with music generation, in particular, the potential misappropriation of creative content.” In a 2019 paper, Ben-Tal and other researchers asked readers to imagine a musician holodeck, an endpoint for music AI, that has archived all recorded music and can generate or retrieve any possible sound on request. Where do songwriters fit into this future? And before then, can songwriters defend themselves against plagiarism? Should audiences be told, as WIRED does in its articles, when AI is used?

     

    Yet these models still present attractive creative capabilities. In the short term, Ben-Tal says, musicians can use an AI, as he did, to improvise with a pianist outside of their skill set. Or they can draw inspiration from an AI’s compositions, perhaps in a genre they are not familiar with, like Irish folk music.

     

    And in the longer term, AI might fulfill a wilder (albeit controversial) fantasy: It could effortlessly realize an artist’s vision. “Composers, you know, we come up with ideas of what music we would like to create, but then translating these into sounds or scores, realizing those ideas, is quite a laborious task,” he says. “If there was a wire that we could plug in and get this out, that could be very fantastic and wonderful.” 

     

    More urgently, mundane and pervasive algorithms are already mangling the industry. Author Cory Doctorow has written about Spotify’s chokehold on music—how playlists, for instance, encourage artists to abandon albums for music that fits into “chill vibes” categories, and train audiences to let Spotify tell them what to listen to. Introduced into this situation, AI will be the enemy of musicians. What happens when Spotify unleashes its own AI artists and promotes those? 

     

    Raczynski hopes he will catch the wave rather than be consumed by it. “Perhaps in a roundabout way, like it or not, I am acknowledging that short of going off the grid, I have no choice but to develop a relationship with AI,” he says. “My hope is to build a reciprocal relationship over a self-centered one.”

     

    Source

