    Why an Octopus-like Creature Has Come to Symbolize the State of A.I.


    The Shoggoth, a character from a science fiction story, captures the essential weirdness of the A.I. moment.

     

    A few months ago, while meeting with an A.I. executive in San Francisco, I spotted a strange sticker on his laptop. The sticker depicted a cartoon of a menacing, octopus-like creature with many eyes and a yellow smiley-face attached to one of its tentacles. I asked what it was.

     

    “Oh, that’s the Shoggoth,” he explained. “It’s the most important meme in A.I.”

     

    And with that, our agenda was officially derailed. Forget about chatbots and compute clusters — I needed to know everything about the Shoggoth, what it meant and why people in the A.I. world were talking about it.

     

    The executive explained that the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.

     

    But it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.

     

    Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.

     

        RLHF shoggoth @scale_AI tote bags incoming

        courtesy @TimTheSloth pic.twitter.com/RIK4bjHByu


        — Alexandr Wang (@alexandr_wang) March 27, 2023

     

    Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.

     

    Shoggoths landed in the A.I. world in December, a month after ChatGPT’s release, when Twitter user @TetraspaceWest replied to a tweet about GPT-3 (an OpenAI language model that was ChatGPT’s predecessor) with an image of two hand-drawn Shoggoths — the first labeled “GPT-3” and the second labeled “GPT-3 + RLHF.” The second Shoggoth had, perched on one of its tentacles, a smiley-face mask.

     

         pic.twitter.com/wRVhjjSt0Y


        — tetraspace 💎 (@TetraspaceWest) December 30, 2022

     

    In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
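    To make that loop concrete, here is a minimal toy sketch in Python. It is not anyone's actual training code; the candidate pool, the scoring rule and the update step are hypothetical stand-ins meant only to show the shape of the process the paragraph above describes: a model proposes responses, humans score them, and the scores are fed back so the model favors the responses people prefer.

```python
import math
import random

# Toy "model": a fixed pool of candidate responses, each with a learned score.
candidate_responses = [
    "Sure, here is a polite and harmless answer.",
    "A rude, unhelpful answer.",
    "Something strange and off-topic.",
]
learned_scores = {response: 0.0 for response in candidate_responses}


def generate_response() -> str:
    """Sample a response, favoring candidates with higher learned scores."""
    weights = [math.exp(learned_scores[r]) for r in candidate_responses]
    return random.choices(candidate_responses, weights=weights, k=1)[0]


def human_score(response: str) -> float:
    """Stand-in for a human rater: reward the polite answer, penalize the rest."""
    return 1.0 if "polite" in response else -1.0


# The feedback loop: generate a response, collect a score, feed it back in.
for _ in range(200):
    response = generate_response()
    learned_scores[response] += 0.1 * human_score(response)

# After enough feedback, the polite answer dominates the sampling weights.
print(max(learned_scores, key=learned_scores.get))
```

    In a real R.L.H.F. pipeline, the scores come from a reward model trained on human preference comparisons, and the update step is a full reinforcement learning algorithm rather than a running tally, but the loop has the same basic shape.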

     

    Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.

     

    @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”

     

    Comparing an A.I. language model to a Shoggoth, @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.

     

    “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”

     

    The Shoggoth image caught on, as A.I. chatbots grew popular and users began to notice that some of them seemed to be doing strange, inexplicable things their creators hadn’t intended. In February, when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.

     

    Got a pack of LLM shoggoth stickers!! I ordered extra, so if anybody wants one( and lives on the east coast) DM me. First come first serve pic.twitter.com/UhmhCDyacR


    — MoreWrong (@wrong_more) March 17, 2023

     

    Eventually, A.I. enthusiasts extended the metaphor. In February, Twitter user @anthrupad created a version of a Shoggoth that had, in addition to a smiley-face labeled “R.L.H.F.,” a more humanlike face labeled “supervised fine-tuning.” (You practically need a computer science degree to get the joke, but it’s a riff on the difference between general A.I. language models and more specialized applications like chatbots.)

     

    Today, if you hear mentions of the Shoggoth in the A.I. community, it may be a wink at the strangeness of these systems — the black-box nature of their processes, the way they seem to defy human logic. Or maybe it’s an in-joke, visual shorthand for powerful A.I. systems that seem suspiciously nice. If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.

     

    In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world. And some of them have gotten to play around with the versions of this technology that haven’t yet been sanitized for public consumption — the real, unmasked Shoggoths.

     

    That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)

     

    And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.

     
