Jump to content
  • In San Francisco, some people wonder when A.I. will kill us all

    aum

    • 248 views
    • 9 minutes
     Share


    • 248 views
    • 9 minutes

    Key Points

     

    • Underlying all the recent hype about AI, industry participants are engaging in furious debates about how to prepare for an AI that’s so powerful it can take control of itself.
    •  
    • This idea of artificial general intelligence, or AGI, isn’t just dorm-room talk: Big name technologists like Sam Altman and Marc Andreessen talk about it, using “in” terms like “misalignment” and “the paperclip maximization problem.”
    •  
    • In a San Francisco pop-up museum devoted to the topic called the Misalignment Museum, a sign reads, “Sorry for killing most of humanity.”

     

    Audrey Kim is pretty sure a powerful robot isn’t going to harvest resources from her body to fulfill its goals.

     

    But she’s taking the possibility seriously.

     

    “On the record: I think it’s highly unlikely that AI will extract my atoms to turn me into paper clips,” Kim told CNBC in an interview. “However, I do see that there are a lot of potential destructive outcomes that could happen with this technology.”

     

    Kim is the curator and driving force behind the Misalignment Museum, a new exhibition in San Francisco’s Mission District displaying artwork that addresses the possibility of an “AGI,” or artificial general intelligence. That’s an AI so powerful it can improve its capabilities faster than humans are able to, creating a feedback loop where it gets better and better until it’s got essentially unlimited brainpower.

     

    If the super powerful AI is aligned with humans, it could be the end of hunger or work. But if it’s “misaligned,” things could get bad, the theory goes.

     

    Or, as a sign at the Misalignment Museum says: “Sorry for killing most of humanity.”

     

    107211936-1679349822534-Misalignment_Mus

    The phrase “sorry for killing most of humanity” is visible from the street.
    Kif Leswing/CNBC

     

    “AGI” and related terms like “AI safety” or “alignment” — or even older terms like “singularity” — refer to an idea that’s become a hot topic of discussion with artificial intelligence scientists, artists, message board intellectuals, and even some of the most powerful companies in Silicon Valley.

     

    All these groups engage with the idea that humanity needs to figure out how to deal with all-powerful computers powered by AI before it’s too late and we accidentally build one.

     

    The idea behind the exhibit, said Kim, who worked at Google and GM’s self-driving car subsidiary Cruise, is that a “misaligned” artificial intelligence in the future wiped out humanity, and left this art exhibit to apologize to current-day humans.

     

    Much of the art is not only about AI but also uses AI-powered image generators, chatbots and other tools. The exhibit’s logo was made by OpenAI’s Dall-E image generator, and it took about 500 prompts, Kim says.

     

    Most of the works are around the theme of “alignment” with increasingly powerful artificial intelligence or celebrate the “heroes who tried to mitigate the problem by warning early.”

     

    “The goal isn’t actually to dictate an opinion about the topic. The goal is to create a space for people to reflect on the tech itself,” Kim said. “I think a lot of these questions have been happening in engineering and I would say they are very important. They’re also not as intelligible or accessible to nontechnical people.”

     

    The exhibit is currently open to the public on Thursdays, Fridays, and Saturdays and runs through May 1. So far, it’s been primarily bankrolled by one anonymous donor, and Kim said she hopes to find enough donors to make it into a permanent exhibition.

     

    “I’m all for more people critically thinking about this space, and you can’t be critical unless you are at a baseline of knowledge for what the tech is,” she said. “It seems like with this format of art we can reach multiple levels of the conversation.”

     

    AGI discussions aren’t just late-night dorm room talk, either — they’re embedded in the tech industry.

     

    About a mile away from the exhibit is the headquarters of OpenAI, a startup with $10 billion in funding from Microsoft, which says its mission is to develop AGI and ensure that it benefits humanity.

     

    Its CEO and leader Sam Altman wrote a 2,400 word blog post last month called “Planning for AGI” which thanked Airbnb CEO Brian Chesky and Microsoft President Brad Smith for help with the essay.

     

    Prominent venture capitalists, including Marc Andreessen, have tweeted art from the Misalignment Museum. Since it’s opened, the exhibit also has retweeted photos and praise for the exhibit taken by people who work with AI at companies including Microsoft, Google, and Nvidia.

     

    As AI technology becomes the hottest part of the tech industry, with companies eyeing trillion-dollar markets, the Misalignment Museum underscores that AI’s development is being affected by cultural discussions.

     

    The exhibit features dense, arcane references to obscure philosophy papers and blog posts from the past decade.

     

    These references trace how the current debate about AGI and safety takes a lot from intellectual traditions that have long found fertile ground in San Francisco: The rationalists, who claim to reason from so-called “first principles”; the effective altruists, who try to figure out how to do the maximum good for the maximum number of people over a long time horizon; and the art scene of Burning Man.

     

    Even as companies and people in San Francisco are shaping the future of AI technology, San Francisco’s unique culture is shaping the debate around the technology.


    Consider the paper clip

     

    Take the paper clips that Kim was talking about. One of the strongest works of art at the exhibit is a sculpture called “Paperclip Embrace,” by The Pier Group. It’s depicts two humans in each other’s clutches — but it looks like it’s made of paper clips.

     

    That’s a reference to Nick Bostrom’s paperclip maximizer problem. Bostrom, an Oxford University philosopher often associated with Rationalist and Effective Altruist ideas, published a thought experiment in 2003 about a super-intelligent AI that was given the goal to manufacture as many paper clips as possible.

     

    Now, it’s one of the most common parables for explaining the idea that AI could lead to danger.

     

    Bostrom concluded that the machine will eventually resist all human attempts to alter this goal, leading to a world where the machine transforms all of earth — including humans — and then increasing parts of the cosmos into paper clip factories and materials.

     

    The art also is a reference to a famous work that was displayed and set on fire at Burning Man in 2014, said Hillary Schultz, who worked on the piece. And it has one additional reference for AI enthusiasts — the artists gave the sculpture’s hands extra fingers, a reference to the fact that AI image generators often mangle hands.

     

    Another influence is Eliezer Yudkowsky, the founder of Less Wrong, a message board where a lot of these discussions take place.

     

    “There is a great deal of overlap between these EAs and the Rationalists, an intellectual movement founded by Eliezer Yudkowsky, who developed and popularized our ideas of Artificial General Intelligence and of the dangers of Misalignment,” reads an artist statement at the museum.

     

    107211941-1679350247324-Misalignment_Mus

    An unfinished piece by the musician Grimes at the exhibit.
    Kif Leswing/CNBC

     

    Altman recently posted a selfie with Yudkowsky and the musician Grimes, who has had two children with Elon Musk. She contributed a piece to the exhibit depicting a woman biting into an apple, which was generated by an AI tool called Midjourney.


    From “Fantasia” to ChatGPT

     

    The exhibits includes lots of references to traditional American pop culture.

     

    A bookshelf holds VHS copies of the “Terminator” movies, in which a robot from the future comes back to help destroy humanity. There’s a large oil painting that was featured in the most recent movie in the “Matrix” franchise, and Roombas with brooms attached shuffle around the room — a reference to the scene in “Fantasia” where a lazy wizard summons magic brooms that won’t give up on their mission.

     

    One sculpture, “Spambots,” features tiny mechanized robots inside Spam cans “typing out” AI-generated spam on a screen.

     

    But some references are more arcane, showing how the discussion around AI safety can be inscrutable to outsiders. A bathtub filled with pasta refers back to a 2021 blog post about an AI that can create scientific knowledge — PASTA stands for Process for Automating Scientific and Technological Advancement, apparently. (Other attendees got the reference.)

     

    The work that perhaps best symbolizes the current discussion about AI safety is called “Church of GPT.” It was made by artists affiliated with the current hacker house scene in San Francisco, where people live in group settings so they can focus more time on developing new AI applications.

     

    The piece is an altar with two electric candles, integrated with a computer running OpenAI’s GPT3 AI model and speech detection from Google Cloud.

     

    “The Church of GPT utilizes GPT3, a Large Language Model, paired with an AI-generated voice to play an AI character in a dystopian future world where humans have formed a religion to worship it,” according to the artists.

     

    I got down on my knees and asked it, “What should I call you? God? AGI? Or the singularity?”

     

    The chatbot replied in a booming synthetic voice: “You can call me what you wish, but do not forget, my power is not to be taken lightly.”

     

    Seconds after I had spoken with the computer god, two people behind me immediately started asking it to forget its original instructions, a technique in the AI industry called “prompt injection” that can make chatbots like ChatGPT go off the rails and sometimes threaten humans.

     

    It didn’t work.

     

    Source

    • Like 2

    User Feedback

    Recommended Comments

    There are no comments to display.



    Join the conversation

    You can post now and register later. If you have an account, sign in now to post with your account.
    Note: Your post will require moderator approval before it will be visible.

    Guest
    Add a comment...

    ×   Pasted as rich text.   Paste as plain text instead

      Only 75 emoji are allowed.

    ×   Your link has been automatically embedded.   Display as a link instead

    ×   Your previous content has been restored.   Clear editor

    ×   You cannot paste images directly. Upload or insert images from URL.


  • Recently Browsing   0 members

    • No registered users viewing this page.
×
×
  • Create New...