  • Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control it


    The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.


    AI pioneer Geoffrey Hinton, a 75-year-old computer scientist known as “the Godfather of AI,” has made waves this week after resigning from Google to warn that AI could soon surpass humans in intelligence and learn how to destroy humanity on its own.


    But Hinton’s warnings, while dire, are missing the point, says Meredith Whittaker, a prominent AI researcher who was pushed out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones. Now the president of the Signal Foundation, Whittaker tells Fast Company why Hinton’s alarmism is a distraction from more pressing threats, and how workers can stand up against technology’s harms from within. (Hinton did not respond to Fast Company’s request for comment.)


    This interview has been condensed and edited for clarity.


    Fast Company: Let’s start with your reaction to Geoffrey Hinton’s big media tour around leaving Google to warn about AI. What are you making of it so far?


    Meredith Whittaker: It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence.


    So, there’s a bit of have-your-cake-and-eat-it-too: You get the glow of your penitence, but I didn’t see any solidarity or any action when there were people really trying to organize and do something about the harms that are happening now.


    FC: You started organizing within Google in 2017 to oppose Project Maven, a contract the company signed to build machine vision technology for U.S. military drones. Did you anticipate being forced out for speaking up?


    MW: I didn’t plan it with that in mind. But after our letter opposing Project Maven blew up, I remember realizing at that moment, based on my understanding of history, that I would be pushed out eventually. You don’t mess with the money like that and not get pushed out.


    FC: Did you know Geoffrey Hinton when you were at Google?


    MW: Not well, but we were in the same conferences, in the same room sometimes.


    FC: And did he express any kind of support when you were organizing?


    MW: I didn’t see him show up for any of the rallies or the actions or bottom-line any work. And then when it was getting dicey, when Google started taking the gloves off and hiring a union-busting firm, I didn’t see him come out and support.


    But the effectiveness of raising concerns hinges on the ability of people who raise them to be safe. So, if you don’t come out when me and Meg are being fired, if you don’t come out when others are being retaliated against, when research is being suppressed, then you are tacitly endorsing an environment that punishes people for raising concerns. [Editor’s note: While Whittaker says she was forced out of her job by Google, the company claims she opted to leave.]


    FC: There’s also a pattern that you’ve been pointing out: A lot of the people being punished for speaking out have been women.


    MW: Women, and particularly women who aren’t white. And that isn’t just at Google. Across the space where people are discussing artificial intelligence, the folks who have come out earliest and with the most grounded and materially specific concerns have generally been women, particularly Black women and women of color. I think it’s just notable who seems easy to ignore, and whose concerns are elevated, as in: “Well, finally we’re hearing it from the father, so it must be true.”


    FC: On CNN recently, Hinton downplayed the concerns of Timnit Gebru—whom Google fired in 2020 for refusing to withdraw a paper about AI’s harms to marginalized people—saying her ideas were not as “existentially serious” as his own. What do you make of that?


    MW: I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.


    What I hear in that is, “Those aren’t existential to me. I have millions of dollars, I am invested in many, many AI startups, and none of this affects my existence. But what could affect my existence is if a sci-fi fantasy came to life and AI were actually super intelligent, and suddenly men like me would not be the most powerful entities in the world, and that would affect my business.”


    FC: So, we shouldn’t be worried that AI will come to life and wipe out humanity?


    MW: I don’t think there’s any evidence that large machine learning models—which rely on huge amounts of surveillance data and the concentrated computational infrastructure that only a handful of corporations control—have the spark of consciousness.


    We can still unplug the servers. The data centers can flood as the climate encroaches, we can run out of water to cool them, the surveillance pipelines can melt as the climate becomes more erratic and less hospitable.


    I think we need to dig into what is happening here, which is that, when faced with a system that presents itself as a listening, eager interlocutor that’s hearing us and responding to us, we seem to fall into a kind of trance and, almost counterfactually, engage in some kind of wish fulfillment: thinking that they’re human, and that there’s someone there listening to us. It’s like when you’re a kid, and you’re telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.


    FC: What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people.


    MW: Yeah, I think it’s distracting us from what’s real on the ground and much harder to solve than war-game hypotheticals about a thing that is largely kind of made up. And particularly, it’s distracting us from the fact that these are technologies controlled by a handful of corporations, who will ultimately make the decisions about what technologies are made, what they do, and who they serve. And if we follow these corporations’ interests, we have a pretty good sense of who will use them, how they will be used, and where we can resist to prevent the actual harms that are occurring today and those likely to occur.


    FC: Geoffrey Hinton also said on CNN, “I think it’s easier to voice your concerns if you leave the company first.”


    MW: If personal ease is your metric, then you’re not wrong. But I don’t think it’s more effective. This is one of the reasons that I and so many others turned to labor organizing. Because there’s a lot of power in being able to withhold your labor collectively, in joining together as the people who ultimately make these companies function or not, and in saying, “We’re not going to do this.” Without people doing it, it doesn’t happen.

