Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely


    "Most people thought it was way off. And I thought it was way off."

    According to The New York Times, AI pioneer Dr. Geoffrey Hinton has resigned from Google so he can "speak freely" about potential risks posed by AI. Hinton, who helped create some of the fundamental technology behind today's generative AI systems, fears that the tech industry's drive to develop AI products could result in dangerous consequences—from misinformation to job loss or even a threat to humanity.


    "Look at how it was five years ago and how it is now," the Times quoted Hinton as saying. "Take the difference and propagate it forwards. That’s scary."


    Hinton's résumé in artificial intelligence extends back to 1972, and his accomplishments have influenced current practices in generative AI. In 1986, Hinton, David Rumelhart, and Ronald J. Williams popularized backpropagation, a key technique for training neural networks that is still used in today's generative AI models. In 2012, Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet, commonly hailed as a breakthrough in machine vision and deep learning that arguably kickstarted our current era of generative AI. In 2018, Hinton won the Turing Award, which some call the "Nobel Prize of Computing," along with Yoshua Bengio and Yann LeCun.


    Hinton joined Google in 2013 after the company acquired his startup, DNNresearch. His departure a decade later marks a notable moment for the tech industry, which is simultaneously hyping and warning about the potential impact of increasingly sophisticated automation systems. For instance, the release of OpenAI's GPT-4 in March led a group of tech researchers to sign an open letter calling for a six-month moratorium on developing new AI systems "more powerful" than GPT-4. However, some notable critics think such fears are overblown or misplaced.


    Hinton did not sign that open letter, but he believes that intense competition between tech giants like Google and Microsoft could fuel a global AI race that can only be stopped through international regulation. He emphasizes the need for collaboration among leading scientists to keep AI from becoming uncontrollable.


    "I don’t think [researchers] should scale this up more until they have understood whether they can control it," he told the Times.


    Hinton is also worried about a proliferation of false information in photos, videos, and text that could make it difficult for people to discern what is true. He fears that AI could upend the job market as well, at first complementing human workers but eventually replacing workers in routine-task roles such as paralegals, personal assistants, and translators.


    Hinton's long-term worry is that future AI systems could threaten humanity as they learn unexpected behavior from vast amounts of data. "The idea that this stuff could actually get smarter than people—a few people believed that," he told the Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."


    Hinton's warnings feel notable because, at one point, he was one of the field's biggest proponents. In a 2015 Toronto Star profile, Hinton expressed enthusiasm for the future of AI and said, "I don’t think I’ll ever retire." But today, the Times says that Hinton's worry about the future of AI has driven him to partially regret his life's work. "I console myself with the normal excuse: If I hadn’t done it, somebody else would have," he said.


    Some critics have cast a skeptical eye on Hinton's resignation and regrets. In response to The New York Times piece, Dr. Sasha Luccioni of Hugging Face tweeted, "People are referring to this to mean: look, AI is becoming so dangerous, even its pioneers are quitting. I see it as: The people who have caused the problem are now jumping ship."


    On Monday, Hinton clarified his motivations for leaving Google. He wrote in a tweet: "In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."
