AI technology “can go quite wrong,” OpenAI CEO tells Senate

    Advanced AI systems should require government licenses, GPT-4 maker's CEO says.

    OpenAI CEO Sam Altman testified in the US Senate today about the potential dangers of artificial intelligence technology made by his company and others, and urged lawmakers to impose licensing requirements and other regulations on organizations that make advanced AI systems such as OpenAI's GPT-4.

     

    "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said. "For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."

     

    While Altman touted AI's benefits, he said that OpenAI is "quite concerned" about elections being affected by content generated by AI. "Given that we're going to face an election next year and these models are getting better, I think this is a significant area of concern... I do think some regulation would be quite wise on this topic," Altman said.

     

    Altman was speaking at a hearing held by the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law. Also testifying was IBM's chief privacy and trust officer, Christina Montgomery.

     

    "IBM urges Congress to adopt a precision regulation approach to AI," Montgomery said. "This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself." Montgomery said that Congress should clearly define the risks of AI and impose "different rules for different risks," with the strongest rules "applied to use cases with the greatest risks to people and society."

    AI tech “can go quite wrong”

    Several lawmakers commented on OpenAI and IBM's willingness to face new rules, with Sen. Dick Durbin (D-Ill.) saying it's remarkable that big companies came to the Senate to "plead with us to regulate them."

     

    Altman suggested that Congress form a new agency that licenses AI tech "above a certain scale of capabilities and could take that license away to ensure compliance with safety standards." Before an AI system is released to the public, there should be independent audits by "experts who can say the model is or isn't in compliance with these stated safety thresholds and these percentages on questions X or Y," he said.

     

    Altman said he is worried that the AI industry could "cause significant harm to the world."

     

    "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening."

     

    Altman said he doesn't think burdensome requirements should apply to companies and researchers whose models are much less advanced than OpenAI's. He suggested that Congress "define capability thresholds" and place AI models that can perform certain functions into the strict licensing regime.

     

    As examples, Altman said that licenses could be required for AI models "that can persuade, manipulate, influence a person's behavior, a person's beliefs," or "help create novel biological agents." Altman said it would be simpler to require licensing for any system that is above a certain threshold of computing power, but that he would prefer to draw the regulatory line based on specific capabilities.

     

    OpenAI consists of both nonprofit and for-profit entities. Altman said that OpenAI's GPT-4 model is "more likely to respond helpfully and truthfully and refuse harmful requests than any other model of similar capability," partly due to extensive pre-release testing and auditing:

     

    Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems. Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing.

     

    Altman also said that people should be able to opt out of having their personal data used for training AI models. OpenAI last month announced that ChatGPT users can now turn off chat history to prevent conversations from being used to train AI models.

    People shouldn’t be “tricked” into interacting with AI

    Montgomery pitched transparency requirements, saying that consumers should know when they're interacting with AI. "No person anywhere should be tricked into interacting with an AI system... the era of AI cannot be another era of move fast and break things," she said.

     

    She also said the US should quickly hold companies accountable for deploying AI "that disseminates misinformation on things like elections."

     

    Senators heard from Gary Marcus, an author who founded two AI and machine learning companies and is a professor emeritus of psychology and neural science at New York University. He said at today's hearing that AI can create persuasive lies and provide harmful medical advice. Marcus also criticized Microsoft for not immediately pulling the Sydney chatbot after it exhibited alarming behavior.

     

    "Sydney clearly had problems... I would have temporarily withdrawn it from the market and they didn't," Marcus said. "That was a wake-up call to me and a reminder that even if you have companies like OpenAI that is a nonprofit... other people can buy those companies and do what they like with them. Maybe we have a stable set of actors now, but the amount of power that these systems have to shape our views and lives is really significant, and that doesn't even get into the risks that someone might repurpose them deliberately for all kinds of bad purposes."

    FDA-like safety reviews proposed

AI-generated news articles were another problem Marcus discussed. "The quality of the overall news market is going to decline" as more news is written by AI, he said.

     

    Marcus said the US government probably needs "a cabinet-level organization" with technical expertise whose full-time job is overseeing AI. There should be Food and Drug Administration-like safety reviews prior to widespread deployment of AI systems, and reviews of products after they're released, he said.

     

But Marcus also warned senators about the risk of regulatory capture. "If we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens, we just keep out the little players because we put so [many] burdens [on them] that only the bigger players can do it," he said.

    Pausing AI development seen as unrealistic

    Some AI critics have called for a pause in development, and a nonprofit AI research group accused OpenAI of releasing GPT-4 without properly accounting for its risks. Altman said today that iterative releases are better because "going off to build a super powerful AI system in secret and then dropping it on the world all at once, I think, would not go well."

     

    "A big part of our strategy is while these systems are relatively weak and deeply imperfect to find ways to get people to have experience with them... and to figure out what we need to do to make it safer and better," Altman said.

     

    Sen. Richard Blumenthal (D-Conn.), chair of the subcommittee, said that "AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access."

     

    "There are places where the risk of AI is so extreme that we ought to impose restrictions or even ban their use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people's livelihoods," he said.

     

    There's no sense in trying to halt the development of AI, Blumenthal said. "The world won't wait, the rest of the global scientific community isn't going to pause. We have adversaries that are moving ahead, and sticking our heads in the sand is not the answer," Blumenthal said.

    Hawley skeptical of new federal agency

    Blumenthal supported the idea of an agency to regulate AI but warned that tech companies "will run circles around them" if a new agency isn't given sufficient funding and scientific expertise. Blumenthal noted that US companies usually oppose any new regulations. But in the case of AI makers, Blumenthal said, "I sense there is a willingness to participate here that is genuine and authentic."

     

    Sen. Josh Hawley (R-Mo.) said that agencies "usually get captured by the interests that they're supposed to regulate" and "get controlled by the people who they're supposed to be watching." He suggested that AI harms can be handled by class actions and other lawsuits, and that Congress can "create a federal right of action to allow private individuals who are harmed by this technology" to sue companies.

     

    Altman pointed out that people can already sue OpenAI over harms caused by technology "unless I'm really misunderstanding how things work." Marcus said that litigation isn't enough, noting that lawsuits can take a decade or more. Hawley's suggestion "would certainly make a lot of lawyers wealthy, but I think it would be too slow to affect a lot of the things we care about," Marcus said.

     

    AI's impact on jobs was also discussed at the hearing. IBM CEO Arvind Krishna recently revealed plans to pause hiring for about 7,800 jobs that could be replaced by AI systems.

     

    Montgomery said that "new jobs will be created, many more jobs will be transformed, and some jobs will be transitioned away." Altman told senators that "GPT-4 will entirely automate away some jobs and will create new ones that we believe will be much better."

     
