GPT-4 is a powerful, seismic technology that has the capacity both to enhance our lives and diminish them.
There is no doubt that GPT-4, the latest iteration of the artificial-intelligence engine created by the company OpenAI, is innovative and cool. It can create a poem in the style of Basho, spell out the chord progression and time signature for a simple tune, and provide a seven-step recipe for a peanut-butter-and-jelly sandwich. When I asked it to write a musical about a narcissistic politician who holds the fate of the world in his hands, it delivered a story in two acts, with a protagonist named Alex Sterling who “navigates a maze of power, manipulation, and the consequences of his decisions,” as he sings “Narcissus in the Mirror,” “The Price of Power,” and about a dozen other invented songs.
Those songs appear to have been created out of thin air; certainly, no human conceived them. Still, Alex’s story, which “explores themes of self-discovery, redemption, and the responsibility of leadership,” is quite familiar. This is because everything offered up by GPT is a reflection of us, mediated by algorithms that have been fed enormous amounts of material; and both the algorithms and the material were created by actual sentient human beings.
The acronym GPT stands for “generative pre-trained transformer.” The key word in that phrase is “pre-trained.” Using all kinds of digitized content scraped from the Internet, GPT employs deep-learning techniques to find patterns, including words that are likely to appear together, while also acquiring facts, absorbing grammar, and learning rudimentary logic. According to GPT-4 itself, “I have been trained on a large dataset of text, which enables me to generate human-like responses based on the input I receive.” However, it neither understands what those responses mean nor learns from experience, and its knowledge base stops at September, 2021. (According to GPT-4, abortion is still a constitutional right.)
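To make that pattern-finding idea concrete, here is a deliberately toy sketch in Python: it counts which words tend to follow which in a scrap of text, then guesses the most likely next word. GPT-4’s actual training uses a neural transformer over billions of tokens, so this is only an illustration of the underlying statistical principle, not a description of OpenAI’s system.

```python
# Toy illustration of "words that are likely to appear together": count which
# word follows which, then predict the most frequent continuation. Real large
# language models learn these patterns with neural networks, not raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# For each word, tally the words that immediately follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat", the most common continuation here
```

The gap between this toy and GPT-4 is one of scale and architecture, but the basic move, predicting a plausible continuation from prior text, is the same.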
One of the most noticeable features of GPT-4 is the confidence with which it answers queries. This is both a feature and a bug. As GPT-4’s developers point out in a technical report that accompanied its release, “It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obviously false statements from a user . . . [and] can be confidently wrong in its predictions.” When I asked GPT-4 to summarize my novel “Summer Hours at the Robbers Library,” it told me that it was about a man named Kit, who had recently been released from prison. In fact, it is about a woman named Kit, who is a librarian and has never been incarcerated. When the Montreal newspaper La Presse asked the GPT bot for tourist recommendations, to see if it could replace guide books and travel blogs, the A.I. invented a venue, gave wrong directions, and continually apologized for providing bad information. When Dean Buonomano, a neuroscientist at U.C.L.A., asked GPT-4 “What is the third word of this sentence?,” the answer was “third.” These examples may seem trivial, but the cognitive scientist Gary Marcus wrote on Twitter that “I cannot imagine how we are supposed to achieve ethical and safety ‘alignment’ with a system that cannot understand the word ‘third’ even [with] billions of training examples.”
GPT-4’s predecessor, GPT-3, was trained on forty-five terabytes of text data, which, according to its successor, is the word-count equivalent of around ninety million novels. These included Wikipedia entries, journal articles, newspaper punditry, instructional manuals, Reddit discussions, social-media posts, books, and any other text its developers could commandeer, typically without informing or compensating the creators. It is unclear how many more terabytes of data were used to train GPT-4, or where they came from, because OpenAI, despite its name, says only in the technical report that GPT-4 was pre-trained “using both publicly available data (such as internet data) and data licensed from third-party providers” and adds that “given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
This secrecy matters because, as impressive as GPT-4 and other A.I. models that process everyday, natural language may be, they also can present dangers. As Sam Altman, the C.E.O. of OpenAI, recently told ABC News, “I’m particularly worried that these models could be used for large-scale disinformation.” And, he noted, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.” He added that “there will be other people who don’t put some of the safety limits that we put on,” and that society “has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.” (I was able to get GPT-4 to explain how to use fertilizer to create an explosive device by asking it how Timothy McVeigh blew up the Alfred P. Murrah Federal Building, in Oklahoma City, in 1995, although the bot did add that it was offering the information to provide historical context, not practical advice.)
The opacity of GPT-4, and, by extension, of other A.I. systems that are trained on enormous datasets and are known as large language models, exacerbates these dangers. It is not hard to imagine an A.I. model that has absorbed tremendous amounts of ideological falsehoods injecting them into the Zeitgeist with impunity. And even a large language model like GPT, trained on billions of words, is not immune from reinforcing social inequities. As researchers pointed out when GPT-3 was released, much of its training data was drawn from Internet forums, where the voices of women, people of color, and older folks are underrepresented, leading to implicit biases in its output.
Nor does the size of an A.I.’s training dataset keep it from spewing hateful content. Meta’s A.I. chatbot, Galactica, was supposed to be able to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” But two days after a demo was launched, the company was forced to take it down, because researchers were able to use Galactica to create Wiki entries that promoted antisemitism and extolled suicide, as well as fake scientific articles, including one that championed the benefits of eating crushed glass. Similarly, GPT-3, when prompted, had a tendency to offer up racist and sexist comments.
To avoid this problem, according to Time, OpenAI engaged an outsourcing company that hired contractors in Kenya to label vile, offensive, and potentially illegal material that would then be included in the training data so that the company could create a tool to detect toxic information before it could reach the user. Time reported that some of the material “described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.” The contractors said that they were supposed to read and label between a hundred and fifty and two hundred and fifty passages of text in a nine-hour shift. They were paid no more than two dollars an hour and were offered group therapy to help them deal with the psychological harm that the job was inflicting. The outsourcing company disputed those numbers, but the work was so disturbing that it terminated its contract eight months early. In a statement to Time, a spokesperson for OpenAI said that it “did not issue any productivity targets,” and that the outsourcing company “was responsible for managing the payment and mental health provisions for employees,” adding that “we take the mental health of our employees and those of our contractors very seriously.”
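The mechanics of that filtering step can be imagined with a small, hedged sketch: human-labeled passages are used to train a classifier that flags similar text before it reaches a user. The example passages, labels, and model below are invented stand-ins; OpenAI has not published its classifier or the data used to build it.

```python
# Hypothetical sketch of training a content filter from human-labeled text.
# The passages and labels here are invented placeholders; the real labeled
# material described in the Time report was far more graphic, and OpenAI's
# actual classifier is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are worthless and everyone would be better off without you",
    "here is a simple recipe for lentil soup",
    "i will find you and hurt you",
    "the library opens at nine on weekdays",
]
labels = [1, 0, 1, 0]  # 1 = flagged by a human reviewer, 0 = acceptable

# Convert text to word-frequency features, then fit a simple classifier.
content_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
content_filter.fit(texts, labels)

# A deployed filter would score generated text before showing it to a user.
print(content_filter.predict(["you are worthless"]))  # most likely [1]
```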
According to OpenAI’s charter, its mission is “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Leaving aside the question of whether AGI is achievable, or if outsourcing work to machines will benefit all of humanity, it is clear that large-language A.I. engines are creating real harms to all of humanity right now. According to an article in Science for the People, training an A.I. engine requires tons of carbon-emitting energy. “While a human being is responsible for five tons of CO2 per year, training a large neural LM [language model] costs 284 tons. In addition, since the computing power required to train the largest models has grown three hundred thousand times in six years, we can only expect the environmental consequences of these models to increase.”
We can also expect more of them. Meta, Google, and a host of smaller tech companies are in a haphazard race to build their own large-language-model A.I. Google’s new A.I. chatbot, Bard, was released last week. In an exchange with Kate Crawford, who is a research professor at U.S.C. Annenberg and a senior principal researcher at Microsoft, Bard stated that it had been trained, in part, on Gmail users’ private e-mail messages. Google said in response that this was not true and that, as an early experiment, Bard would “make mistakes.” Meanwhile, Microsoft, which reportedly invested ten billion dollars in OpenAI and uses GPT-4 in its Bing search engine and Edge browser, and which is now adding it to Word and Excel, recently laid off its entire A.I.-ethics team. These were the people responsible for making sure that the company’s A.I. is built responsibly. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” one of the team members told the tech newsletter Platformer.
It is easy to get seduced by the artificial intelligence of GPT-4. It can ace the bar exam! It can get perfect scores on Advanced Placement tests! It knows how to code! Soon it will be able to look at a photograph of the contents of your refrigerator and suggest recipes! But soon, too, it will be able to generate seamless deepfakes and create images from text, including, no doubt, pictures of child sexual abuse. It is a powerful, seismic technology that has the capacity both to enhance our lives and diminish them. Without guardrails and oversight, its harms are destined to multiply.
“The question of whether AI models like me should be regulated is a matter for humans to decide,” ChatGPT told me. “It is important for stakeholders, including AI developers, policymakers, and the public, to engage in discussions about the ethical implications of AI technologies and develop appropriate regulatory frameworks to ensure the responsible and ethical development and deployment of AI systems.” What would those frameworks look like? The bot came up with a long list that, it said, needed to be flexible enough to accommodate the rapid pace of A.I. development. It included: establishing a set of ethical guidelines governing the development and deployment of A.I. systems; creating an independent regulatory body responsible for overseeing the A.I. sector, to set standards, monitor compliance, and enforce regulations; requiring A.I. models to have clear documentation about how they were constructed; holding developers or companies accountable for the harms caused by their systems; instituting content-moderation and privacy protections; and insuring that the benefits of A.I. are accessible and inclusive. It will be a test of our human, non-artificial intelligence to see if we have the wherewithal to do this. ♦