
These are the two new books you need to read about AI



They explore the people who make AI work—for good or ill

 


Illustration: Noma Bar

 

It is not only the business world that is excited about generative artificial intelligence (AI). So, too, are publishers. In the past 12 months at least 100 books about AI have been published in America, reckons Thad McIlroy, a contributing editor at Publishers Weekly, and many multiples of that have been self-published. At the top of the pile are two new titles, which represent opposing sides of a noisy debate: whether enthusiasm about AI’s benefits should outweigh concerns about its downsides.


The darker side of the shiny AI era is the subject of “Feeding the Machine” by three academics at the University of Essex and Oxford University. Automation is born of exploitation, they contend. Today’s glowing data centres that run AI systems are akin to the soot-covered factories of the 19th century. Behind the algorithms are humans—yes, lavishly paid engineers, but also an army of workers who make the systems hum, from those who review the underlying data that are fed into the software to those who check its answers.


The authors delve into seven archetypal jobs in the AI supply chain. Online content moderators, often in poor countries, assess whether material on platforms such as Facebook is acceptable under the terms of service, which helps train automated systems. Data-centre technicians are always on call to ensure the infrastructure is up and running. That infrastructure guzzles growing amounts of electricity: a ChatGPT prompt consumes around ten times as much energy as a Google search.


Readers meet a voice actor who has to compete with an audio-synthesised version of herself for jobs and a machine-learning engineer who struggles with the ethical implications of her work—perpetuating bias, threatening jobs and potentially posing an existential risk to humanity. But the most interesting character study is of Anita, a Ugandan data annotator, who spends mind-numbing ten-hour days working in low lighting to prevent eye strain. A university graduate, she finds that her entrée into a glamorous career in tech amounts to watching a constant stream of video footage of car drivers, looking for evidence of driver fatigue—such as slumping shoulders or drooping eyes—and labelling it, all for around $1.20 an hour.


There is much to respect about the authors’ critical assessment of AI, yet also much to challenge. It is true that data-labelling is dreary, and that content-moderation can require mental-health counselling. But the authors grossly overstate their case. “The AI industry is just the next phase in a long journey that stretches back to the age of colonialism,” they argue. “The solution is to dismantle the machine and build something else in its place.” This extremism is ridiculous, considering that AI can automate expensive services like medical diagnoses, energy distribution and logistics—to name just three—which can help people in poor countries.


How AI gets built is only one facet of the technology. Another is how it gets used. A practical and more positive way to think about the interaction of people and AI is provided by Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School. He focuses on how people should learn to use generative AI services like ChatGPT. The technology is an “alien intelligence”, he says, that can augment humans’ own. But people need to raise their game in order to get the most from it. In that respect, AI is literally a “co-intelligence”, as the book’s title stresses.


Mr Mollick introduces the idea of a “jagged frontier”: the boundary between what AI can and cannot do. It is jagged because it is not clear where humans are better, and the dividing line is always changing. For instance, prompting a large language model (LLM) for a sonnet of exactly 50 words may result in a beautiful text returned in just a few seconds with, alas, 48 words, not 50. This is because the system is designed to produce a simulacrum of what it has seen in its training data, not to act as a counter or calculator. In this and a myriad of other tasks, AI is weird. When it fails, it does so in ways that people would not.


As a result, people need to experiment with AI to learn its capabilities and flaws. Mr Mollick advocates four rules. First, “always invite AI to the table”; that is, try to find a way to use it in every task. Second, “be the human in the loop”—look for ways it can help you, rather than replace you. Third, give the AI a persona and prod it. Oddly, LLMs work better when they are asked to adopt a persona, such as “you are a corporate-strategy expert with a flair for originality”. Fourth, assume this is the worst AI you will ever use—so do not be dismissive when it fails. Systems will only get better.
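
For readers who want to try rules two and three themselves, a minimal sketch follows. It assumes the OpenAI Python client (version 1.0 or later) and an API key in the environment; the model name and prompts are illustrative, not examples prescribed by the book.

```python
# A minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY set in the environment; model, persona and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you have access to
    messages=[
        # Rule three: give the AI a persona and prod it.
        {"role": "system",
         "content": "You are a corporate-strategy expert with a flair for originality."},
        {"role": "user",
         "content": "Write a sonnet of exactly 50 words about quarterly earnings."},
    ],
)

sonnet = response.choices[0].message.content

# Rule two: be the human in the loop. The model imitates counting rather than
# performing it, so check the 50-word constraint yourself.
word_count = len(sonnet.split())
print(sonnet)
print(f"\n{word_count} words")
if word_count != 50:
    print("Off target; revise the prompt or trim the text by hand.")
```

Running it makes the jagged frontier tangible: the sonnet usually arrives in seconds, but the word count is often slightly off.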


The precepts force people to develop new skills to work with the machine, just as humans had to enhance their numeracy to work with calculators and spreadsheets, even as those tools made many things easier. “The strengths and weaknesses of AI may not mirror your own, and that’s an asset,” Mr Mollick writes. “This diversity in thought and approach can lead to innovative solutions and ideas that might never occur to a human mind.”


“Co-Intelligence” usefully brings data to bear on AI performance. LLMs score higher on creativity than most people, according to several studies in 2023 by researchers in America, Germany and Britain. AI also helps business people accomplish more tasks, work faster and improve the quality of their output, benefiting average workers most. For software developers, there was a 56% improvement on tasks, according to a study by Microsoft.


Yet AI’s usefulness presents a new problem: it lulls people into a dangerous complacency. When AI systems are very good, people tend to trust the output without fully scrutinising it. When the AI is good but not great, people are more attentive and add their own judgment, which improves performance. It is a reminder that in the AI age, humans are still needed—yet must become sharper still.


Amid AI hype in business, where companies say a lot but seem to do little, “Co-Intelligence” usefully notes that innovation is hard for organisations but easy for individuals. Hence, to see how AI will change business, do not look to chief executives’ statements but to the ordinary worker bees who quietly incorporate it into their everyday tasks. The revolution will be noticed only in hindsight.


So, gentle reader, did your correspondent use AI to write this review? Yes—it was entirely written by artificial intelligence. Every word of it. Just kidding. None of it was, actually. The reason is that writing is not just the output that readers consume but a process of reflection and intellectual discovery for the writer, ideally one that originates novel ideas rather than merely expressing existing ones. Yet Mr Mollick’s first rule was not disobeyed: an LLM was prompted to challenge the article’s points. (Sadly its response was so generic that a vituperative editor was needed instead.)


As AI becomes commonplace, people will be empowered as well as diminished by it. Whether humans will be the master craftsmen to their algorithmic assistants, or mere apprentices to AI masterminds, remains the question. It is not one ChatGPT can reliably answer.

 
