How Artificial Intelligence Got Its Name


humble3d

It’s a little-known fact that part of Wikipedia’s name comes from a bus in Hawaii.


In 1995, six years before the famous online encyclopedia launched, a computer programmer named Ward Cunningham was at Honolulu International Airport on his first visit to the islands.


He was in the middle of developing a new kind of website to help software designers collaborate, one that users themselves could rapidly edit from a web browser.


It was a striking innovation at the time. But what to call it?


“I wanted an unusual word to name what was an unusual technology,” Cunningham told a curious lexicographer in 2003.


“I learned the word wiki … when I was directed to the airport shuttle, called the Wiki Wiki Bus.”


Wiki means quick, and Hawaiian words are doubled for emphasis: the very quick bus.


With that, Cunningham’s software had the distinctive sound he was looking for: WikiWikiWeb.


Wikipedia, whose development Cunningham wasn’t involved with, is one among countless websites based on his work.


The second half of its name comes from the word encyclopedia, with pedia being the Greek term for knowledge: “quick knowledge.”


Yet now the site is so successful that its fame has eclipsed its origins—along with the fact that a chance visit to an island gifted the digital age one of its most iconic terms.


I love delving into the origins of new words—especially around technology.


In a digital age, technology can feel like a natural order of things, arising for its own reasons.


Yet every technology is embedded in a particular history and moment.


For me, etymology emphasizes the contingency of things I might otherwise take for granted.


Without a sense of these all-too-human stories, I’m unable to see our creations for what they really are: marvelous, imperfect extensions of human will, enmeshed within all manner of biases and unintended consequences.


I give talks about technology to teenagers, and often use Wikipedia as a prompt for discussion.


Let’s find and improve an article to make Wikipedia better, I suggest, and in the process, think about what “better” means.


My audience’s reaction is almost always the same. What do I mean by improving an article?


Aren’t they all written by experts?


No, I say. That’s the whole point of a wiki: The users themselves write it, which means no page is ever the last word.


There are no final answers, and no ownership beyond the community itself.


Some contributors to Ward Cunningham’s original wiki are less than complimentary towards Wikipedia precisely because they see it betraying this intention.


By virtue of its success, they argue, Wikipedia encourages an illusion of impartiality and permanence.


Its pages can become “self-contained, self-appointed ‘truths’” that close down debate—or restrict it to a self-selected editorial caste.


A larger momentum lies behind such worries: the passage of time itself.


None of my teenage audience remembers a time before Wikipedia, and Cunningham’s original website is older than every one of them.


Like much of the software and hardware in their lives, Wikipedia is simply a part of the landscape—something people inhabit, adapting themselves to its contours.


Digital technology has an uneasy relationship with time.


Old formats and platforms fall into disuse fast; newer is by definition better, faster, brighter.


Yet decades-old decisions continue to have an influence.


If, for example, you want to understand the design of a computer mouse, you need to dive back into a 1965 NASA paper exploring different possible control methods—including a kind of treadle moved with the knees, a “Grafacon” tablet and stylus, a light pen, and a joystick.


In doing this, you’ll find yourself transported to a time when it was by no means obvious how people might best interact with a computer, or even what interacting with a computer meant.


Sample quote from the original paper:


“Although the knee control was only primitively developed at the time it was tested, it ranked high in both speed and accuracy, and seems very promising.”


Etymology insists that everything was once new.


Who would imagine using their knees to control a computer today in parallel to typing, like working at an old-fashioned sewing machine?


Tracing the thread of a word is a beautiful counterbalance to digital culture’s often-relentless obsession with the present, sketching how each iteration of software and hardware builds off older ideas and values.


Tune into a wireless network, and you’re entering the same verbal space as the wireless telegraph—developed in the 1890s.


Then as now, international agreements needed to be established for successful networking, together with regulation, licensing, and the aggressive assertion of trademarked standards.


To communicate electronically is to participate in a vast, negotiated consensus, built and maintained on the basis of decades of accumulated assumptions.


Similarly, to speak about technology is to assume: It demands shared notions of sense and usage.


Yet there are some terms that deserve more skepticism than most. Sixty years ago, a group of scientists drew up a conference agenda aimed at predicting and shaping the future—at establishing a field they believed would transform the world.


Their mission was to use the young science of digital computation to recreate and exceed the workings of the human mind.


Their chosen title?


The Dartmouth Summer Research Project on Artificial Intelligence.


The assumptions of the Dartmouth Conference, set out in a 1955 proposal, were explicitly immodest:


“[T]he study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”


Yet today, the very word “intelligence” continues to lie somewhere between a millstone and a straw man for this field.


From self-driving vehicles to facial recognition, from mastery of chess and Go to translation based on processing billions of samples, smarter and smarter automation is a source of anxious fascination.


Yet the very words that populate each headline take us further away from seeing machines as they are—not so much a mirror of human intellect as something utterly unlike us, and all the more potent for this.


As Alan Turing himself put it in his 1950 paper on computing machinery and intelligence, “we can only see a short distance ahead, but we can see plenty there that needs to be done.”


If we are to face the future honestly, we need both a clear sense of where we are coming from—and an accurate description of what is unfolding under our noses.


AI, such as it is, sprawls across a host of emerging disciplines for which more precise labels exist: machine learning, symbolic systems, big data, supervised learning, neural networks.


Yet a 60-year-old analogy fossilized in words obfuscates the debate around most of these developments—while feeding unhelpful fantasies in place of practical knowledge.


Even though we can get trapped in words, today remains an age of extraordinary linguistic fertility—one marked not only by mass literacy, itself a recent historical phenomenon, but by mass participation in written and recorded discourse.


Through the screens of billions of cell phones, tablets, laptops, and desktop computers, humanity is saturated with self-expression as never before.


All these words tell and reveal more than we know, if we care to interrogate them.  


Etymology insists that everything was once new—although born into a negotiation with all that came before.


In a small way, the stories behind new words challenge us to think again: to recover the shock and strangeness of each addition to the world.


I don’t believe that knowing the origin of Wikipedia especially helps us understand its success, or teaches us how to use it better.


But it does offer a reminder that things weren’t always like they are today—followed, hopefully, by the insight that things won’t always be like they are today, either.

http://www.theatlantic.com/technology/archive/2016/08/how-artificial-intelligence-got-its-name/495050/

 
