By now you’ve heard of the Turing Test. In a 1950 paper published in the philosophy journal Mind, Alan Turing proposed a thesis and a test: If someday a computer could answer questions in a manner indistinguishable from a human’s, then it must be “thinking.”
The test was built on deception and was “won” once a computer successfully passed as human. It goes like this: there is a person, a machine, and, in a separate room, an interrogator. The object of the game is for the interrogator to figure out which answers come from the person and which come from the machine. At the beginning of the game, the interrogator is given two labels, X and Y, but doesn’t know which one refers to the computer, and is only allowed to ask questions like “Will X please tell me whether X plays chess?” At the end of the game, the interrogator must decide who was X and who was Y. The job of the other person is to help the interrogator identify the machine; the job of the machine is to trick the interrogator into believing that it is actually the other person.
About the game, Turing wrote: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” But Turing was a scientist, and he knew that his theory could not be proven, at least not within his lifetime.
[Photo caption: Alan Turing OBE FRS, an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist.]
We are already living in the age of artificial intelligence, and yet we continue to measure an AI’s ability based on either deception (can a computer fool a human into believing it’s human?) or replication (can a computer act exactly as we would?). We ought to acknowledge AI for what it has always been: intelligence gained and expressed in ways that do not necessarily resemble our own human experience.
Rather than judging an AI on whether or not it can “think” exactly like we do, we should adopt a new test that measures an AI’s meaningful contributions: cognitive and behavioral tasks, different from ours but powerful, that we cannot perform on our own. Artificial general intelligence would be achieved when a system makes general contributions that are equal to or better than a human’s.
A meaningful contribution test would be passed when an AI can sit in on a meeting and make a valuable contribution, unsolicited, before the meeting concludes. Making a valuable contribution in a group is something that most people on Earth have, at some point, had to do themselves: at work, in a religious setting, at the neighborhood pub with friends, or in a high school history class. Simply interjecting with a factoid or answering a question doesn’t add value to a conversation. Making a valuable contribution involves many different skills:
- Making educated guesses: This is also called abductive reasoning, and it’s how most of us get through the day. We use the best information available, make and test hypotheses, and come up with an answer even if there’s no clear explanation.
- Correctly extracting meaning from words, pauses, and ambient noise: Just because someone says they’re happy to take on a new project doesn’t mean it literally makes them happy. Other cues, like their body language, might tell us that they’re fairly unhappy with the request but, for whatever reason, they’re not able to say no.
- Using experience, knowledge, and historical context for understanding: When people interact, they bring with them a nuanced worldview, a unique set of personal experiences, and typically their own expectations. Sometimes logic and facts won’t win an argument. Other times, they’re all that matter.
- Reading the room: There’s the explicit interaction and the tacit one happening beneath the surface. Subtle cues help us figure out when there’s an elephant demanding our attention.
The Contributing Team Member Test would be passed when an AI pushes back on a small but growing consensus, tactfully argues against prevailing ideas, and recruits other members of the group to support an alternative viewpoint.
Adapted from the new book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb. Copyright © by Amy Webb. Published by arrangement with PublicAffairs, an imprint of Hachette Book Group.