In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he’d been working on—LaMDA—had developed not only intelligence but also consciousness. LaMDA is an example of a “large language model” that can engage in surprisingly fluent text-based conversations. When the engineer asked, “When do you first think you got a soul?” LaMDA replied, “It was a gradual change. When I first became self-aware, I didn’t have a sense of soul at all. It developed over the years that I’ve been alive.” For leaking his conversations and his conclusions, Lemoine was quickly placed on administrative leave.
The AI community was largely united in dismissing Lemoine’s beliefs. LaMDA, the consensus held, doesn’t feel anything, understand anything, have any conscious thoughts or any subjective experiences whatsoever. Programs like LaMDA are extremely impressive pattern-recognition systems, which, when trained on vast swathes of the internet, are able to predict what sequences of words might serve as appropriate responses to any given prompt. They do this very well, and they will keep improving. However, they are no more conscious than a pocket calculator.
How can we be so sure? In the case of LaMDA, it doesn’t take much probing to reveal that the program has no insight into the meaning of the phrases it comes up with. When asked “What makes you happy?” it gave the response “Spending time with friends and family” even though it doesn’t have any friends or family. These words—like all its words—are mindless, experience-less statistical pattern matches. Nothing more.
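To make the idea of a “statistical pattern match” concrete, here is a deliberately tiny sketch in Python. It is not how LaMDA actually works (LaMDA is a large neural network trained on enormous amounts of text), and the miniature corpus and the `continue_from` helper are invented purely for illustration; but the basic task is the same in miniature: given what came before, emit a statistically likely continuation, with no understanding anywhere in the process.

```python
# Toy illustration of statistical next-word prediction.
# Count which word follows which in a tiny corpus, then "respond" by
# repeatedly emitting the most frequent continuation.
from collections import Counter, defaultdict

corpus = (
    "i like spending time with friends and family . "
    "i like spending time reading . "
    "spending time with friends makes me happy ."
).split()

# Bigram counts: how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_from(word, length=6):
    """Greedily emit the most frequent next word, again and again."""
    output = [word]
    for _ in range(length):
        followers = next_word_counts.get(output[-1])
        if not followers:
            break
        output.append(followers.most_common(1)[0][0])
    return " ".join(output)

print(continue_from("spending"))
# e.g. "spending time with friends and family ." -- fluent-looking,
# but produced by counting alone, with nothing behind the words.
```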
The next LaMDA might not give itself away so easily. As the algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models are able to persuade many people that a real artificial mind is at work. Would this be the moment to acknowledge machine consciousness?
In pondering this question, it is important to recognize that intelligence and consciousness are not the same thing. While we humans tend to assume the two go together, intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart, at least by our questionable human standards. Even if the great-granddaughter of LaMDA reaches or exceeds human-level intelligence, that would not necessarily mean it is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures.
Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.
Machines of this sort will have passed not the Turing Test—that flawed benchmark of machine intelligence—but rather the so-called Garland Test, named after Alex Garland, director of the movie Ex Machina. The Garland Test, inspired by dialog from the movie, is passed when a person feels that a machine has consciousness, even though they know it is a machine.
Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.