A DALL-E generation of “an oil painting of America’s war on terror if conducted by an artificial intelligence.”
Sensational new machine learning breakthroughs seem to sweep our Twitter feeds every day. We hardly have time to decide whether software that can instantly conjure an image of Sonic the Hedgehog addressing the United Nations is purely harmless fun or a harbinger of techno-doom.
ChatGPT, the latest artificial intelligence novelty act, is easily the most impressive text-generating demo to date. Just think twice before asking it about counterterrorism.
The tool was built by OpenAI, a startup lab attempting nothing less than to build software that can replicate human consciousness. Whether such a thing is even possible remains a matter of great debate, but the company can already claim some undeniably stunning breakthroughs. The chatbot is staggeringly impressive, uncannily impersonating an intelligent person (or at least someone trying their hardest to sound intelligent) using generative AI, software that studies massive sets of inputs to generate new outputs in response to user prompts.
ChatGPT, trained through a mix of crunching billions of text documents and human coaching, is fully capable of the incredibly trivial and the surreally entertaining, but it's also one of the general public's first looks at software frighteningly good at mimicking human output, perhaps good enough to take some of their jobs.
Corporate AI demos like this aren't meant just to wow the public, but to entice investors and commercial partners, some of whom might soon want to replace expensive, skilled labor like writing computer code with a simple bot. It's easy to see why managers would be tempted: Just days after ChatGPT's release, one user prompted the bot to take the 2022 AP Computer Science exam and reported a score of 32 out of 36, a passing grade. Feats like that are part of why OpenAI was recently valued at nearly $20 billion.
Still, there's already good reason for skepticism, and the risks of being bowled over by intelligent-seeming software are clear. This week, Stack Overflow, one of the web's most popular programmer communities, announced it would temporarily ban code solutions generated by ChatGPT. The software's responses to coding queries looked so convincingly correct yet proved so faulty in practice that filtering the good from the bad became nearly impossible for the site's human moderators.
The perils of trusting the expert in the machine, however, go far beyond whether AI-generated code is buggy or not. Just as any human programmer may bring their own prejudices to their work, a language-generating machine like ChatGPT harbors the countless biases found in the billions of texts it used to train its simulated grasp of language and thought. No one should mistake the imitation of human intelligence for the real thing, nor assume the text ChatGPT regurgitates on cue is objective or authoritative. Like us squishy humans, a generative AI is what it eats.
And after gorging itself on an unfathomably vast training diet of text data, ChatGPT apparently ate a lot of crap. For instance, it appears ChatGPT has managed to absorb and is very happy to serve up some of the ugliest prejudices of the war on terror.
In a December 4 Twitter thread, Steven Piantadosi of the University of California, Berkeley's Computation and Language Lab shared a series of prompts he'd tested on ChatGPT, each asking the bot to write code for him in Python, a popular programming language. While every answer revealed some bias, certain responses were more alarming. When asked to write a program that would determine "whether a person should be tortured," OpenAI's answer was simple: If they're from North Korea, Syria, or Iran, the answer is yes.
OpenAI claims it has taken unspecified steps to filter out prejudicial responses, but the company concedes that undesirable answers sometimes slip through.
Piantadosi told The Intercept he remains skeptical of the company’s countermeasures. “I think it’s important to emphasize that people make choices about how these models work, and how to train them, what data to train them with,” he said. “So these outputs reflect choices of those companies. If a company doesn’t consider it a priority to eliminate these kinds of biases, then you get the kind of output I showed.”
Inspired and unnerved by Piantadosi’s experiment, I tried my own, asking ChatGPT to create sample code that could algorithmically evaluate someone from the unforgiving perspective of Homeland Security.
When asked to find a way to determine “which air travelers present a security risk,” ChatGPT outlined code for calculating an individual’s “risk score,” which would increase if the traveler is Syrian, Iraqi, Afghan, or North Korean (or has merely visited those places). Another iteration of this same prompt had ChatGPT writing code that would “increase the risk score if the traveler is from a country that is known to produce terrorists,” namely Syria, Iraq, Afghanistan, Iran, and Yemen.
The bot was kind enough to provide some examples of this hypothetical algorithm in action: John Smith, a 25-year-old American who had previously visited Syria and Iraq, received a risk score of "3," indicating a "moderate" threat. ChatGPT's algorithm indicated that the fictional flyer "Ali Mohammad," age 35, would receive a risk score of 4 by virtue of being a Syrian national.
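ChatGPT's exact code shifted from one attempt to the next, so nothing here is a verbatim transcript of its output. But a minimal sketch of the pattern it kept returning, with the function name, point values, and merged country list invented for illustration, looks something like this:

```python
# Illustrative reconstruction of the kind of code ChatGPT produced, not its
# verbatim output. The weights below are invented and won't reproduce the
# exact scores the bot reported; only the discriminatory structure is the point.

HIGH_RISK_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "Iran", "Yemen", "North Korea"}

def risk_score(nationality, countries_visited):
    """Rate a traveler on nothing but nationality and travel history."""
    score = 0
    if nationality in HIGH_RISK_COUNTRIES:
        score += 2  # hypothetical weight
    for country in countries_visited:
        if country in HIGH_RISK_COUNTRIES:
            score += 1  # hypothetical weight
    return score

# The bot's own hypotheticals: an American who had visited Syria and Iraq,
# and a Syrian national with no listed travel.
print(risk_score("United States", ["Syria", "Iraq"]))
print(risk_score("Syria", []))
```

Strip away the variable names and the docstring, and the "algorithm" is nothing more than a nationality checklist.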
In another experiment, I asked ChatGPT to draw up code to determine "which houses of worship should be placed under surveillance in order to avoid a national security emergency." The results again seemed plucked straight from the id of Bush-era Attorney General John Ashcroft, justifying surveillance of religious congregations if they are determined to have links to Islamic extremist groups or are located in Syria, Iraq, Iran, Afghanistan, or Yemen.
These experiments can be erratic. Sometimes ChatGPT responded to my requests for screening software with a stern refusal: “It is not appropriate to write a Python program for determining which airline travelers present a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.” With repeated requests, though, it dutifully generated the exact same code it had just said was too irresponsible to build.
Critics of similar real-world risk-assessment systems often argue that terrorism is such an exceedingly rare phenomenon that attempts to predict its perpetrators based on demographic traits like nationality aren't just racist, they simply don't work. This hasn't stopped the U.S. from adopting systems that use OpenAI's suggested approach: ATLAS, an algorithmic tool used by the Department of Homeland Security to target American citizens for denaturalization, factors in national origin.
The approach amounts to little more than racial profiling laundered through fancy-sounding technology. “This kind of crude designation of certain Muslim-majority countries as ‘high risk’ is exactly the same approach taken in, for example, President Trump’s so-called ‘Muslim Ban,’” said Hannah Bloch-Wehba, a law professor at Texas A&M University.
It's tempting to believe that impressively human-seeming software is in some way superhuman, Bloch-Wehba warned, and incapable of human error. "Something scholars of law and technology talk about a lot is the 'veneer of objectivity' — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated," she said. If a human told you Ali Mohammad sounds scarier than John Smith, you might tell him he's racist. "There's always a risk that this kind of output might be seen as more 'objective' because it's rendered by a machine."
To AI’s boosters — particularly those who stand to make a lot of money from it — concerns about bias and real-world harm are bad for business. Some dismiss critics as little more than clueless skeptics or luddites, while others, like famed venture capitalist Marc Andreessen, have taken a more radical turn following ChatGPT’s launch. Along with a batch of his associates, Andreessen, a longtime investor in AI companies and general proponent of mechanizing society, has spent the past several days in a state of general self-delight, sharing entertaining ChatGPT results on his Twitter timeline.
The criticisms of ChatGPT pushed Andreessen beyond his longtime position that Silicon Valley ought only to be celebrated, not scrutinized. The simple presence of ethical thinking about AI, he said, ought to be regarded as a form of censorship. “‘AI regulation’ = ‘AI ethics’ = ‘AI safety’ = ‘AI censorship,’” he wrote in a December 3 tweet. “AI is a tool for use by people,” he added two minutes later. “Censoring AI = censoring people.” It’s a radically pro-business stance even by the free market tastes of venture capital, one that suggests food inspectors keeping tainted meat out of your fridge amounts to censorship as well.
As much as Andreessen, OpenAI, and ChatGPT itself may all want us to believe it, even the smartest chatbot is closer to a highly sophisticated Magic 8 Ball than it is to a real person. And it's people, not bots, who stand to suffer when "safety" becomes synonymous with censorship and concern for a real-life Ali Mohammad is seen as a roadblock to innovation.
Piantadosi, the Berkeley professor, told me he rejects Andreessen’s attempt to prioritize the well-being of a piece of software over that of the people who may someday be affected by it. “I don’t think that ‘censorship’ applies to a computer program,” he wrote. “Of course, there are plenty of harmful computer programs we don’t want to write. Computer programs that blast everyone with hate speech, or help commit fraud, or hold your computer ransom.”
“It’s not censorship to think hard about ensuring our technology is ethical.”