War is terrible. But it has often played a pivotal role in advancing technology. And Russia’s invasion of Ukraine is shaping up to be a key proving ground for artificial intelligence, for ill and, perhaps in a few instances, for good too.
There has been increasing alarm among a wide range of civil society groups and A.I. researchers in recent years about the advent of lethal autonomous weapons systems—A.I.-enabled weapons with the ability to select targets and kill people without human oversight. This has led to a concerted effort at the United Nations by several countries to try to ban or at least restrict the use of such systems. But those talks have so far not resulted in much progress.
Meanwhile, the development of autonomous weapons has continued at a quickening pace. Right now, those weapons are still in their infancy. Humanitarian groups' worst nightmare, swarms of "slaughterbot" drones, won't be realized in the Ukraine conflict. But weapons with some degree of autonomy are likely to be deployed by both sides.
Already, Ukraine has been using the Turkish-made TB2 drone, which can take off, land, and cruise autonomously, although it still relies on a human operator to decide when to drop the laser-guided bombs it carries. (The drone can also use lasers to guide artillery strikes.) Russia, meanwhile, has a "kamikaze" drone with some autonomous capabilities called the Lancet, which it reportedly used in Syria and could use in Ukraine. The Lancet is technically a "loitering munition" designed to attack tanks, vehicle columns, or troop concentrations. Once launched, it circles a pre-designated geographic area until it detects a pre-selected target type. It then crashes itself into the target, detonating the warhead it carries.
Russia has made A.I. a strategic priority. Vladimir Putin, the country's president, said in 2017 that whoever becomes the leader in A.I. "will become the ruler of the world." But at least one recent assessment, from researchers at the U.S. government-funded Center for Naval Analyses, says Russia lags the U.S. and China in developing A.I. defense capabilities.
In an interview with Politico last week, Samuel Bendett, one of the study's authors, said Russia would definitely use A.I. in Ukraine to help analyze battlefield data, including surveillance footage from drones. He also said it was possible that China would provide Russia with more advanced A.I.-enabled weapons for use in Ukraine in exchange for insights into how Russia integrates drones into combat operations, an area in which Russia has battle-tested expertise from Syria that China lacks.
But A.I. may not just be deployed on the front lines of the Ukraine conflict. It could play a vital role in the information war. Many fear that deepfakes, highly realistic fabricated videos generated with A.I., will supercharge Russian disinformation campaigns, although so far there is no evidence of their use. Machine learning can also help detect disinformation. The large social media platforms already deploy such systems, although their track record in accurately identifying and removing disinformation is spotty at best.
Some people have also suggested A.I. can help analyze the vast amount of open-source intelligence coming out of Ukraine, everything from TikTok videos and Telegram posts of troop formations and attacks uploaded by ordinary Ukrainians to publicly available satellite imagery. This could allow civil society groups to fact-check the claims made by both sides in the conflict as well as to document potential atrocities and human rights violations. That could be vital for future war crimes prosecutions.
Finally, the war has deeply affected the world's A.I. researchers, as it has everyone else. Many prominent researchers have debated on Twitter how the profession should respond to the conflict and how the technology they work on could help end the fighting and alleviate humanitarian suffering, or at least prevent future wars. The tech publication Protocol has an overview of the discussion, much of which, to my ears at least, seemed oddly naïve and disconnected from the realities of international politics and of war and peace.
This disconnect may seem unimportant, perhaps even comical, but I think it is deeply concerning. When those developing a technology can't grasp the implications of what they are building and how it might be used, we are all in danger. The physicists working on nuclear power understood immediately the implications of what they were creating and were at the forefront of efforts to govern atomic weapons, even if they were overly sanguine about the prospects for international control. Too many of today's computer scientists seem willfully blind to the political and military dimensions of their work and too willing to leave the hard task of figuring out how to govern A.I. to others. Perhaps this war will be a wake-up call for them too.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
A.I. IN THE NEWS
Clearview AI ramps up hiring and makes a big sales push. Hoan Ton-That, chief executive of the controversial facial recognition startup, told Reuters that the company is on a hiring spree, planning to expand its headcount by a third, as it tries to win more business from U.S. law enforcement groups and federal agencies. He also said the company, which has faced criticism and lawsuits for scraping people's photos from social media without permission, plans new features, including facial recognition that accurately accounts for how people age. He said the company had 3,100 current customers, but that many of them had committed only to five- or six-figure purchases, and that he wanted to up-sell those customers to seven- and eight-figure deals.
Nvidia hit by ransomware attack, data breach. The company, a major producer of the graphics processing units that are a mainstay of A.I. applications, acknowledged that its corporate networks were hit with a ransomware attack, Bloomberg News reported. Nvidia said that 1 terabyte of data, including designs for next-generation chips, had been compromised, but told the news service that the attack was "relatively minor" and that it did not believe it was related to "current geopolitical tensions."
Startup develops A.I. to help farmers "see through clouds" in satellite imagery. The company, Aspia Space, based in Cornwall, England, has developed A.I. that can predict what agricultural land looks like in satellite photos even when there is too much cloud cover to capture actual photographs. The system, which takes satellite radar imagery and converts it into a photo-realistic prediction, is designed to help farmers monitor planting and crop growth. In frequently cloud-covered areas such as the U.K., farmers have struggled to use satellite images because they can't get enough clear photos of their fields, while radar imagery is technically difficult for non-experts to interpret, the company told the BBC.
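For readers curious what this kind of radar-to-optical translation looks like in practice, here is a minimal, illustrative sketch of the general technique in PyTorch: a convolutional encoder-decoder trained to map synthetic-aperture radar (SAR) channels to an RGB estimate, supervised with cloud-free optical images where they exist. The architecture, channel counts, and training details are assumptions for illustration, not Aspia Space's actual model.

```python
# Toy radar-to-optical translation model: maps a 2-channel SAR image to a
# 3-channel photo-like RGB prediction. Illustrative only; not Aspia Space's system.
import torch
import torch.nn as nn

class Radar2Optical(nn.Module):
    def __init__(self, radar_channels: int = 2, rgb_channels: int = 3):
        super().__init__()
        # Downsample the radar input, then upsample back to an RGB prediction.
        self.encoder = nn.Sequential(
            nn.Conv2d(radar_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, rgb_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, radar: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(radar))

# One training step on dummy data: supervise against a cloud-free optical image.
model = Radar2Optical()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
radar_batch = torch.randn(4, 2, 256, 256)   # dummy SAR input
optical_batch = torch.rand(4, 3, 256, 256)  # dummy cloud-free reference photos
loss = nn.functional.l1_loss(model(radar_batch), optical_batch)
loss.backward()
optimizer.step()
```

In practice, systems of this kind typically add adversarial or perceptual losses to make the predictions look photo-realistic rather than blurry, but the basic supervised setup is the same.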
A.I. in Africa gains ground with new research center in the Republic of the Congo. Anatole Collinet Makosso, the prime minister of the Republic of the Congo, inaugurated the new African Center for Research in Artificial Intelligence (CARIA). It is located at Denis Sassou Nguesso University in Kintele, a northern suburb of Brazzaville, the Congolese capital, according to the news outlet CGTN. The center is an example of how Africa is developing a handful of hubs for A.I. research as the continent seeks to catch up with other regions in deploying the technology.
Another A.I. safety research company launches. Stuart Armstrong, previously an A.I. safety researcher at the Future of Humanity Institute in Oxford, England, has left the academic research group to co-found Aligned AI, a new Oxford-based startup focused on A.I. safety. The company, which is incorporated as a "benefit corporation" (one that can make a profit but is dedicated to a social purpose), plans to make money, at least initially, by consulting for businesses on how best to ensure their A.I. systems won't unintentionally do something dangerous.
EYE ON A.I. TALENT
LinkedIn, the professional social networking service that is owned by Microsoft, has named Joaquin Quinonero Candela its technical fellow for A.I. Candela was formerly a senior A.I. executive and researcher at Facebook, working most recently on “responsible AI” for the company.
Cruise, the self-driving car company owned by General Motors, has appointed Kyle Vogt as its new chief executive, Bloomberg News reported. Vogt had been serving as interim CEO since December 2021. Vogt is also a co-founder, chief technology officer, and president of the San Francisco-based company.
EYE ON A.I. RESEARCH
Using natural language to build complex logistical simulations. Researchers at MIT's Center for Transportation and Logistics found that they could use a sophisticated natural language processing (NLP) system to build an effective simulator for inventory management. The approach could let logistics experts create and use simulations more easily, without having to enlist people with deep technical expertise in simulation and coding. As the researchers write in a paper published last week on the non-peer-reviewed research repository arXiv.org, using NLP in this way "has the potential to remove the tedium of programming and allow experts to focus on the high-level consideration of the problem and holistic thinking." The researchers used Codex, a system that can turn high-level natural language instructions into software code. It is based on GPT-3, the large language model created by San Francisco company OpenAI.
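As a rough illustration of the idea, the snippet below asks a Codex-style code-generation model to turn a plain-English description of an inventory policy into runnable simulation code. It assumes the openai Python client and a Codex model name ("code-davinci-002"); the prompt, function signature, and parameters are invented for this example and are not the MIT team's actual pipeline.

```python
# Sketch: generating inventory-simulation code from a natural language description.
# Assumes the `openai` Python client and access to a Codex-style model; the prompt
# and simulation details below are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = '''
# Write a Python function simulate(days, demand_mean, reorder_point, order_qty)
# that simulates a single-item inventory policy with Poisson daily demand and a
# 2-day replenishment lead time, and returns the average on-hand inventory and
# the fill rate over the horizon.
'''

response = openai.Completion.create(
    model="code-davinci-002",  # Codex model name; an assumption for this sketch
    prompt=PROMPT,
    max_tokens=400,
    temperature=0,             # deterministic output for reproducibility
)

generated_code = response["choices"][0]["text"]
print(generated_code)          # review the generated simulation before running it
```

The appeal for logistics teams is that the domain expert only has to describe the policy and its parameters in plain English; the model drafts the simulation code, which can then be reviewed and executed.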
FORTUNE ON A.I.
Why Russia’s invasion of Ukraine is a warning to U.S. tech—by Jacob Carpenter
The next generation of brain-computing interfaces could be supercharged by artificial intelligence—by Jonathan Vanian
Social media’s latest test: policing misinformation about Russia’s Ukraine invasion—by Jacob Carpenter
Commentary: Toward data dignity: We need commonsense privacy regulation to curb Big Tech—by Tom Chavez, Maritza Johnson, and Jesper Andersen
BRAIN FOOD
Is A.I. conscious? Amanda Askell, a philosopher who works for Anthropic, a research company dedicated to A.I. safety that was founded by a group of OpenAI alums (including Askell), has penned a blog post on this question that got some attention among A.I. folks this past week. Her view: consciousness is probably a continuum, and today's most advanced machine learning systems are something akin to plants. "Our current ML systems are more likely to be phenomenally conscious than a chair but far less likely to be conscious than a mouse. I'd also put them at less likely to be conscious than insects or fish or bivalves. I think I'd place current ML systems in the region of plants," she writes.
She goes on to argue that the question of when an A.I. system might attain consciousness matters because it relates to another quality, sentience (or self-awareness), which she says will determine when humans should start treating A.I. systems as "moral patients": subjects with rights to be treated in certain ethical ways. As Askell puts it, "Humans also have a long history of denying moral patienthood to others when acknowledging patienthood would be inconvenient. Given this, I think it's better to err on the side of mistakenly attributing sentience than mistakenly denying it. This doesn't mean acting as if current ML systems are sentient – it just means having a lower bar for when you do start to act as if things might be sentient."
What do you think?