The introduction of machine learning, and potentially Artificial Intelligence (AI), will vastly enhance the ability to reach mass audiences automatically with tailored, plausible content. Consequently, it will make malicious actors even more powerful.
Information warfare, the use of targeted misinformation campaigns designed to confuse and obfuscate for political gain, is nothing new. Disinformation and so-called fake news have been around for generations.
Prior to the invasion of Ukraine, US officials suggested that Russia was planning to produce a fake video of a Ukrainian attack as a pretext for invading. They claimed to have evidence of a Russian plan to make a “very graphic” fake video of a Ukrainian attack on the Russian-speaking secessionist Donetsk region of Ukraine.
The alleged plot would have involved corpses, footage of blown-up buildings, fake Ukrainian military hardware, Turkish-made drones and actors playing the part of Russian-speaking mourners.
The use of misleading “deepfakes” has risen dramatically across the globe, and, as with so much emerging technology, they will inevitably become part of armed conflict. While a perfidious deepfake such as the alleged false-flag video would almost certainly violate the law of armed conflict, deepfakes that amount to ruses of war would not. Determining which uses of deepfakes in armed conflict are lawful also requires considering their impact on the civilian population.
Although this false flag operation never materialised, deepfake technology is increasingly recognised as a potentially effective tool in armed conflict.
Recent years have seen deepfake technology become an effective weapon in times of war, with the alleged Russian false-flag video only the most prominent example.
A deepfake is a computer simulation of reality produced by AI, capable of forging a person's identity in video. As deepfake videos have been used for political purposes, concerns have grown that the technology could be harmful and affect democratic processes.
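For readers curious about the mechanics, the classic face-swap deepfake design trains a single shared encoder alongside one decoder per identity; the swap consists of encoding one person's frame and decoding it with the other person's decoder, so the output carries the first person's pose and expression with the second person's face. The sketch below, written in PyTorch with purely illustrative layer sizes and an untrained network, shows the idea in miniature; it is a conceptual toy, not a production system.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the classic deepfake architecture: a shared encoder
# learns a common facial representation, and a separate decoder is trained
# for each identity. Layer sizes here are arbitrary placeholders.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on frames of person A
decoder_b = Decoder()  # would be trained only on frames of person B

# Training (not shown) minimises each identity's reconstruction loss through
# the *shared* encoder. At inference, routing A's latent code through B's
# decoder renders A's pose and expression with B's face.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for one video frame of A
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the encoder is shared across identities, it is forced to capture identity-independent structure such as pose and lighting, which is what makes the decoder swap produce a plausible forgery once both decoders are well trained.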
Deepfake technology has been described as a kind of information warfare when used to deceive the public or disrupt international relations.
This makes the technology a powerful tool for manipulating information: faked images and audio can be produced and shared rapidly via social media, fuelling disinformation. In the absence of policies protecting against deepfakes used to disrupt international relations, the law leaves a loophole that can be exploited.
According to the Lieber Institute at the US West Point military academy, deepfake campaigns are hard to counter because of various psychological factors, and blocking or removing deepfake content may actually make things worse by attracting attention to it. Faced with intelligence on Russia's invasion plans for Ukraine, US President Biden instead chose to strategically share unclassified intelligence about those plans with the public, in order to counter any misinformation before it spread.
Deepfake technology will likely become too effective in armed conflict to resist. While few uses of deepfakes would be prohibited by the law of armed conflict, any perfidious use would be unlawful, as would uses intended to terrorise the civilian population or those violating the constant care obligation.
Like the 2016 American Presidential Election, the 2017 French Presidential Election was the target of a Russian disinformation campaign that included the selective leaking of then-candidate Emmanuel Macron's emails. Macron still won the election, but a far more sinister future of information warfare is not far away.