From post-truth to post-reality: the future of disinformation - Friends of Europe


Imagine one day putting on your smart glasses – which have replaced your mobile phone – and walking to work. Along the way, you check your email, browse news updates and see advertisements on the walls of buildings you pass. Across the boulevard, you see an altercation of some kind as people step in to protect a woman who has been assaulted by what appear to be recently arrived migrants. You hear passers-by whispering things like “that is the third attack this week” and “enough is enough.”

The year is 2029 and the developed world has moved even more of human activity into the digital realm to drastically cut carbon emissions and enable a new era of seemingly limitless possibilities.

Next, you arrive at the office, where you switch your eyewear into virtual mode to join a product design meeting with colleagues from Nairobi, Mumbai, Santa Fe and Berlin. You see and interact with highly detailed three-dimensional (3D) avatars of one another and discuss the 3D product schematic in front of you. Perhaps during a work break, using that same eyewear, you revisit the alpine hut in the Dolomites that you enjoyed so much last summer.

All of these things are possible with the advent of new technologies, which are changing our world from a two-dimensional interactive relationship with the internet to a 3D immersive one. But this transition also brings with it new risks to society and democracy.

In the scenario presented above, the alleged migrant assault on the woman and the whispered remarks about it were fabricated disinformation, paid for by an anti-immigrant political party using the same techniques available for product placement advertising. It was also later learned that one of the avatars in your product design meeting was not actually your colleague in Berlin but someone who had hacked into and was using their account. Your entire team was interacting with a live 3D deepfake that could have been operated by your company’s fiercest competitor.

These are just a few examples of the risks we face in the transition to a new world made by the convergence of immersive technologies and artificial intelligence. In the words of XRSI Founder Kavya Pearlman, these developments are moving our world “from post-truth to post-reality”.

The technologies that will enable this future include augmented reality (AR), virtual reality (VR) and artificial intelligence (AI).

AR is the integration of digital information with a person’s environment in real time, enabled by devices such as smart glasses. AR users experience a real-world environment with increasingly realistic generated information layered on top of it. VR is a fully immersive 3D artificial digital environment in which a person can have experiences, such as the business meeting or the visit to the mountain hut, from their own desk at home. Future developments include the ability to connect directly to the human brain. Collectively, AR and VR are referred to as extended reality (XR). These are the technologies used to access the metaverse – the immersive 3D internet where much of future commerce, education and social interaction is projected to take place.

During the current era of interactive digital media, the average person spends 11 hours per day on a mobile phone or computer. In the immersive era, this daily average is expected to climb to 14 to 16 hours spent on digital devices.

Immersive XR is a more potent means of delivering disinformation not only because of the increased number of hours spent on XR devices, but also because content is experienced in the first person rather than observed on a flat screen. For this reason, immersion ‘feels’ real, making it more difficult to distinguish reality from the extended (alternate) reality.

Recent Chinese studies offer further evidence of this potency: immersive XR experiences were found to be 44% more addictive than interactive media, and advertising in the XR metaverse 2.7 times more effective than two-dimensional video.

Considering the societal splits and online radicalisation that we’ve witnessed in recent years with interactive media in addition to these scientific findings, the risks associated with XR become more apparent.

Data collection on overdrive. XR technologies require high volumes of customer data to create more authentic experiences, but this vastly larger volume of data can also be used for more precise microtargeting. For example, Meta’s new Quest Pro VR headset includes five inward-facing cameras that collect large volumes of data on the user, from which personality traits, mental health, cognitive processes, age, gender, drug consumption, cultural background, physical health, mental workload and geographical origin can be inferred.

All of this is in addition to the kinds of microtargeting data that are already being collected on people in the current interactive internet. Data on body movements, which can indicate medical conditions such as autism, ADHD, PTSD or dementia, can also be collected in XR.

In 2016, Cambridge Analytica claimed to hold 5,000 data points on every American voter, data that supported the social media campaign to help Donald Trump win that year’s election. Cambridge Analytica used similar data sets to support the ‘leave’ campaign in the 2016 Brexit vote, as well as to influence elections in India, Australia, Malta, Mexico and Kenya.

By comparison, the generation of XR headsets available in 2018 could already collect nearly two million data points on a person in just 20 minutes. Now imagine what kinds of precisely targeted disinformation and information manipulation can be delivered within immersive environments that ‘feel’ real, and where discerning reality from unreality will become even more challenging as technologies continue to advance.

This would all be less concerning if tech companies were under more precise privacy regulation or had more respect for their customers’ privacy, but numerous studies of these new devices and the companies’ privacy policies show otherwise.

Studies by the Extended Reality Safety Initiative (XRSI), Stanford University and CommonSense.org all indicate that users are tracked from the moment they put on their XR device and that sensitive data collected on them is shared with third-party advertisers, whose ads are also displayed in XR. Furthermore, current privacy policies do not indicate any stronger protections for children or teens.

This same future will be enabled by artificial intelligence, which is already able to amplify disinformation narratives. AI systems such as GPT-3 can write convincing public comments and letters that appear to be written by humans. Used at scale, this type of capability can flood information spaces and engage in ‘reality jamming’, which targets populations en masse to shape behaviour and foment social division.

Recent advances in AI are making the immersive worlds in XR more realistic each year, blurring the lines between reality and extended reality. Within the XR metaverse, instead of a text robot like GPT-3, we’ll be faced with ‘virtual humans’ who – armed with hyper-precise microtargeting data on us – will serve as everything from sales clerks in virtual stores to agents trying to convince us to believe certain narratives.

In short, humanity is moving into a future in which ‘reality’ may no longer be a single, objectively verifiable perspective, with potentially serious consequences for democratic governance. Preparing for this future requires urgent attention because without truth there is no accountability, and without accountability there is no democracy.

In addition, the level of manipulation that may be possible in the XR metaverse, especially with the dawn of brain-computer interfaces, could represent an unprecedented ability to undermine human free will. Unless citizens and governments understand what is at stake and regulate these technologies now, they could endanger democracy and social cohesion to a degree far beyond what we see today.

How can we be prepared for the future in a way that maximises the benefits and minimises risks? What should the EU address in its Global Metaverse Regulation Initiative, which is set to be launched in 2023?

Firstly, instead of waiting for these new technologies to arrive en masse, we must proactively regulate them and build safeguards into the security of XR and AI systems to protect users from microtargeting and manipulation.

Next, we must continuously expand and update the EU’s General Data Protection Regulation (GDPR), Digital Services Act and Digital Markets Act to protect citizens’ privacy and free will within the XR metaverse.

To better understand the challenges and opportunities this future will bring and to inform future laws and regulations, the EU should set aside funding for studies on the impact of XR on individuals, groups and societies.

Furthermore, we should plan for the active defence of citizens and societies inside the XR metaverse. It is laudable that Europol is already active on this front. Citizen protection authorities throughout Europe should follow their example and start building a depth of experience and knowledge of the XR metaverse so they can better protect citizens both now and into the future.

In addition to building cyber-citizenship skills, we’ll need new educational efforts to prepare citizens of all ages to navigate the new dynamics of the immersive digital age. These efforts should also draw on existing programmes that foster social cohesion, with the added mandate of helping citizens stay grounded in reality.

We must continue fact-checking and debunking efforts but also address the underlying drivers of disinformation, including the parallel mental health pandemic and economic anxiety, both of which enable disinformation narratives to take root within significant parts of our societies. Without meaningful efforts to address the global spike in anxiety and depression related to the recent coronavirus pandemic, our efforts to address disinformation will remain incomplete and ineffective.

Finally, governments’ ability to deliver on promises to close socioeconomic gaps and reduce economic anxiety remains among the most effective tools to thwart disinformation in the current era and no doubt will continue to apply in the future as well.

The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.