
The challenges of protecting data and rights in the metaverse
Extended reality technologies, including augmented and virtual reality, are the foundations of the so-called metaverse. While these technologies are still in the early days of development and use, many tech evangelists and investors claim that the metaverse will be the future of the web and as transformative to our everyday lives as the smartphone.
With Big Tech investing heavily in these technologies, the pace of development and deployment is likely to increase quickly. Meta’s Oculus Quest virtual reality headset is already a top-selling Christmas gift in the United States; big brands such as Nike are already buying real estate on virtual reality platforms; and some governments are even planning official events in the metaverse.
We need to pay far closer attention to the narrow interests driving these developments, and to their likely societal impacts. Otherwise, we risk leaving our future in the hands of a small group of technology companies and investors, mostly from Silicon Valley, with grave consequences for human rights, equity, and political accountability.
Protecting data in virtual reality
Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation generates nearly 2 million unique recordings of body language.
Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost meaningless. Research also shows that users could be correctly identified from just five minutes of VR data, with all personally identifiable information stripped, using a machine learning algorithm with 95% accuracy. This type of data isn’t covered by most biometrics laws.
But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography”: the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behaviour that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or changes in facial expression.
Imagine a user who sees a picture of a shiny red car — in a VR experience, their emotional response can be analysed and tracked, including how it changes over time. Pupil dilation could convey the excitement the user feels at seeing the car, while galvanic skin responses can indicate how intensely a person feels a particular emotion.
Access to such data can lead to even more opaque and intrusive ways of profiling, categorising, and targeting individuals and communities. There’s already ample evidence of the harmful impacts of such algorithmic systems — from a loss of human agency; to workplace discrimination; to voter manipulation — with disproportionate impacts on the already vulnerable and marginalised.
Moderating content in the virtual world
Managing issues such as online harassment and content moderation will become even more complex in virtual worlds. Studies indicate that violence in virtual reality environments is far more traumatic than on traditional social media platforms. A survey of users of popular VR headsets showed that 49% of female and 36% of male respondents reported experiencing some form of sexual harassment.
The challenge of developing adequate regulatory mechanisms is even greater for many low- and middle-income countries because of limited institutional capacity and because data has become central to visions of socioeconomic development and the exercise of state power.
Big Tech and its investors are betting heavily on the metaverse because they want people to spend even more time online so that they can collect more data, which can ultimately be sold to advertisers. This model of “surveillance capitalism” will be unimaginably deepened by biometric data from virtual reality worlds, adding to the massive amounts of user data already extracted and harnessed by tech companies.
Facebook, for example, is looking for ways to revive its advertising business — its popularity is declining, particularly among young people, and it lost a significant revenue stream when Apple changed its privacy policy to allow users to opt out of being tracked. With the metaverse, it can appeal to younger users, and biological data from the Oculus Quest headset can be combined with other data collected on its platforms.
Fueling Big Tech’s power and state surveillance
Big Tech already has a huge influence on our lives — it can control what information we access, the social connections we make, the things we buy, and even how we vote. These companies have begun to fill essential social and civic functions in our societies — from enabling access to information to creating digital marketplaces to helping manage public health crises — but with little to no political accountability.
Despite mounting evidence of online and offline harms, Big Tech continues to prioritise profit margins instead of making meaningful changes to how they operate. The regulatory challenge is even greater in low- and middle-income countries because these companies are filling gaps in state capacity and have become an essential tool for driving socioeconomic development.
State surveillance is also likely to increase as governments gain access to such granular data about our emotional states, bodies, locations, and private spaces. Such pervasive and invisible surveillance can have a chilling effect on democracy, making individuals refrain from certain types of speech or online activity and threatening a wide range of rights, including the rights to freedom of expression, association, and assembly.
Automated facial recognition and predictive analytics are already being used by law enforcement authorities in many parts of the world, often in the absence of data protection laws. There is already ample evidence of state actors manipulating citizens and censoring speech in digital spaces for their political gain, and of Big Tech platforms dodging political pressure to retain their market presence.
It has become common to hear technology companies and investors — and even states — argue that regulating too early or regulating too much can stifle innovation. There are certainly many good use cases for extended reality technologies — from specialised medical training; to industrial prototyping; to providing an alternate space for expression and community.
But beyond such specific cases, the hype around extended reality is an expression of what interdisciplinary social scientist Kean Birch has dubbed “rentiership in technoscientific capitalism” — meaning the focus of innovation is to convert human life and experience into an “asset” and develop ways to extract value from that asset.
We need to find ways to regulate and limit the personal data-driven economy, not amplify it. The conversation on how to do so needs to start now, before extended reality technologies become normalised — including for younger users — with lasting impacts on their mental development and well-being.