What Will Protect People From Hate Crimes in the Metaverse?
Jan 2022


On a day like any other on the Internet, a few women woke up to the news that they were on sale. These were Muslim women; more to the point, they were outspoken Muslim women: activists, journalists, lawyers. It happened again six months later. This time the culprits were caught, but that was that. There was no larger conversation about systemic harm and its ubiquity in the virtual world.

The big question of our times is thus: what does it mean to be a person online?

Harassing or harming someone online appears easier precisely because no real body is involved. Seeking justice and accountability, however, is difficult for the same reason: a "real" body was not involved. The second question, therefore, is this: does being a person with rights necessarily entail having a body, or being embodied, at all times?

Rohitha Naraharisetty of The Swaddle spoke with Urvashi Aneja to understand her perspective:

“We barely have adequate protections or frameworks to protect people from harm, particularly marginalized groups within digital spaces as we know it… you then have something like the metaverse where the problem will become deeply amplified,” says Urvashi Aneja, founder of the Digital Futures Collective. Aneja explains that the problem is multifold. First, jurisdiction issues will complicate any frameworks of protection from hate speech, crimes, or other forms of violence in VR.

The second is accountability. “It becomes very hard to kind of distinguish between who is human, who is bot, what is real, what is synthetic media,” Aneja adds.

Facebook has already furthered community harm in ways that the law has not yet caught up with. And yet, the emphasis is on censorship and content moderation – where liability is thin, and the problem can “go away” with just a click of a button.

Except, it’s not quite so simple. Accountability gets harder to define in any virtual setting. If so much harm can be done with mere images, as revenge porn and the "Sulli Deals" incident have shown, how much more can be done to embodied, 3-dimensional avatars? There is an "update" in how embodied we can be online, with no corresponding update in how the law understands this new reality. Will our avatars have human rights? Will avatars that harm others be liable to the same punishments as they would offline? We still do not recognize harm to communities as a unique category of harm on social media. If we did, the "Sulli Deals" incident would have been treated as systematic disenfranchisement rather than individualized harassment.

The problem is that, currently, there is confusion over what bodily integrity means in VR. One of the earliest articulations of this problem dates to 1993, following an incident of virtual sexual assault in LambdaMOO, a text-based online community. Journalist Julian Dibbell, in his essay "A Rape in Cyberspace," notes that the facts over what happened get complicated "… for the simple reason that every set of facts in virtual reality (or VR, as the locals abbreviate it) is shadowed by a second, complicating set: the 'real-life' facts. And while a certain tension invariably buzzes in the gap between the hard, prosaic RL facts and their more fluid, dreamy VR counterparts, the dissonance… is striking."

These are excerpts from the original piece published in The Swaddle.