Meta (formerly Facebook) is aware that virtual reality can be a "toxic environment," especially for women and minorities, and that the metaverse would be an "existential threat" to the company if it turned off "mainstream customers from the medium entirely," according to media reports.
According to the Financial Times, Facebook wants its virtual worlds to achieve "nearly Disney standards of safety," citing an internal communication from Meta CTO Andrew Bosworth.
However, Bosworth admitted that controlling what people say and how they behave is "practically impossible" "at any significant scale."
Later, in a blog post, Bosworth stated that technology that creates new opportunities may also be used to damage people, and that "we must be cognizant of that when we build, iterate, and bring things to market."
"Harassment in digital settings is nothing new, and we've been trying to solve it for years with others in the business. That work is still underway and will very certainly never be completed. It's always changing, yet its significance remains constant. It's a huge undertaking, to say the least," He made a point.
Meta has committed $50 million to studying the practical and ethical challenges of its metaverse. This year, the social network expects to spend at least $10 billion on metaverse-related initiatives, and it is restructuring its financial reporting to separate income from Facebook Reality Labs and its family of apps.
The metaverse will be a social, 3D virtual place where you can have immersive experiences with others even if you can't be with them in person - and accomplish things together that you couldn't do in the real world.
"There are, of course, limitations to what we can do. We can't record everything that happens in VR eternally, for example, because it would be a breach of people's privacy, and the headset would eventually run out of memory and power "According to Bosworth.
"We want everyone to feel like they're in charge of their VR experience and safe on our platform," he continued. "End of story." Later, Bosworth reposted the post on Twitter.
According to Bosworth, Meta could use a stricter version of its existing community rules to moderate spaces like its Horizon Worlds VR platform, saying that VR or metaverse moderation could have "a stronger bias towards enforcement along with some sort of spectrum of warning, successively longer suspensions, and ultimately expulsion from multi-user spaces."
While the complete memo isn't public, Bosworth referenced it in a blog post later that day. The post, titled "Keeping people safe in VR and beyond," describes several of Meta's current VR moderation features, including the ability to block other users in virtual reality and a Horizon monitoring system for reviewing and reporting inappropriate activity.
Meta's older platforms, including Facebook and Instagram, have been chastised for major moderation failings, including delayed and ineffective reactions to hateful and violent content. The company's recent rebranding might be a fresh start, but as the document points out, virtual reality and virtual worlds will certainly face a whole new set of obstacles on top of the ones they already have.
"We frequently have candid dialogues about the problems we confront, the trade-offs involved, and the potential results of our work, both internally and externally," Bosworth said in the blog post. "There are difficult sociological and technological issues at hand, and we deal with them daily."