Building the Open Metaverse

Trust and Safety in the Metaverse

Tiffany Xingyu Wang, a leader at the intersection of AI and Trust and Safety, and Mark DeLoura, game industry veteran, join Patrick Cozzi (Cesium) and Marc Petit (Epic Games) to discuss Trust and Safety in the Metaverse. This episode covers technical and human challenges, policy and business considerations, and the opportunities ahead in building an ethical future for the metaverse.

Guests

Tiffany Xingyu Wang
Chief Strategy and Marketing Officer, Spectrum Labs; Co-Founder, Oasis Consortium
Mark DeLoura
Executive Director, Level Up Games


Announcer:

Today on Building The Open Metaverse.

Tiffany Xingyu Wang:

In the coming two years, we will see legislation in place, and it will look something like the GDPR (General Data Protection Regulation) for safety. But if you look at those pieces of legislation, they have different ideologies embedded behind them, because they think differently about what safety really means. So one size simply doesn't fit all.

Announcer:

Welcome to Building The Open Metaverse, where technology experts discuss how the community is building the open metaverse together, hosted by Patrick Cozzi from Cesium and Marc Petit from Epic Games.

Marc Petit:

All right. Hello, everybody. Welcome to our show, Building the Open Metaverse, the podcast where technologists share their insight on how the community is building the metaverse together. Hello, I'm Marc Petit from Epic Games, and my co-host is Patrick Cozzi from Cesium. Patrick, how are you today?

Patrick Cozzi:

Hi, Marc. I'm doing great. We have a lot to learn today.

Marc Petit:

Yeah, absolutely, because we're talking about a relatively complex topic. So we invited two experts to help us understand not just how we build a metaverse that's open, but also a metaverse that is safe for everyone. The topic, as you've understood, is trust and safety, and how they can be built and eventually enforced. Our first guest is Tiffany Xingyu Wang, Chief Strategy Officer at Spectrum Labs and also co-founder of the Oasis Consortium. Tiffany, welcome to the show.

Tiffany Xingyu Wang:

Thank you.

Marc Petit:

And our second guest is game industry veteran Mark DeLoura, who is currently working on an educational technology project but has a deep background in technology at companies like Sony, Ubisoft, and THQ, and was also a technology advisor to The White House during the Obama administration and, more recently, to the City of Seattle. Mark, welcome to the show.

Mark DeLoura:

Thanks Marc. Thanks Patrick. Nice to see you guys.

Patrick Cozzi:

Tiffany, to kick things off, could you tell us about your journey to the metaverse in your own words?

Tiffany Xingyu Wang:

Yes. To start off, I have to say my purpose in the metaverse is to build an ethical digital future in this new digital society. And it really excites me to think that as we are building the metaverse on Web 3 from the ground up, we actually have a huge opportunity to get things right this time around. And we can unpack a little bit where we got things wrong in the past two decades of the social web. Now, how I got here: I have been working at Spectrum Labs, focusing on digital safety. We use artificial intelligence to help digital platforms, meaning gaming platforms, dating platforms, eCommerce, and social media platforms, keep billions of people safe online. Now, as Marc and Patrick have always said on this podcast, the building blocks of the metaverse have really been there for years, for decades before this point.

Tiffany Xingyu Wang:

But the proliferation of the concept of the metaverse is now here. What I have observed is that the safety flaws and ethical flaws that we have seen in Web 2.0 will only be exacerbated if we don't have the ethical guardrails in place now and here. So for that reason, about two years ago I convened a group of experts, trust and safety leaders from different platforms, industries, and companies at different stages, and said, "Hey, if we have this chance right now, we should reach certain consensus and set certain guardrails and guidelines for any platform to reference, so that as we build technological innovations, we can embed the safety measures and the conscience in the products and in the technology right now." So that's my purpose and journey toward the metaverse.

Patrick Cozzi:

Yeah. Thanks Tiffany, really appreciate your passion and look forward to diving into your work. Before we do that, Mark, we'd love to hear about your journey to the metaverse.

Mark DeLoura:

Sure. Thanks, Patrick. This conversation makes me feel old, and I definitely have gray hair, so maybe some of that works out for me. I got my start in metaverse-related technologies back in the late eighties, I guess I would say. I like to call it the second bump of virtual reality, the first one being the Doug Engelbart era, the second one the late 80s and early 90s. So I was in grad school. I went to undergrad at the University of Washington, where there was a research lab popping up to look at virtual reality. This was led by Tom Furness, who'd done a bunch of work in the military in previous years. So I was just in the right place at the right time, wound up working on VR-related tech in school for four or five years, and ran a group on Usenet with an old friend, Bob Jacobson.

Mark DeLoura:

And that's how I started getting super excited about VR and the potential of VR specifically. When I got out of school, there really wasn't much in the way of VR out there to be done unless you were at a research institution, but there were a lot of video games. And luckily for me, video games were just evolving from being mostly 2D into 3D: what could we do in a 3D environment? I landed at Nintendo just as they were starting to come out with the Nintendo 64, which was a 3D platform, with Super Mario 64 really being the first big 3D game. And so I was able to apply what I had learned about creating worlds and 3D technologies, push it into video games, these spaces for people to play in, and find ways to make these spaces super engaging.

Mark DeLoura:

Since then, and this has been 20, 25 years for me now, I worked at Nintendo and Sony and Ubisoft and THQ and a bunch of startups, with lots of consulting, and about two thirds of the way along, I got lucky and found myself in The White House, working for President Obama in the Office of Science and Technology Policy. That's a group in The White House, varying from about 30 to a hundred people, who are focused on science and technology areas in which they have a particular expertise and think there's some way what they're working on can be advanced more quickly and benefit America broadly, whether that's nanomaterials or low-cost spacecraft, or, for me, how we use games and game-related technologies for learning, for healthcare, for physical fitness, for citizen science.

Mark DeLoura:

And then I also happened to be in the right place at the right time to talk about computer science education, and helped spin up the big K-12 computer science education effort that the Obama administration kicked off. So that got me really jazzed. I learned a lot about policy, which we'll talk about on this call. I'm always excited to talk about policy; it might sound weird, but since then I've been combining those worlds. How can we make exciting, engaging 3D worlds that are game-like but also teach you something, whatever it is you're trying to learn about the world or express to another person? How do I create a world that's engaging, that my parents might want to play in, and learn about this thing that I think is fascinating?

Mark DeLoura:

So that's what I'm up to these days. And I think it's interesting for me to use the term metaverse, just because in my head I think of the metaverse and VR kind of interchangeably. I know that saying metaverse also implies lots of other technologies, but what I tend to focus on really is the presence and the social aspect, and then all of the knock-on effects that come from that.

Marc Petit:

Well, thanks, Mark. We're happy to have you with us. You have this unique combination of in-depth technical expertise and knowledge of policy and government, so that's going to be interesting. Let's go back to trust and safety. Tiffany, you alluded to learning from 15 to 20 years of the social web. So what have we learned, and how do you use that knowledge to create a strong ethical basis for the metaverse?

Tiffany Xingyu Wang:

Yes. I think we should first do a state of the union, checking how we are and where we are today. So there are three stats. In the US alone, 40% of internet users have reported being harassed or subjected to hate speech; that's a safety concern. On the privacy side, every 39 seconds there is a data breach; that's the privacy issue. And we have all seen the reports from a couple of years ago that machines discriminate against human beings, partially because of the lack of diverse and inclusive data. In the facial recognition arena, machines recognize white males 34% better than dark-skinned females in certain circumstances. Now, that's where we are. As we are marching into this new era of the so-called Web 3, what I really look at is the fundamental technology paradigms that shape this Web 3.

Tiffany Xingyu Wang:

So we are really talking about, as Mark mentioned, the world of AR/VR, and the world that Patrick and Marc, you are creating, this super immersive universe. If you think about the issues of toxicity that we have seen prevailing so far in Web 2, hate speech, racism, even human trafficking and child pornography, all those issues can only be amplified. The impact will be much higher, and because this universe is persistent and interoperable, the truth is that content moderation will be harder, and the velocity toward toxicity will be much higher. If I look at the Capitol Hill insurrection, it was in part agitated by the toxic social media environment. And you can think of a metaverse without safety guardrails as the place to get to that catastrophic outcome much sooner. So in this first paradigm of the metaverse, we have to think about safety more seriously, and from the get-go.

Marc Petit:

Yeah. I have a question, actually, because one of the things that, being an optimist, I thought is this: Mark referenced presence and the sense of co-presence. If you are closer to people, you are much less anonymous than in chat. I know you can insult somebody very easily in chat, but I find it much more difficult to do over voice, because you have more of a connection with the person, and ultimately in the metaverse it will be closer still. The promise of the metaverse is social interaction that is closer to real life. So in my mind, I would have thought that there would be fewer issues, and now you're saying the time to issues is going to be faster. I'm sure there is some research and some thinking behind it. So is this going to be more difficult?

Tiffany Xingyu Wang:

Yeah. So there are two things here. One is that we have already seen toxicity issues in the audio space, and the cost to address audio issues is much higher because you need to store and process audio data. So it's actually more costly, and we have already seen issues there. We have all heard about the groping issues in Horizons, right? So when I say that when you have toxic behaviors, the impact will be higher and the velocity will be higher, it is because of those incidents, and because of technology advancements in the so-called audio renaissance and in this whole immersive environment. Because we haven't yet fully thought through how we do safety, we didn't actually embed the safety measures in the code as we proliferate the metaverse. And another thing you allude to, which is very interesting, is my observation across platforms of what I call the movable middle.

Tiffany Xingyu Wang:

It is always a very small population on a platform that forms the most toxic groups, and they then become the most visible groups of toxicity on the platform, but really about 80% of the platform's users are the movable middle. So one thing we can talk about later is how we incentivize positive play and positive behaviors, so that the movable middle can understand and mimic positive play and behaviors on the platform, and therefore reflect the true brand identity and game identity that platforms and brands actually want to convey to the broader community. Now, coming back to the other two paradigms: one is the rise of IoT. When you think about it, the devices are no longer just laptops, no longer just iPhones; they are VR/AR headsets, and actually every single device all across the supply chain.

Tiffany Xingyu Wang:

So today we think about privacy in a very centralized way: there is a chief privacy officer or chief security officer sitting in the corner office, or now in their home office, centralizing all measures about privacy. But with this new movement, we have to think about the people behind every single device, and there are a lot of privacy technologies we have to adopt with the rise of IoT. And I think the third technology paradigm under this definition of Web 3 is the semantic web concept. What it really means to me is that with the development of Web 2, today we see that 80% of the content online is user-generated content. In other words, we use user-generated content to inform the machines that make the decisions for the future. So if the content is not inclusive or diverse, we have seen incidents: back then, when Microsoft put the AI "Tay" on Twitter, the machine became racist overnight, right?

Tiffany Xingyu Wang:

And we can't let that happen in the metaverse. So how we think about the creator economy in the metaverse, in a way that can prevent that kind of incident from happening, is very important. So just to recap: when we talk about Web 3, we talk about a technological tsunami of the metaverse, IoT, and the semantic web with AI. But to make that sustainable, we have to think about the ethical aspect that comes with each paradigm, which is safety for the metaverse, privacy with IoT, and inclusion with the creator economy or the semantic web. That's how I think about what we call digital sustainability, because otherwise I can't see how the metaverse can survive upcoming regulations. I'm pretty sure Mark has a ton to weigh in on this, and on how we can keep governments from shutting down a metaverse because of the issues we can potentially see without guardrails.

Tiffany Xingyu Wang:

Nor can I see how people will come and stay if we don't create that inclusive and safe environment for people to live in, just as we have in the physical environment. Marc, as you mentioned, when we interact in person today we don't feel that we will assault each other, because fundamentally, for decades, for hundreds, even thousands of years, this concept of civility has existed in the physical world. It does not yet exist in the digital world; that is the digital civility we need to build out. Safety is one side of it, but positive play and positive behavior is the other side of it.

Mark DeLoura:

I'm curious, if you don't mind, if I jump in, because I guess I'm a programmer at heart, or an engineer at heart, so I have a habit of taking things apart. [Laughs] So I have questions about a lot of the things you said, all of which I fundamentally agree with. But when I think about civil society broadly, we have a lot of rules and constraints and systems built to make sure that people behave well, and still people don't behave well. So what do you think about, what are the systems that we need in place, aside from guardrails, that can incentivize people to do the right thing? Or are there situations you imagine where you have regions in which the standards are different? Over here, this is the right thing; over there, you can be called a doody in voice chat. You can choose. Have you thought about that?

Tiffany Xingyu Wang:

Oh gosh, I love it. So what I always say is that one size doesn't fit all in this space. It just doesn't, right? Just as in the physical world, different regions and different customs can be very different. So one size doesn't fit all; it is up to every single government to decide what the obligations should be. And we have seen that the EU, UK, and Australia have already been working on legislation. In the coming two years, we will see the legislation in place, and it will look something like the GDPR (General Data Protection Regulation) for safety. But if you look at those pieces of legislation, they have different ideologies embedded behind them, because they think differently about what safety really means. So one size simply doesn't fit all, not to mention that within a country, or even from a global perspective, a gaming platform can define a certain behavior very differently from a dating platform or a social media platform.

Tiffany Xingyu Wang:

Yeah. So one size simply doesn't fit all. It's a great question, Mark. And I don't know if this group wants to discuss a little bit the Oasis user safety standards that we launched on January 6th; we chose that date for a reason. But to address exactly the concern you mentioned, Mark, we launched the standards to really do two things. One is to prescribe the how: even though you can pursue different goals, the how can stay the same or similar across different platforms. So those are the best practices, and I can explain how that works. The other side of it is, if you think about it, I always find it interesting: when you do product development, if you build a business, you don't say, "I just want to do the bare minimum to be compliant with regulations."

Tiffany Xingyu Wang:

You don't say that. You say, "I want to go above and beyond to differentiate my products in the market and get more users." So why can't that be the case for safety? Especially at this moment in time, when all platforms are starting to lose the trust of users because of the safety, privacy, and inclusion issues we are seeing, and because Gen Z and the new generations care about those ethical aspects, why can't this become not only a moral imperative but a commercial imperative for platforms and brands, to think about how I can present my brand with the differentiation of being a safer platform? So really, the goals of the Oasis Consortium and the standards behind it are two. One is to give platforms the how for achieving those obligations. And the second is to make it a commercial imperative, as well as a moral imperative, to do it.

Tiffany Xingyu Wang:

And in terms of the how, I know you're programmers and engineers, so I'll give you the how. We call it the 5P framework. The key reason is that before the user safety standards, I personally struggled working with all the platforms, because different platforms have inconsistent policies, and then they have different tech stacks to enforce those policies, which is even harder, right? That's why the tech platforms' reaction to the upcoming regulations in the EU, UK, and Australia is a little bit rough: you don't switch on one button and suddenly safety appears on your platform. It really comes down to how you build the products and processes. So the 5 Ps are five methods, which stand for priority, people, product, process, and partnerships.

Tiffany Xingyu Wang:

And under each method, we have five to ten measures that any owner across those functions can use to implement tomorrow. To unpack a little here, and I can dive deeper into each measure if you want: on a high level, priority is about solving the problem I describe as "when five people own something, nobody owns it" in corporate America. It's a key issue in America or anywhere, but it's specifically applicable to a nascent but important industry like trust and safety. Because if you look at the head of trust and safety today, they can report to the privacy officer. They can report to the COO. Sometimes, in the ideal case, they report directly to the CEO. Sometimes they report to the CMO. So it sits anywhere and everywhere in the org.

Tiffany Xingyu Wang:

And you don't have one single owner who has the budget and team to do it. So the priority method showcases the platforms and brands who have done well in setting the priority and giving it resources, and how to do it. People is about how you hire in an inclusive and diverse way. In the earlier days, if you looked at the people working on community policy making and on the enforcement teams in trust and safety, they tended to be white males, and you can't avoid biases if you hire people from one very specific group. So it's very important to think about how you hire the policy and enforcement teams for your trust and safety function in a diverse way. Now let's get to the core, product and process, which you will especially care about; a lot of the technology work here is on the product side.

Tiffany Xingyu Wang:

I'll give you a few examples. Today, if you want to read the safety policies somewhere on a website, you click a button and go to a safety center, and most platforms don't even have one. But what we should really think about is how you surface that community policy along the user experience journey: when you sign on, when you do something right, or when you do something wrong, it should be embedded in your code, in your user experience, right? As much as we invest in growth features, we have never invested nearly as much in safety features. That's one example. For another, think about how you actually capture, collect, process, and store the data on those behaviors, so that when you work with enforcement, when certain incidents happen, the data is there as proof, or you can create analytics to enable transparency reporting for your platform, for brand purposes.

Tiffany Xingyu Wang:

Right? And another piece of the product development to think about is how you embed the enforcement tooling through content moderation, not only to react to toxic behaviors but to prevent them. If you see a piece of content that is toxic, you will know it. Do you decide to ban it, to prevent it from being posted? We have seen certain platforms do that pretty well. But then there is what we call shadow banning: you didn't actually explain why it was banned, and how do you handle that in the product? Now, if you ban it, and it was a true case, not a false positive or a false negative, how do you actually educate the users to behave appropriately next time, without leaving too much to individual interpretation? All those aspects go toward creating digital civility. Creating civility is like when we grow up and our parents tell us, "Don't do that."

Tiffany Xingyu Wang:

"The best manners would be this." And we don't have that product user flow when we engage with any platform today, right? So that's the product development piece; all those measures address what we can do there. Process is the method with the longest list of measures, because what we have observed in the market is that over the past five to ten years, platforms have gotten much better at creating community policies tied to their brand and identity. However, the scandals, when you see them in the headlines of the New York Times or the Wall Street Journal and across the media, usually happen when enforcement falls short. That means that when you use humans or machines to identify whether a behavior is toxic or not, there will be false positives and false negatives.

Tiffany Xingyu Wang:

It's just sheer volume and math, right? If you have hundreds of millions of active users and billions of messages every month, even if you catch 99.9% of the cases, there will be cases missed, and that is usually what gets you into trouble, because you cannot prevent every case that will exist. But there are so many things we can do to make the enforcement more buttoned up. For example, most platforms don't have an appeal process: if it's a false positive case, users don't know whom to tell. Then there are oversight boards, etc. So there's a whole list of ways to make sure all the processes are in place. And the last P is partnerships: we have seen different countries issuing regulations.

Tiffany Xingyu Wang:

From the commercial and brand perspective, it's very important not to be the last bear running down the hill, right? Make sure you stay ahead of the curve working with governments. We also think about how to work with nonprofits like Oasis to get the best practices and enforce them, but also about working with other nonprofits who specialize in countering human trafficking and child pornography. Those are illegal behaviors offline, and if found online, especially under new regulations, they will be considered illegal and there will be penalties for the platform. So partner with all those nonprofits to stay ahead of the curve, and also think about how to partner with the media. You don't want to talk with the media when a crisis has already happened. You want to talk with the media ahead of time, to showcase how you are leading the way in thinking about it and to make people understand that it's not a rosy picture today.

Tiffany Xingyu Wang:

It's a hard problem to solve, but you are the platform and brand who does the most. So I think it's very important to think about those 5 Ps and rally companies around them, to make sure this is not only about compliance but also becomes a strategic driver for the business, because in this new era the community is the brand. If the community is not safe, and if they don't rave about how inclusive your platform is, it will not be sustainable. So that's hopefully a detailed enough answer, Marc, to your question of how we actually do it hands-on.
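
To make the product and process measures above concrete, here is a minimal sketch of an enforcement flow that avoids silent shadow bans and leaves room for appeals. It is an illustration only: the names, the threshold, and the idea of a single toxicity score are assumptions, not Spectrum Labs' or any platform's actual tooling.

```python
# Hypothetical sketch: visible, appealable enforcement (illustrative only).
from dataclasses import dataclass

@dataclass
class Decision:
    action: str       # "allow" or "block"
    reason: str       # always surfaced to the user, never hidden
    appealable: bool  # false positives need a path to review

def enforce(toxicity: float, policy_threshold: float = 0.8) -> Decision:
    """Decide on a piece of content given a model's toxicity score."""
    if toxicity < policy_threshold:
        return Decision("allow", "within community policy", appealable=False)
    # Block visibly: name the policy violated, so enforcement educates
    # the user instead of silently shadow-banning them.
    return Decision(
        "block",
        "violates the harassment policy (see the safety center)",
        appealable=True,  # appeals route to human review, catching false positives
    )

print(enforce(0.95))
```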

Marc Petit:

Well, I just want to say: at Epic, I'm observing this. We did the Lego announcement, and we said as much, that our intention is to create a very safe environment, and the magnitude of the problems you have to solve, and the level of awareness required, is actually huge. We have a group called SuperAwesome, led by Dylan Collins. The complexity of doing the right thing, and then matching the various frameworks you have, the legal frameworks, the platform rules, makes it a very, very complex problem. And anybody who wants to create an online community will need to have this top of mind. First, it needs to work, it needs to have no lag, yes, but it also has to have some of the basic measures that you talk about. I can attest that it's a very complex problem to solve.

Marc Petit:

Then moderation is such an expensive item as well. It takes thousands of people to keep an online community at scale together. So Mark, you've been exposed to government. I know it's hard to guess, but how do you think governments look at this, and which roles should the various governments play in these early stages of the metaverse, given those challenges?

Mark DeLoura:

Yeah. My guess would be that there's not much attention being paid to it at the moment, because it's early, even though, as I say, it stems back 50, 60, whatever years, before my time, to Doug Engelbart and even further back. I think one of the really delicate balances with government, and with people who are experts at government, who have been in government and focused on policy and regulation and incentivization for a long time, is that they understand there needs to be a balance. If you get into an ecosystem too early and start making regulations, setting up guardrails, and telling people what they can or cannot do, you might quell innovation that would have happened otherwise.

Mark DeLoura:

And you also make the barrier to entry for smaller companies a lot higher, two things which you really want not to do. So it's hard to decide when to jump in; I think that is one of the big challenges. At the same time, government's job isn't only guardrails. It's not only telling you what you can't do. It's trying to move the country forward and find ways to accelerate parts of the economy that are doing well and can benefit Americans, or benefit whatever country.

Mark DeLoura:

So how do you do that as well? You've got some people who are thinking to themselves, "Great, the metaverse looks like it could benefit our economy in so many different aspects. How do I encourage people to focus on whatever area they're in?" So let's say somebody at NASA: how do I use the metaverse to further interest in space? To make sensors and experiments and space more accessible to... everybody, not just people who are up there in the space station? Things like this. And to find the people out there who are working on things related to this space, who are going to have interesting ideas, and surface those. And then there are other people whose job it is to look at that and say, "Well, hey, metaverse folks, you're doing a really terrible job at keeping kids under 10 safe."

Mark DeLoura:

And I'm going to say, "Here's a body of regulations that you're going to need to pay attention to, and if you don't, there are some ramifications for that." So you've got different groups of people trying different things inside of government. And I think what we're seeing now is this popcorn-popping of different efforts in different countries, different places around the world, focusing on different aspects. You've got GDPR in the EU; I was even thinking about China's real-name policy, which is, what, eight or ten years old now? It's a reaction to the same thing. And then we still have things like Gamergate, which popped up 10 years ago. Just go into any online video game and try to have a chat; in a multiplayer competitive game, try to have any kind of reasonable chat.

Mark DeLoura:

That chat is just horrific. I just mute it these days, to be honest. But this is sort of a grown, adapted behavior. I always flash back to the first time I played Final Fantasy XI, back in the PlayStation 2 days. I got on Final Fantasy XI and it was 9:30 in the morning my time, Pacific time, and I was running around and ran into somebody, and they were trying to talk to me. Final Fantasy XI had this really interesting system where you would pick words from a list, and it had all these words translated. So to somebody in another country it was like, oh, you said "Hello," great, and it would show that in Japanese.

Mark DeLoura:

So you could have these really broken conversations. And this was an effort by them to do two things: one, to encourage cross-cultural communication, which is super fantastic, and two, to try to prevent toxic behavior and the kinds of conversations they didn't want to see happen. That's a trust and safety perspective. But you know how creative players are, right? We are all familiar with peaches and eggplants and things like this. There may never be words to express the thing you're trying to express, but people will find a way to express it. And this is really one of the challenges as we go forward in the metaverse. Not only do we all have different standards about what is acceptable and what is not, both culturally and personally; we just have really creative ways of communicating. And if somebody wants to say something, they're going to say it. Do you have evolving AI?

Mark DeLoura:

Do you have armies of people behind the scenes watching all the real-time chats? For a tiny little company, it just makes your head explode to try to do any of these things. And yet you still want to be able to provide a service that's reliable and safe for your player base. So there are a lot of challenges. I think one of the interesting things for me is what we've tried in the game industry. There have been various efforts over time, and Marc, I'm sure you're familiar with a lot of these, to focus on diversity and inclusion, to focus on trust and safety, and, when we first started having online games, to find ways to decrease the amount of toxic behavior and conversation. Some work well, some don't.

Mark DeLoura:

We don't have a really good habit of building off of each other's work, unfortunately, but it sounds like that's getting better. But how do we take advantage of all of that body of material, and then, by identifying the problems we have, encourage an ecosystem of technologies, middleware, open source, whatever it is, so that somebody who's trying to sprout up some new metaverse, or some new region of the metaverse, has a tool they can just grab to make sure their environment is as safe as possible, and not have to completely reinvent the wheel or hire an army of 10,000 people to monitor the chat?

Mark DeLoura:

And I think those are the things we're starting to think about. Some of that was developed in the game space, and I hope we can use it and learn from it. But wow, does that have to grow and develop in the metaverse space, times ten, because we're trying to simulate everything, ready, go. It's very hard. So yeah, you asked me a question about government and I kind of ran off into the weeds. But I think with all of these efforts, we're trying to make a system that the people who inhabit it can feel safe to be in. There are push methods and pull methods; you can incentivize and you can build guardrails. We need to do all of these things, and we need to be flexible about it. It's a hard problem. We'll never solve it, but we'll get better and better the more we focus on it.

Marc Petit:

Yeah, I like the idea. We talk a lot about the challenges, and I think to some extent the past 15 years of problems have raised public awareness. If we can make safety a strategic, competitive differentiator for platforms, and get people to compete on that, I think that's good. And I think you guys coming up with standards is actually really good, because it helps people think about it. As you know, we have this very recent Metaverse Standards Forum, and I'm really hopeful that we can bring the trust and safety conversation into that effort.

Tiffany Xingyu Wang:

Yeah. And what I love about what you both said, Mark and Marc, is that this is a super hard problem, mainly because of the inconsistencies so far, because every platform went ahead building what was working back then, and often it was stop-gap hacks. Right. And what the Oasis standards did is say, "Hey, let's take the collective wisdom of the past 15 years, learn what didn't work and what worked, and make that available for everyone. So if you build a new platform tomorrow, you don't need to start from scratch, and you don't need to make the same mistakes. Take that forward." That's one thing. The other thing is the evolutionary nature of this space. Mark, what you said was very interesting; that's what we have observed. Players and users are super creative, and they can find ways around keyword-based moderation tooling, right?

Tiffany Xingyu Wang:

I mean, I know you will bleep me out, so I'm not going to say the word, but the F word is profanity, right? In the last generation of tooling, which is keyword based, it's defined as profanity. But if the phrase is "this is F-ing awesome," there's nothing wrong with it, right? It's a positive sentiment. But if it appears in the context of potential white supremacy issues, or a child pornography issue, then it's a severely toxic issue. So we are evolving into the contextual AI space. Now, we all know in this room that AI is only as good as its data. And people find very creative ways to get around a word, with emojis, with different variations of the word.

Tiffany Xingyu Wang:

And so what I always say is that we need to stay fluent in internet language. We need to understand the next generation's language, not only for positive behaviors but also for toxic behaviors, and then enable the AI engine to understand it. So there is a way. It's very expensive to develop, but once you develop the data corpus of this generation's language, ideally you can open source it so that all platforms can use it and save the cost of reinventing the wheel.
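
To illustrate the shift Tiffany describes from keyword matching to contextual moderation, here is a minimal sketch. The "contextual model" is a stand-in stub with made-up markers and scores; a real system would run a trained classifier over the message and its surrounding conversation.

```python
# Sketch: keyword filtering vs. context-aware moderation (illustrative only).

BLOCKLIST = {"f***"}  # last-generation approach: flag the token itself

def keyword_flag(message: str) -> bool:
    """Flags any message containing a blocklisted token, regardless of context."""
    return any(token in message.lower() for token in BLOCKLIST)

def contextual_score(message: str, history: list[str]) -> float:
    """Stand-in for a contextual model: it scores the message together with
    the surrounding conversation, so the same word can be benign or severe."""
    text = " ".join(history + [message]).lower()
    # A trained model would go here; these hand-picked markers only
    # illustrate that context, not the keyword, drives the verdict.
    if any(marker in text for marker in ("white supremacy", "trafficking")):
        return 0.95  # severe: same word, hostile context
    return 0.05      # benign: positive sentiment, e.g. "f*** awesome"

msg = "this is f*** awesome"
print(keyword_flag(msg))                        # True: keyword tooling over-blocks
print(contextual_score(msg, ["great match!"]))  # 0.05: context says it's praise
```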

Tiffany Xingyu Wang:

So I want to highlight that it's a very expensive problem to solve. And I think there's also an attitude today in the media and in the industry that if one thing goes wrong, we should all attack it. People need to acknowledge that it's super hard, and these platforms spend tens of millions of dollars, at a minimum, investing in this. So having standards, for me, also does the job of building empathy about how hard it is, providing a benchmark to cross-check, at every single stage, how much progress we've made, and enabling the people behind this work to say, just like product management or DevOps, that it is a proper discipline you need to invest in, develop, and evolve.

Mark DeLoura:

But I think what you've identified is a perfect example of where government should be able to make a difference. You're talking about a technology that is extremely expensive to make and has to be adaptive, and you said ideally you would open source it. These two things don't go together very well very often. But one place where they do is when you get somebody like the National Science Foundation to come in and incentivize it: run a competition, put millions of dollars behind it, get some cooperative partners to multiply the amount of money in the pot, and you can get these kinds of technologies developed. But it's really hard to do that without some kind of independent entity that's not profit-driven to say, "Go spend $10 million. And then can you give me that thing you just made?"

Tiffany Xingyu Wang:

Yeah. So both Oasis and Spectrum collaborate very closely with, for example, the UK government. They're looking into developing lexicons of those behaviors, and we try to partner with them to help the government better understand the challenge the private sector faces in investing in this, and how fast this problem has been evolving, so that when they build the regulations, they understand it's not one size fits all across companies at different stages. Right? Mark, one thing you mentioned is that you don't want to apply the same rules to the smallest companies as to a very large company; otherwise, you stifle innovation, right? So we collaborate with them so they understand the challenge and how the industry evolves. And to your point, yeah, I think that's where governments can play a huge role.

Marc Petit:

Can I come back to Web 3? It's a topic I've heard questions about a few times, and I think it's always an interesting one. Web 3 is based on wallets and anonymity, and one thing that keeps us honest in real life is our reputation. If you can have an infinite number of identities in the metaverse, any attempt by a given platform to manage your reputation will fail, because you can show up as somebody else. So how do we think about identity? Should we have a single identity in the metaverse, just as we have in the real world? I know that might be going too fast, but how do people think about this notion of identity and creating accountability through your reputation?

Mark DeLoura:

I'm not sure we can look at systems that have tried forcing people to have a singular identity and say that there's been success. So I'm not sure we should copy that. At the same time, it is definitely something we all want to try, because we think that in normal society we have these singular identities and that this forces us to behave; but I'm not sure that's true. I don't know. What do you think, Tiffany? I think it's a challenging problem.

Tiffany Xingyu Wang:

Oh gosh. I honestly love this topic so much, because I do think we haven't figured it out fully, and it really goes back to quite a philosophical discussion as well. That's why I love it. It would be foolish for me to say I know the answer, but I can share a few thoughts in progress right now. I think we're trying to strike a balance between the convenience and value creation behind identity, and the ethical aspects, meaning the safety, privacy, and security behind it. To unpack that a little: I see huge value in having one single identity to enable interoperability, because when you have that identity, you have ownership of assets, and then you can move things along just as in the physical world. So I see so much value creation around that.

Tiffany Xingyu Wang:

So I'm a big proponent of creating that identity. Maybe in the beginning it's not across all platforms, but through certain partnerships, right? And for me it's even more important from a use-case perspective. If you look at all the gaming platforms that want to go into entertainment, and all the social media platforms that want to go into dating and gaming, it's only a matter of time before partnerships happen and identity crosses over different use cases. But on the other side, the tricky part is that when you have one single identity, just as in the physical world, we behave differently in one circumstance or situation than in another. So maybe one thing we should start doing is keeping a reputational score within each platform, until we are ready to transport it across different platforms. So that's one thing.

Tiffany Xingyu Wang:

And the other aspect of the safety measures attached to identity is that today, from an infrastructure perspective, different platforms create policies differently and enforce those policies differently. That's one thing Oasis tries to resolve: if you have the 5 Ps and their measures, and every single platform is doing things in a quite similar, standardized way, then maybe one day we can actually connect those platforms in an easier way, enabling safety behind each identity. So I think that infrastructure has to happen before we can transfer identities from one platform to the other. And then there are more conversations, of course, around privacy and security, but I would say they involve very similar considerations, in terms of how privacy and security measures are done today, to connect those platforms at the infrastructure level and enable a global identity.
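
Here is a minimal sketch of the per-platform reputation Tiffany describes, where a score stays scoped to one platform until standardized policies make cross-platform transfer safe. The types and the scoring rule are hypothetical illustrations, not any platform's actual scheme.

```python
# Hypothetical sketch: one identity, with reputation scoped per platform.
from dataclasses import dataclass, field

@dataclass
class PlatformReputation:
    platform: str
    score: float = 1.0  # starts neutral; platform policy adjusts it

    def record_violation(self, severity: float) -> None:
        # Enforcement lowers the score on this platform only.
        self.score = max(0.0, self.score - severity)

@dataclass
class Identity:
    handle: str
    reputations: dict[str, PlatformReputation] = field(default_factory=dict)

    def reputation_on(self, platform: str) -> PlatformReputation:
        # Scores stay siloed until standardized policies and enforcement
        # (the 5 Ps) make cross-platform transfer trustworthy.
        return self.reputations.setdefault(platform, PlatformReputation(platform))

user = Identity("avatar_42")
user.reputation_on("game_world").record_violation(0.3)
print(user.reputation_on("game_world").score)    # 0.7 on this platform
print(user.reputation_on("social_world").score)  # 1.0: not transported
```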

Mark DeLoura:

I guess the question really is, "What is the motivation behind wanting to have a singular identity?" What do we think having that as a rule provides us? A lot of the time it centers around safety and being able to hold people accountable for what they say online. So you see places like newspaper comment threads that say you have to use your real name, because they want people to behave and be accountable. But you can also imagine other communities where, for example, people who are exploring a transgender identity can go and try different identities out and see how they feel for themselves, and that's really appropriate. So there's no one size fits all. For a long time I really thought that the singular identity was a good idea, but I think I've changed my mind on that.

Marc Petit:

Yeah. We do have one identity, but multiple personas and so we would need to mimic that. So Patrick, take us home. We've been talking quite a bit here.

Patrick Cozzi:

Yeah. Well, Mark, Tiffany, thank you both so much for joining us. One thing we like to do to round out the episode is a shout-out: is there any person or organization that you want to give a shout-out to? Tiffany, do you want to go first?

Tiffany Xingyu Wang:

Yes. So on this occasion, I will give the shout-out to the Metaverse Standards Forum, which Patrick and Marc, I know you are deeply involved in and helping to lead. I'll tell you the reason. I would say that Spectrum does a fantastic job driving technological innovation in safety technologies, and Oasis focuses on the ethical measures for the metaverse. As I spend most of my time thinking about how to create the ethical aspects of the metaverse, I need a place where I can be involved and absorb all the latest technological developments effectively and efficiently. And I've waited for a forum like this for a long time, where I can not only inform the technologists how policy needs to be made from the get-go, but also call on the conscience of technologists to write those codes together with all the other features they're building. So a big shout-out for the launch of the Forum. I'm very excited about what it means for the metaverse, and I'm very bullish on it.

Marc Petit:

Well, thank you. We will actually talk about the Metaverse Standards Forum on this podcast in our next episode.

Tiffany Xingyu Wang:

There you go!

Mark DeLoura:

I think I have sort of two buckets of things I'd vector people toward, that I really want to shout out just so people will point their web browsers at them. One is focusing on what has been done in the games industry in this sort of sector, and there are two things I'd suggest you look up. One is an organization called Take This, which focuses on mental health and well-being in the game space. The second is the Games and Online Harassment Hotline, which is a startup by Anita Sarkeesian of Feminist Frequency and a few other folks. Both have done really interesting work on mental health and on these spaces we inhabit and how to make them safe for people. So we should definitely try to leverage all of the material they've created and what they have learned.

Mark DeLoura:

And then the second bucket: we've talked a bit about policy today, and I think policy has a habit of being a thing that other people create. You always think, "Oh, government's going to force that on me or make me do a thing." But government is just people, and I always think: people make policy. So you're a people, I'm a people; why can't I make policy? How do I learn how to make policy? So I would point you to a couple of quick resources. Certainly some internet searches will find you all sorts of things, but I really love the Day One Project, which was an effort by the Federation of American Scientists, started just before this presidential term, to get people to be policy entrepreneurs, create policy ideas, and help them flesh those out.

Mark DeLoura:

So that potential future administrations could run with these policies. And then another group, which focuses more on high school and early university age folks, is called the Hack+Policy Foundation. I've worked with them a little in the past. They're a super interesting global organization that tries to encourage kids to think: if you could change the world through policy, what would you do? What would you try to change? How would you try to impact your environment? Now let me help you create a two-page or four-page policy proposal that maybe we can circulate to your government officials and see if you can make it happen. So when you think about these kinds of regulations and incentivization systems, it's not somebody else who has to be doing it. You can do it too. And you should.

Marc Petit:

Well, thank you, Mark. I never thought I would hear the phrase "policy entrepreneur," and I still have to digest it, but I really like the call to action. One thing I want to say is that I was very lucky to go through racial sensitivity training, and the biases are real and deeply rooted. Sometimes you hear about these things and say, "I'm not like this," but it takes a lot of effort and a lot of awareness to actually not carry those biases through your natural behavior. They're deeply rooted, and we all need to work a lot on those things. Tiffany, it is probably worth mentioning, especially as the decision makers in this space tend to be majority white male. The bias is real, so let's make sure we are all aware of it. Well, Patrick?

Patrick Cozzi:

Fantastic episode.

Tiffany Xingyu Wang:

A big shout-out to Marc and Patrick for surfacing this critical topic. It is urgent and important for technologists to drive ethics, and for ethicists to gain foresight into technological changes.

Marc Petit:

Well, Tiffany, thank you so much; look up the Oasis Consortium on the web, because I think the user safety standards are really fantastic. Thank you for being such a passionate advocate on this important topic. Mark, it was a pleasure seeing you, and I know you're still involved in a number of good causes, so keep up the good work. A big thank you to our listeners; we keep getting good feedback, so hit us up on social, give us feedback, give us topics. It was a great day. Good to be with you guys today. Thank you so much.

Patrick Cozzi:

Thank you and goodbye.