Building the Open Metaverse

3D on the Web

Vladimir Vukićević, Director of Lightweight XR at Unity Technologies, and inventor of WebGL, joins hosts Patrick Cozzi (Cesium) and Marc Petit (Epic Games) to discuss the origin and current state of 3D on the web, open standards for the metaverse, and more.

Guests

Vladimir Vukićević
Director of Lightweight XR, Unity Technologies

Read

Announcer:

Today on Building the Open Metaverse ...

Vladimir Vukićević:

I think the biggest difference between any kind of web API and a native API is really security. Once you have a native application running locally, all bets are off. It can do whatever it wants. But once you are just a link away, a click away, from who knows what, you need to inject a lot of security constraints, which does impose a performance hit.

Vladimir Vukićević:

I think we can get pretty close, though. I think there's a bit of a double-edged sword here because I think the graphics hardware vendors and the driver vendors could implement some of these security constraints much more efficiently, but they would have to do it in a way that the browser vendors would actually be able to trust them.

Announcer:

Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together, hosted by Patrick Cozzi from Cesium and Marc Petit from Epic Games.

Marc Petit:

So welcome to our show, Building the Open Metaverse, the podcast where technologists share their insights on how the community is building the metaverse together. And today, we're extremely fortunate. Our guest is Vladimir Vukićević, and his team at Unity focuses on enabling developers to bring content to a broad range of environments, including by using cutting-edge technology optimized for both size and performance.

Marc Petit:

What's also fascinating is that prior to joining Unity, Vlad spent 10 years at Mozilla working on foundational web technologies and the Firefox web browser, and he's the inventor of WebGL. And what a credential: WebGL is the standard for high-performance 3D on the web, and he was also the initiator of WebVR, which brought VR capabilities to the web. So we're so happy to have you with us. Welcome, Vlad.

Vladimir Vukićević:

Thanks, Marc. Thanks, Patrick. Yeah, thank you for having me. I'm really looking forward to the discussion.

Patrick Cozzi:

So Vlad, it's such a pleasure to have you. Both I personally and Cesium are such big fans of WebGL; we've really built all of our tech on it. To start off, we always ask our guests about their journey to the metaverse, and, as part of that, can you please include the WebGL origin story?

Vladimir Vukićević:

Absolutely. Yeah, I mean, originally I come from a heavy open source background. I started working on the Linux GNOME desktop at a company called Helix Code back in Boston in the early 2000s. So I've always been interested in open source and really enabling developers to have access to interesting, cutting-edge technology. I ended up at Mozilla just before Firefox 1.0 with a passion for media. I wanted to give web developers access to video, audio, graphics, all of those things, although back then the web was really ... animated GIFs were kind of the new hotness at the time.

Vladimir Vukićević:

So WebGL really started, actually, with Apple, around the time of Mac OS X Tiger, I think, 10.4. Apple released something called Dashboard, where they included this canvas element, and back then, there was actually a big uproar in the web community about how dare Apple extend the web and do all these things. But I saw that at Mozilla, and I said, "Hey, this is really cool. You can actually do programmatic graphics in the web browser with this API."

Vladimir Vukićević:

I set about very quickly re-implementing a version of the 2D canvas API. I think we shipped it quickly after. And one thing that I don't get a lot of credit for, but that I think is actually pretty important, is that the original Apple API used these Apple-style, extremely verbose methods, "add line to current path," "move to point on current path," and so on. I really pushed to make those more of the [inaudible 00:03:55] style, "moveTo," "lineTo," so that we weren't breaking our keyboards typing all of that.
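
For reference, that terse path style is what shipped and is still the standard 2D canvas API today. A minimal sketch, assuming the page contains a canvas element; the triangle drawn here is just illustrative:

```typescript
// Minimal 2D canvas sketch using the short path methods discussed above.
// The page is assumed to contain a <canvas> element; the shape is arbitrary.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

ctx.beginPath();
ctx.moveTo(20, 20);   // "moveTo" instead of "move to point on current path"
ctx.lineTo(120, 20);  // "lineTo" instead of "add line to current path"
ctx.lineTo(70, 100);
ctx.closePath();
ctx.stroke();
```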

Marc Petit:

Well, thank you very much for that.

Vladimir Vukićević:

You're welcome. Shortly after that, though, I turned my sights to 3D. I've always had a passion for 3D graphics, really deeply interested both in what you can do just for the visualization aspect and in what you can do to actually create interactive experiences. And the thing with web 3D back then, I think the reason why I ended up doing it, is that a lot of people told me it was impossible. I had a lot of people basically say, "The web can never do 3D. It's this really slow, clunky thing. It's never going to happen." And that didn't seem right. "No, of course it can." You've got a programming language. Sure, it's slow at the time, but it'll get faster.

Vladimir Vukićević:

And so I went ahead and gave it a shot. Back then, it was very hacky: OpenGL, P-buffers, all that stuff. Five surface copies per frame to get it rendering in the browser. But it worked. You had the spinning cubes, and you had some textured things, and it grew from there. And enough folks, both in the web community and the graphics community, started seeing the value, started seeing that there is actually something there. And of course, at that point, the standardization effort began with Khronos, and folks like Neil Trevett were instrumental in helping shepherd that along. And we ended up where we are now, with WebGL available on pretty much every computing platform there is, so that you can actually finally do 3D on the web.
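
For readers who have never touched the API that came out of that effort, the entry point is a single context request on a canvas element. A minimal sketch that just clears the screen, the step that comes before any spinning cube:

```typescript
// Minimal WebGL setup: create a canvas, request a "webgl" context, clear it.
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);

const gl = canvas.getContext("webgl");
if (!gl) {
  throw new Error("WebGL is not supported in this browser");
}

gl.clearColor(0.1, 0.1, 0.1, 1.0); // dark gray background
gl.clear(gl.COLOR_BUFFER_BIT);     // geometry and shaders would follow from here
```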

Marc Petit:

I mean, a newbie question. We have decent hardware on our phones and on our PCs, but let's think about our phones. How far are we from parity, having the same capabilities from within a browser as in a native app? And are there any steps or roadblocks that we need to clear to get to that milestone?

Vladimir Vukićević:

Yeah, Marc, it's a great question. I think the biggest difference between any kind of web API and a native API is really security. Once you have a native application running locally, all bets are off. It can do whatever it wants. But once you are just a link away, a click away, from who knows what, you need to inject a lot of security constraints, which does impose a performance hit.

Vladimir Vukićević:

I think we can get pretty close, though. I think there's a bit of a double-edged sword here, because I think the graphics hardware vendors and the driver vendors could implement some of these security constraints much more efficiently, but they would have to do it in a way that the browser vendors would actually be able to trust, because if you're a browser vendor and there's an exploit caused through 3D in somebody's driver, it doesn't really matter who caused it. Your users are being exploited.

Vladimir Vukićević:

So it's tricky. I mean, one of the interesting things is ... I think WebGL actually helped driver stability quite a bit. I think something like Shadertoy, especially, really drove fixing a lot of bugs and issues that even games didn't. I mean, if you're shipping a game and the game crashes on various drivers, you're going to try to work around it because you don't want the users to have that experience, but if you're making a Shadertoy that works on your particular machine and crashes somebody else's, not your problem.

Vladimir Vukićević:

So I think the web has driven a lot of that stability, but I think it's always going to be a challenge. I don't think it's ever going to get to full parity, but with new APIs like WebGPU and such really bringing things a little bit closer to the middle, I think we'll get much closer. I think we'll get to a point where it almost doesn't really matter. It'll be an insignificant difference.

Patrick Cozzi:

Vlad, do you think those experiences will be written in JavaScript, or will they be written in WebAssembly?

Vladimir Vukićević:

I think that the web right now has a bit of an identity crisis. We see people talking about Web 2.0, Web 3.0, and things like HTML and CSS ... this is completely my opinion ... I don't think actually have a place as the core of the web of the future. I think those things need to be taken and put in a box that is your traditional, document-oriented web, and we need to define a new web that is based on things like WebAssembly and WebGPU.
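
As a concrete anchor for that second pillar, WebGPU exposes the GPU through an adapter-and-device handshake rather than a canvas-bound GL context. A minimal initialization sketch, assuming a browser with WebGPU enabled and the WebGPU type definitions (e.g. @webgpu/types) available:

```typescript
// Minimal WebGPU initialization: request an adapter, then a logical device.
// Shaders, pipelines, and command encoding would build on the returned device.
async function initWebGPU(): Promise<GPUDevice> {
  if (!navigator.gpu) {
    throw new Error("WebGPU is not available in this browser");
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error("No suitable GPU adapter was found");
  }
  return adapter.requestDevice();
}

initWebGPU().then(() => console.log("WebGPU device ready"));
```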

Vladimir Vukićević:

We're going to lose some things there for sure. We're going to lose things like "view source," which was one of the pillars of how the web got to where it is today. But honestly, we've lost that anyway. If you go to pretty much any site and you try to actually look at its source, it's useless for learning. It's useless for modifying. It's sort of this theoretical, nice-to-have thing that is already lost, and we could gain a lot in performance and capability by boxing some of that old technology away and really focusing on the future.

Vladimir Vukićević:

The other aspect of that is, I think the web today is very complex. If you wanted to start creating a new web browser today, you couldn't do it. There is no way to have enough of an investment to create a web browser from scratch. And I think that is a massive, massive negative of the web, because it completely stifles competition. Nobody can try to do anything better, faster, different. You are going to spend all your time just trying to catch up to the current state, and if you don't do it perfectly, then nobody's going to use your tools, because they can't actually participate in the web.

Marc Petit:

Do you think WebAssembly makes this problem easier? Does it make that barrier to entry lower or higher?

Vladimir Vukićević:

I think WebAssembly makes the problem easier, in that you now have access to a wide range of programming languages, some of which are going to be simpler or better suited to whatever capability you're looking for. You now have choice. You're not stuck with just JavaScript and its pros and cons. So I think it lowers the barrier to entry for developers, and for folks that are trying to create a new browser or a new way of accessing the web, if we can find a way to actually sandbox things like HTML and CSS and whatever else, then maybe we could use some of the current engines, essentially, in that box. You have your HTML rendering engine, and it lives in that box, but you get to innovate on the WebAssembly side. You get to innovate on the performance of WebGPU and integration and all of those pieces, which is where I think a lot of the rich evolution is really going to come from.
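
To make the "wide range of languages" point concrete, the browser-side glue for any WebAssembly module looks roughly the same regardless of what the module was compiled from. A minimal sketch; the module URL and its exported add function are hypothetical:

```typescript
// Fetch, compile, and instantiate a WebAssembly module, then call an export.
// "example.wasm" and its "add" export are placeholders; the module could have
// been compiled from C, C++, Rust, or any other language with a wasm target.
async function runModule(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("example.wasm"),
    { env: {} } // imports the module expects; empty for this hypothetical module
  );
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5
}

runModule();
```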

Patrick Cozzi:

So Vlad, say I'm a software developer, maybe a few years from now, and I'm thinking about making an immersive 3D experience for the metaverse. What do you think the trade-offs are going to be? Am I targeting native? Am I targeting web? Am I targeting both?

Vladimir Vukićević:

That is a great question. I think we would have to figure out what the metaverse is first. Because the metaverse might actually not have any native software. It might all be something like WebAssembly modules or whatever else. On the flip side, it might actually be all native software, where it is a single environment that you run on whatever device you have and you have a walled garden, limited experience. I think that we need to figure that piece out first, and what's likely going to happen is we're actually going to have all of those options.

Vladimir Vukićević:

So I think if anybody wants to make an immersive experience for the metaverse, pretty much anything that they pick today will likely be relevant or fit in some niche over the next few years. It's going to take a while until we actually settle on a particular model that really works for creating the environment that we want. And I hope that we do actually get to explore all of these areas. I hope somebody does try to build the all-in-one metaverse that is a single native app, and somebody tries to build the fully distributed, everything-dynamic, everything-downloaded metaverse as well, because we need to try every point on the spectrum in between.

Marc Petit:

Are we seeing game developers give WebAssembly a try to create the one-click-away game that's a decent interactive experience?

Vladimir Vukićević:

We are. The holy grail for a lot of types of games, especially mobile games, is for user acquisition, that one click and you're immediately playing the game. The tricky part is I think we don't quite have the right development model to make that possible. Pretty much all of our engines today involve some kind of large asset download, then a large startup to set up everything, just basically doing a lot of work up front to get to playing.

Vladimir Vukićević:

One of the things that I was working on at Unity, before focusing on lightweight XR, was something called Project Tiny, where we were trying to use some of our data-oriented approach to really reduce some of that initial work. What if all your asset data and your scene data were pre-processed in a way that you could just load into memory and go? It turns out you can get fairly far with that approach, but it does require a shift in how you build content. And so there's a tricky problem there, where in general, developers really want all the richness and capabilities of current engines, but they also want that instant startup and speed that you would only get if you could actually switch models.
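
A hedged illustration of that "pre-process it so you can load it and go" idea, not Project Tiny's actual format or API; the file name, header layout, and field names below are purely hypothetical:

```typescript
// Load a pre-baked binary scene blob and read it in place, with no JSON or
// text parsing step. The "scene.bin" format sketched here is hypothetical.
async function loadPrebakedScene(url: string): Promise<DataView> {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer(); // one flat, ready-to-use blob
  const view = new DataView(buffer);

  const entityCount = view.getUint32(0, true); // hypothetical header field
  console.log(`Scene ready with ${entityCount} entities, no per-asset decode`);
  return view;
}

loadPrebakedScene("scene.bin");
```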

Marc Petit:

Let's geek out. How small did you get Project Tiny? What's the smallest footprint you got for Unity right there?

Vladimir Vukićević:

Yeah. For Project Tiny, we were targeting fairly small-scale games, so think instant games and such. But we had little multiplayer kart racing games and things like that running in about 500 kilobytes of compressed WebAssembly, JavaScript, and such. So pretty small and also pretty fast to download. We actually did get to the point where you literally tap a link on your phone and you're playing a game within one or two seconds. But we had to take on a lot of constraints to get there, and some of those constraints are not really ones that users wanted. Yeah.

Marc Petit:

We're running game engines, as you know, in vehicles now, on very, very low-powered hardware platforms with a lot of security constraints and stringent performance requirements. So I think it's a very good exercise to try to shoehorn some of our technologies in there. I think we're all learning a lot in this process.

Vladimir Vukićević:

Absolutely. And I think those learnings are really going to translate to the metaverse, because we have the same kinds of constraints and requirements around security, around instant access and everything, and also around low-power devices, depending on what we're using to access it, especially when we talk about augmented reality. That should be an all-day type of experience, and so power consumption and all those things become extremely important.

Marc Petit:

I think it's fascinating, the amount of work and problems that we still have to solve to deliver that architecture. When we were talking with Bill Vass (Amazon Web Services), we double-clicked on that: the various layers that we're going to have, how we really connect them and distribute the compute load among a very low-powered wearable device, like glasses or even a watch, then a phone that has a little bit more power, a little bit more battery, and more graphics, then the edge, and then the big cloud scale. How much work is it going to be for us to create a unique experience that spans all of those layers at the same time? Do you see the web, and the WebAssembly architecture, as having a leg up in creating those layered environments and hybrid approaches?

Vladimir Vukićević:

I think WebAssembly will definitely play a part in whatever kind of future metaverse we end up building. As far as the hybrid approaches go, though, that is a model we need to solve, but we need to figure out how to make it accessible to developers first. It's hard enough today to make a multiplayer game. I think we would close the door to many developers, many new developers, if we started telling them, "Well, you have to become a distributed systems architect first, and then you can build whatever the thing is that you're creating." This is one of the areas that actually really excites me, because we're basically going to have to figure out a new way to develop software. We're going to have to figure out a way that those kinds of transitions, those layers, are just a natural part of how we develop, where maybe it doesn't matter, or maybe there's a natural way to transition things from your wearable to your phone, to the edge, to the cloud, to connect to bigger simulations or whatever the use case is.

Vladimir Vukićević:

But if we can't make it accessible to developers, it honestly doesn't really matter how powerful it is, because nobody will use it or it will only be used by a very small set of people.

Marc Petit:

So do you want to talk about open standards, Patrick?

Patrick Cozzi:

I love open standards. So Vlad, first, look, thank you again for joining us for the Building the Open Metaverse Birds of a Feather session that we held at this past SIGGRAPH conference. To my pleasant surprise, a lot of that ended up being about open standards and building an open and interoperable metaverse, and what standards do we have and what standards do we need. And I thought it was really interesting when you were talking about 3D assets and the segmentation of assets, attributes, and behaviors. I was wondering if you could go over that in a little more detail, and also share what you think about the standards we have today that may work for 3D interoperability in the metaverse, and then where you think the gaps are, what we need to fill.

Vladimir Vukićević:

It's a little weird for me to talk about standards at this point, because it's definitely tricky to think about standardizing things before the thing actually even exists. And it can be a good way to almost stifle some of the creativity that might actually come out of a much more free-ranging exploration. If you think about even 3D on the web, there was VRML well before WebGL was a thing, but it died out. And it was designed through very much this sort of standards-heavy, very heavyweight process that seemed to be popular in the nineties. But when I think about what, at least, we're going to need in the metaverse ... yeah, during SIGGRAPH, I mentioned we've got assets, and then attributes and behaviors.

Vladimir Vukićević:

What I meant was: right now, a lot of the focus on the metaverse is really about assets. People ask, "Well, if I get a sword in one game, how can I take that sword to another game?" In the back of my mind, though, what I actually hear as the background question is, "Well, making a game is really hard, and I want to make money off making games, so I would really love it if I could actually just open up Ye Olde Sword Shoppe and start selling swords for $10 that people could take to any game they wanted. I'd make a bunch of money without having to actually invest in making a game." So I think a lot of this, how do we actually move assets from one game to another, is an economic question.

Vladimir Vukićević:

People want to actually start businesses just selling those assets. I don't know that that's actually that interesting of a question, though, because getting a mesh model from one environment to another is hard; there's work involved in pipelines and things, but it's not impossible. It's just, okay, but to what end? Okay, now you have this model. What does it actually do? That's where the second piece is, and where attributes and behaviors really come in, because I think those are going to be the building blocks of whatever metaverse we end up creating. So for example, for attributes, let's say I've got a torch that is obviously hot and has a light source, and then there's a block of ice with some item or something embedded in it.

Vladimir Vukićević:

If I was making an adventure game, I might say, "Well, if the player applies that torch to the block of ice, the ice melts, I get the item, everybody's happy." But if you think about the relevant pieces there, well, the block of ice is meltable, the torch is hot. And if you combine something that is hot with something that is meltable, that meltable thing melts. And so if we use very simple building blocks like that, at a very massive scale, maybe we actually get all these emergent behaviors, where now, if you have this behavior of "any time a heat source is applied to something that is meltable, it melts," you get to reuse that pretty much anywhere. And by building up this rich set of attributes on all these things, we also get to inject new behaviors by user choice. So for example, that torch is also a light source. And maybe what I actually want is a behavior, for whatever reason, that whenever a light source gets near something that is cold, it becomes brighter. I can inject that as a behavior into the overall simulation, and maybe it actually applies to me and to my components, and everything that is a light source, everything that is cold, all of these things just automatically fall into it. And it's a pretty low-level, very simple, almost rule-set-based approach.
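
A hedged sketch of that idea in code: entities carry simple attribute tags, and behaviors are standalone rules that fire when the tags of interacting objects match. All of the names and the rule set below are hypothetical, just to show the shape of the approach:

```typescript
// Entities are just bags of attribute tags; behaviors are rules over pairs.
type Attribute = "hot" | "meltable" | "lightSource" | "cold";

interface Entity {
  name: string;
  attributes: Set<Attribute>;
}

type Behavior = (a: Entity, b: Entity) => void;

// Rule 1: anything hot applied to anything meltable makes it melt.
const meltRule: Behavior = (a, b) => {
  if (a.attributes.has("hot") && b.attributes.has("meltable")) {
    console.log(`${b.name} melts when ${a.name} is applied to it`);
  }
};

// Rule 2 (user-injected): any light source near anything cold glows brighter.
const brightenRule: Behavior = (a, b) => {
  if (a.attributes.has("lightSource") && b.attributes.has("cold")) {
    console.log(`${a.name} glows brighter near ${b.name}`);
  }
};

const torch: Entity = { name: "torch", attributes: new Set<Attribute>(["hot", "lightSource"]) };
const ice: Entity = { name: "block of ice", attributes: new Set<Attribute>(["meltable", "cold"]) };

// The "simulation" simply applies every registered behavior to interacting pairs.
for (const rule of [meltRule, brightenRule]) {
  rule(torch, ice);
}
```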

Vladimir Vukićević:

But I think we have the simulation power now that we can actually expand these rules out pretty massively. And we might get to something that will let us actually model some of the interactions that we see in the real world. That realism is really what's going to make the metaverse real. It has to have some of these very natural-feeling, real-world interactions. And that's why when I-

Marc Petit:

Sorry, you think you could codify the world into a finite number of those behaviors, like break the simulated world down that way? And actually, I was thinking about it: probably.

Vladimir Vukićević:

Yeah. I agree. I think, probably. I think it would be a significant effort, but maybe not, if you actually distribute it. Like, if you actually distribute the effort, if everybody contributes some piece of realism to whatever this massive metaverse simulation is, maybe you could actually get there.

Patrick Cozzi:

So Vlad, on that note, and going back to moving fast versus standardizing, and when do you standardize: how much do you think we, the developer community, know now versus what we still need to explore? If you look at all the decades of making games and what widely used attributes and behaviors may be out there, is this just a lot of research and compiling, or do you think there's a lot of new exploration before you may be able to make a pragmatic standard?

Vladimir Vukićević:

I think there's definitely going to be a lot of exploration still needed, mainly because games, especially, are designed to take you away. When you start playing a game, you are immediately in somebody else's simulation, and you have an assumption that the normal rules don't apply. Whereas if the metaverse is supposed to model reality, complement reality, augment reality, even though we might have things that are different as an essential attribute of whatever space you're in, we're still going to need to have a baseline reality, I guess.

Vladimir Vukićević:

And that's the piece that I think games don't really explore, because they don't need to. They're designed to give you a particular experience. And so figuring out how we actually make the base metaverse experience feel normal and comfortable, I think that's where we're going to have to do a lot of exploration first.

Marc Petit:

We mentioned we had Neil [Trevett, NVIDIA and The Khronos Group] on the podcast. And it's interesting to discuss those behaviors, because do you think a standard like glTF has the potential to start undertaking that effort to standardize those behaviors and then grow them, like what we were discussing with Patrick earlier: grow from static objects to smarter objects with some simple behaviors, to even smarter objects, and eventually grow glTF into the way to interchange almost a simulated world? So do you think that's a logical path for us to pursue as an industry right now? Or does it take rebuilding something new from the ground up?

Vladimir Vukićević:

I think it's interesting. I mean, I think any of the extensible formats, so even glTF, USDZ, even FBX and such, have the potential to grow, because they are extensible. I think the risk is trying to standardize that extensibility too early, but it's a mix, because the opposite end of that is somebody extending it very privately and ending up with that walled-garden metaverse. So ideally, what I hope we get to is a lot of companies, individuals, really whoever, experimenting with a bunch of these areas, but really doing it in the open, not trying to recreate that walled garden that we somehow ended up with in the mobile ecosystem, now in the metaverse.

Vladimir Vukićević:

Mainly because I think that approach is going to fail. Again, that baseline kind of reality needs to be accessible to everybody. It needs to be shared. I mean, look, let's say Patrick and Marc, both of you came over to my house. Patrick, you had glasses from Samsung; Marc, you had glasses from Facebook. I had glasses from, I don't know, Microsoft. I just bought an AR board game and we want to play together.

Vladimir Vukićević:

That needs to be an experience that just works. That needs to be something that is effortless, frictionless, thoughtless, really. And if it isn't, I think we will have failed as an industry to actually build the true potential of this. So at some point, open standards and such are going to be extremely important to allow something like that to happen, because obviously, everybody has to agree on something. But the path to get there, I think, is going to be a lot of exploration, a lot of dead ends, and just a lot of work, really.

Marc Petit:

Since you're an XR expert, do you consider OpenXR to be a success right now, and a model of us as an industry coming together? Or how do you look at OpenXR?

Vladimir Vukićević:

I think OpenXR has been pretty successful. I mean, a lot of the industry coming together around it has been relatively new, but I think it is a good model for, after some number of years of exploration and vendors taking different routes and different approaches, coming to an understanding of, oh, okay, these are actually the common pieces. These are the things that are relevant. These are the things that are unique. How do we actually define a way to simplify?

Vladimir Vukićević:

I mean, honestly, simplify life both for the vendors and for developers. Nobody really likes to maintain their own API if they can jump in on somebody else's. But it takes a while to figure out what you actually want. And that's where you have to have the flexibility to experiment yourself.

Patrick Cozzi:

Vlad, I really like this idea of experimentation in the open. I think that's a key concept, and it will help us eventually converge, when people can see what each other is doing.

Marc Petit:

When you introduced yourself, you talked about lightweight XR. Can you explain to us what lightweight XR means and what it really entails?

Vladimir Vukićević:

Yeah, sure. So the lightweight XR effort in Unity is really trying to figure out some of the things we've been talking about throughout the podcast, in the sense that these future XR experiences are going to need to be instant. They're going to need to be potentially distributed, lightweight. How do we actually do that? How do we, as Unity, do that? Unity is really designed right now for a very immersive experience. When you're running a Unity-built game, or really a game from any engine out there, it expects to take over your entire device: all your CPU, all your GPU, your entire display, everything.

Vladimir Vukićević:

That's likely not going to be the world that we're going to be in, in the metaverse. I'm sure there will be fully immersive experiences, but there are also going to be a lot of shared experiences, very contextual experiences, depending on location, time of day, whatever the thing is. How do we actually enable, for Unity at least, how do we enable our creators to have access to create those kinds of experiences?

Vladimir Vukićević:

That's the stuff that my team is really exploring, to figure out what a potential path to get there is. Because like I said, I think we're going to be interacting with a lot more of those types of experiences in the future, and the fully immersive ones will absolutely still be there, but they will be like choosing to go to a movie is today, versus just living out your regular daily life.

Marc Petit:

I mean, we expect to live in a world where every glass panel is potentially a screen where digital content can be displayed. So we'll have to scale from super-low-power, lightweight experiences all the way to the big, heavy-duty immersive stuff.

Patrick Cozzi:

So Vlad, as you're looking to optimize the XR experiences, how much of that is VR-specific, AR-specific, or just XR work that will impact both of those?

Vladimir Vukićević:

That's a good question. I mean, look, I think the first metaverses that are likely going to be built are probably going to be VR experiences. I think that's where the hardware is available and of pretty good quality right now, so that you can experiment with a lot of the other factors, whether it's the instant startup, the security, safety, dynamic, distributed, all of those pieces.

Vladimir Vukićević:

So VR, I think, is going to have a pretty big part to play, but AR is where I think we all ultimately want to get to. We actually want to get to that augmented reality world, where we have some way of actually augmenting our life. So I think a lot of these experiments and explorations are going to happen in both, and they're really going to benefit both, especially when you think about things like power usage and whatnot. A lot of the VR devices, like the Oculus Quest, are all battery-powered, so any improvements that you make there will definitely help.

Marc Petit:

So when I think about those lightweight XR experiences, isn't that the role of the web? I mean, shouldn't we expect the web to be delivering those lightweight experiences and reserve the bigger engines for the more hardware-demanding environments?

Vladimir Vukićević:

Possibly. But look at the web as something that is just another universal platform, right? If we take the HTMLs and the CSSs and shove them away, then I think the web just becomes another target. And I think all the engines really should have a way to get to the web in a meaningful way, in this future web at least. That's likely the WebAssembly plus WebGPU plus WebXR plus whatever portion of the web. The one advantage that I think a lot of the bigger engines have is really the authoring environment. The web is a pretty great distribution medium. But I think the reason why people come to Unity, come to Unreal, come to others and say, "Hey, we want to get our content to the web," is because those tools are good at actually creating content. The web doesn't really have anything equivalent. It's a great distribution and publishing medium, not a great authoring medium.
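
For the WebXR piece of that stack, the browser-side entry point is a session request on navigator.xr. A minimal sketch, assuming a WebXR-capable browser and device and the WebXR type definitions (e.g. @types/webxr) available; in practice the call has to come from a user gesture:

```typescript
// Check for immersive VR support and start a WebXR session if available.
async function enterVR(): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-vr"))) {
    console.log("Immersive VR sessions are not supported here");
    return;
  }
  const session = await navigator.xr.requestSession("immersive-vr");
  session.addEventListener("end", () => console.log("XR session ended"));
  // Rendering would continue via session.requestAnimationFrame(...) from here.
}
```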

Marc Petit:

I have a history of being involved in and investing in browser-based tools from when I left Autodesk in 2012. It was hard, so I hear you. I'm still hopeful that we'll see investment. I think those improvements in architecture are going to help with playing back content from the web. I'm hopeful that we start to see better tools. I was actually surprised to see Photoshop running on the web last week.

Vladimir Vukićević:

Yeah, likewise.

Marc Petit:

I think it's interesting to think about building better tools inside the browser as well, just for the same level of convenience. But I hear you that you don't want to author content for just one platform. It is always going to be safer to author that content in a tool that is known to cut across a lot of publishing platforms.

Vladimir Vukićević:

I'm actually not even sure that authoring is going to look the same once you actually think about the metaverse. All those same things that you described earlier, Marc, around the distributed capability and having to layer content execution, I think the same things apply to authoring. If we look forward five, ten years from now, whatever the metaverse looks like, the creators then are going to want to create while in the metaverse. They're not going to want to sit down in front of a computer screen and open up an editor and do all these things.

Vladimir Vukićević:

So we're going to have to do a lot of transition work even on the authoring side. And that's where, I think, the web, or web-style instant delivery models, absolutely have a part to play in making some of that broad accessibility really possible with those tools.

Marc Petit:

In-place editing, especially with AR. I mean, for sure I think objects will have a location in the world, so you probably want to tweak them where they're supposed to be.

Vladimir Vukićević:

Yeah, absolutely.

Marc Petit:

A lot of fun on the horizon, Patrick.

Vladimir Vukićević:

You mentioned in-place editing. The other aspect of the metaverse that I think is going to be super important is that users need to own their own metaverse. So we talk about people authoring and in-place editing content and whatever else. I want to be able to, as a user, in-place edit, effectively, my own reality. So everybody's going to become an author in some way, and I think that's going to bring with it a whole new set of challenges. So yeah, absolutely. A lot of fun, exciting stuff.

Marc Petit:

And hopefully a new economic model, where people can actually make a living and have a fair split of the value created between platforms and creators and everything. It feels like a pivotal moment, where it's our responsibility to lay those foundations for the next wave, the next generation of the internet.

Patrick Cozzi:

So Vlad, we've covered a lot of ground today, from WebGL to the future of the web, to standards, to lightweight XR. What didn't we talk about that you'd like to talk about?

Vladimir Vukićević:

I think one thing that would've been fun to dive into is some of the stuff that I'm doing on the side. I've started lately exploring a lot of ultra-wideband (UWB) positioning systems and such, because I think that kind of hyperlocal positioning is going to be a key part of any kind of future metaverse or augmented reality stuff. I've started building out a little UWB localization system here in my home office, and I'm going to be extending that out to the entire house at some point in the future. I just really want to get an early glimpse of what things might be like once we get to that future.

Marc Petit:

Awesome.

Patrick Cozzi:

What kind of accuracy do you get with the positioning there?

Vladimir Vukićević:

Ask me after I install everything over Thanksgiving week. The actual installation is coming up soon.

Marc Petit:

What's your prediction? What's your simulation telling you?

Vladimir Vukićević:

My prediction, honestly, from talking to some of the folks that make some of the devices I'm using, is that it should be within 10 centimeters, which is not amazing, but it should be good enough for some of the stuff that I want to do. And it should be good enough to at least get a location fix for static elements in the room, like where the thermostat is, where the stereo is, where that kind of stuff is, so that when I bring my phone close, or bring some other kind of device close, I can actually get some of that contextual interaction there.

Vladimir Vukićević:

But yeah, the first step is to get the hardware installed and get everything set up, and then feed it into some kind of simulation, likely in Unity. Then I can actually start exploring some of the stuff we've been talking about: how do I actually get a dynamic interface for my thermostat or for my stereo through this environment? It's going to be fun.
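
For the curious, the core math behind that kind of UWB positioning is range-based multilateration: given distance measurements from a tag to anchors at known positions, solve for the tag's position. A hedged, textbook-style 2D sketch, not Vlad's actual setup; the anchor positions and ranges below are made up:

```typescript
// Estimate a 2D position from three range measurements to known anchors,
// by linearizing the circle equations and solving the resulting 2x2 system.
interface RangedAnchor {
  x: number;     // anchor position in meters
  y: number;
  range: number; // measured distance from the tag, in meters
}

function trilaterate2D(a: RangedAnchor, b: RangedAnchor, c: RangedAnchor): { x: number; y: number } {
  // Subtract the circle equation of `a` from `b`, and of `b` from `c`.
  const A = 2 * (b.x - a.x);
  const B = 2 * (b.y - a.y);
  const C = a.range ** 2 - b.range ** 2 - a.x ** 2 + b.x ** 2 - a.y ** 2 + b.y ** 2;
  const D = 2 * (c.x - b.x);
  const E = 2 * (c.y - b.y);
  const F = b.range ** 2 - c.range ** 2 - b.x ** 2 + c.x ** 2 - b.y ** 2 + c.y ** 2;

  const det = A * E - B * D;
  if (Math.abs(det) < 1e-9) {
    throw new Error("Anchors are collinear; position is not uniquely determined");
  }
  return { x: (C * E - B * F) / det, y: (A * F - C * D) / det };
}

// Example: anchors at three corners of a room; the tag sits near (2, 1).
console.log(trilaterate2D(
  { x: 0, y: 0, range: 2.24 },
  { x: 4, y: 0, range: 2.24 },
  { x: 0, y: 3, range: 2.83 },
)); // ≈ { x: 2, y: 1 }
```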

Marc Petit:

Great. Can you make sure you write a Home Assistant plugin? I'm a Home Assistant addict.

Vladimir Vukićević:

Likewise. There might actually be something there, so I'll make sure I start shooting now.

Marc Petit:

The other curiosity is: what kind of hardware are you playing with to achieve that?

Vladimir Vukićević:

This is, right now, just some kind of experimental hardware. Once I started looking into this, it led me to some contacts with some folks on the web. There are, as you can imagine, a number of startups that are really exploring this right now, and they were gracious enough to give me access to some early hardware. I'm not sure if they would be okay with me mentioning their name, so I'm a little hesitant there. But it's some fairly small devices, fairly low power, and also relatively inexpensive as well.

Vladimir Vukićević:

I think that there's going to be something here for folks like us who automate everything through Home Assistant and such. I think we'll have something we can set up in our houses within the next few years to get some-

Marc Petit:

Then you win the prize for the geekiest of the geeks so far on the podcast. I mean, ultra-wideband positioning, that's pretty hardcore. Congratulations. Any organization or person that you want to give a shout-out to during this conversation, in this context?

Vladimir Vukićević:

I mean, I think WebGL was definitely not a solo effort. There were a number of folks, including people like Ken Russell and Neil Trevett and a bunch of others, who really helped shepherd it. But I think the one person that really made 3D on the web incredibly successful today is the author of Three.js, Ricardo Cabello. I think without him, it would be just the OpenGL API of the web, but Three.js really made it accessible to web developers, people that didn't have a 3D background. I think where WebGL, and really where 3D on the web, is today is owed to him in a very, very large part. So big shout-out to him. 3D on the web would definitely be a different kind of environment without things like Three.js.

Patrick Cozzi:

And I'd give a big plus one to Mr. Cabello, Ken, and Neil. By the way, they all have something in common, which is they are fantastic community builders as well as technologists.

Marc Petit:

A hundred percent agree on Three.js. It's been a fantastic foundation for many new companies, actually many new businesses. So, it's fantastic.

Marc Petit:

All right. I think we've covered a lot. Vlad, I want to thank you for being here with us today. We have to do the regular ask: if you like this podcast, hit whatever buttons you have to hit to subscribe. If you like to geek out, please come back. If you have suggestions for people we should welcome on this podcast, please send them to us. But Vlad, it's been an amazing conversation. Thank you so much for being with us. Hello to all of our friends at Unity. Patrick, I leave you the final word.

Patrick Cozzi:

Well, thank you, Marc. I'm glad it's been inspiring and forward-looking. Vlad, thank you once again.

Vladimir Vukićević:

No, you're welcome. It was a great conversation. Thanks for having me.