What Can the Metaverse Learn From Second Life?

Creator Philip Rosedale says a virtual reality internet is still some way off


Hanging out in Second Life

Linden Lab

The tech world is abuzz with talk of the metaverse, a virtual world where millions of people could soon gather to work, play, and socialize. The idea isn’t as new as it might seem though. Since 2003, people have been gathering to do all of the above in the online world of Second Life.

Its creator, Linden Lab, goes to great pains to emphasize that Second Life is not a game, unlike other proto-metaverse experiences such as Fortnite or Roblox. In Second Life, there are no goals or objectives. Instead, users create a digital avatar to represent them and are then free to explore the world, meet other users, create their own digital content, and even trade goods and services in the in-world currency, the Linden Dollar.

Second Life had its heyday in the late 2000s, and its keyboard-and-mouse controls and blocky graphics are a long way from the polished vision of immersive worlds beamed through virtual reality headsets that companies like Facebook and Microsoft are touting. But Second Life still has a dedicated band of regular users and is probably the longest-running experiment in the possibilities of a metaverse-like experience.

Its chief architect, Philip Rosedale, left Linden Lab in 2009 and now runs a new startup called High Fidelity, which started out building software that let people design and deploy their own virtual reality worlds. But despite some promising progress, Rosedale says VR headsets are still too immature for mass adoption, and the company has put the idea on the back burner and switched focus to its spatial audio technology.

As the creator of an early metaverse-like experience and someone well-versed in the limits of VR technology, Rosedale has plenty of insights for the latest pretenders to the metaverse crown. IEEE Spectrum spoke with him to find out the challenges involved in building an immersive virtual world. The conversation has been edited for length and clarity.

IEEE Spectrum: You've been talking about the metaverse for a long time. Why do you think the idea has suddenly started to catch on in such a big way?

Philip Rosedale: There are really two things that are external to the technology development itself that have happened recently. One is obviously COVID, where there has been the worry that maybe we would have to shift some social and entertainment activities online. I think there's a lot of the big companies trying to figure out how they can make some money from that. And then the other one is simply Facebook's claim that it's an important thing and renaming themselves to try to align with it.

Philip Rosedale

Tim Wegner/laif/Redux

Spectrum: Second Life was almost like a proto-metaverse. Why do you think it didn't break through to the mainstream?

Rosedale: It's interesting to note that Second Life is, in my opinion, still the largest and the closest thing to a metaverse that we have as it relates to grown-ups. The environments that are used by kids, such as Roblox, are very interesting as well but very different in terms of what they offer. If you talk about people wanting to go to a live concert, or wanting to go shopping or something like that, I think Second Life still sees about US $650 million a year in transactions and has a million people using it. But Second Life didn't grow beyond about a million people. It's been growing more with COVID, but as you say, it didn't break out, it didn't become a billion people. And the hope that Facebook has is that there'll be a billion people using a metaverse.

So I think the reason why it didn't, and this reason is still very true today, is simply that most adults are not yet comfortable engaging with new people, or engaging socially, in a multiplayer context online. I've worked on this a lot and it's been incredibly rewarding for the people for whom it has worked. And even work we did more recently with High Fidelity, which was very similar—a full VR environment, but with headsets rather than with desktop—there are small groups of people that have gotten immense pleasure or opportunity to make money, and things like that, out of these environments.

But they're still not for everybody. People are not able to communicate with facial and body language yet, in a way that is anywhere near adequate. And I think that it's a very steep cliff. If you have the alternative, to have your social life happen in the real world, I think a great majority of people make that choice, and it's a binary choice. They don't split their social life partly between the real world and partly online. I think that's the reason why we don't see the breakout yet, and nothing that Facebook has said or demonstrated changes what I just said.

Spectrum: Do you think that's partly why a lot of these companies, Facebook included, are targeting work before they target the social side of things?

Rosedale: Yes, I think communication at work is more utilitarian. It doesn't need to be super pleasurable. But I would add that the challenges for work are equally substantial, most notably in a hybrid work environment. Everybody has settled into a level playing field of using Zoom all day long, and although that works, it gets tremendously worse if you have multiple people in a real conference room trying to talk to other people online. The other problem is the level playing field thing. If you have a real person in a real room to pay attention to, you're going to devote all your attention to that person, and none of it to the screen. There's just not a good solution for that yet.

The Horizon Workrooms environment that Facebook has demonstrated recently is certainly charming as a demonstration of the ideas. But there is as yet no evidence, and in fact negative evidence from the work that we've done, on people using head-mounted displays at work. Those displays are not comfortable yet. The other problem with things like work is that if you're using VR goggles, you can't use your phone and you can't type. That's really incredibly bad.

I think we see some narrow market opportunities that are happening around, for example, shared brainstorming and design. There are creative opportunities for people to use VR to have successful meetings at a distance. But again, just as with regular entertainment and socialization, I don't think it'll be a majority of people for some time.

Spectrum: It's funny to hear the tone in which you're talking about this, because you've always been a booster of VR technology. Has something changed in the way you're thinking about it?

Rosedale: I feel as good as ever about the experience that people have in a place like Second Life. But I think after working on VR headsets, in particular, for the last five or six years with High Fidelity, we discovered how difficult it is to actually make that final jump to getting everybody using this stuff.

And then I think the second thing is, I'm really concerned that (and I said this all along with Second Life too, so my tone hasn't changed on this) any single-company, advertising-based, attention-based strategy for building virtual spaces would potentially be extremely damaging to people. I have become much more concerned than I was before. I think that we just didn't think about all the things that could go wrong 20 years ago. But now with the benefit of hindsight it's more obvious what we need to be concerned about.

Spectrum: At a more practical level, with Second Life you were running large social virtual experiences similar to what these other companies are now proposing. What were the biggest challenges?

Rosedale: One is how many people can be in the same place at the same time. Many human experiences that are interesting often require more than 100 people to be within earshot and visibility of each other. That's still a largely unsolved technical problem. Whether you're talking about Fortnite, or Second Life, or Roblox, it isn't yet possible to get that number of people in the same place. The famous Fortnite concerts that we've seen, all of them had fewer than 100 people together in copies of the concert space. And that's a very different experience than what we expect if we go to a live music event.
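To see why concurrency is such a stubborn limit, consider a back-of-the-envelope sketch (not from the article) of the naive networking design: a server relays every avatar's position updates to every other avatar in the space, so traffic grows quadratically with crowd size. The update size and tick rate below are hypothetical round numbers, chosen only to illustrate the scaling.

```python
# Naive crowd-networking model: each of N avatars must receive state
# updates about the other N - 1, so per-tick message count is N*(N-1).

def relay_messages_per_tick(n_avatars: int) -> int:
    """Messages the server relays each tick: each avatar hears about the rest."""
    return n_avatars * (n_avatars - 1)

def bandwidth_mbps(n_avatars: int, bytes_per_update: int = 100,
                   ticks_per_second: int = 20) -> float:
    """Aggregate server egress in megabits per second, under the assumed
    (hypothetical) update size and tick rate."""
    bits_per_tick = relay_messages_per_tick(n_avatars) * bytes_per_update * 8
    return bits_per_tick * ticks_per_second / 1e6

# Doubling the crowd roughly quadruples the traffic:
for n in (100, 200, 400):
    print(n, round(bandwidth_mbps(n), 1))
```

Real engines use interest management (only relaying nearby avatars) and instancing (the "copies of the concert space" Rosedale mentions) precisely to dodge this quadratic blowup.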

Another one would be user-generated content. For any of these metaverse ideas to pan out, the content, the avatars, the buildings, the experiences, the games, they need to be entirely buildable by a really large number of people, in much the same way that websites were buildable in parallel by a lot of people at once. We have to do the same thing with the metaverse, and there are not, as yet, toolkits and systems that would enable that.

That definitely seems like a hard requirement to get anything near the scale of the Internet. If you actually want to build a virtual world at multi-billion scale, everyone will have to somehow work in parallel to get all those spaces up. The idea that it would all be done by one company like Facebook or Google or Apple seems completely impractical.

The other thing we need is a digital currency so people can engage in trade. To get metaverse systems where one person can make a car and sell it to a lot of other people requires that you have some currency system that spans multiple local currencies. We do have that to some extent in the cryptocurrencies, but they have other problems.

Spectrum: What are you doing with your current company High Fidelity? Do you see the products you're producing as components of a future metaverse or is it something different?

Rosedale: We're entirely focused on spatial audio right now, because it's a good business and it's growing quickly. The ability to do good 3D audio for a whole bunch of people at the same time is a critical component of this stuff. We also think it's progressive, a reasonable thing to work on that we can get working. We've been enthusiastic about every component of this, but we do feel like the audio is the best underlying component that everybody's going to need. We do continue, as thinkers and leaders in the space, to keep looking at it and thinking about what happens next and how we can help. I mean, I love this stuff. I'm always going to be working on it one way or another.

Spectrum: VR adoption has always lagged expectations. Do you see any reason why that might be different this time around?

Rosedale: In a word, no. I don't see a magic, new thing. I hoped that the VR headsets would be that. And that's why we raised so much money, hired so many people and did so much work on that in the first stage of High Fidelity. But I do think that the technical problems in front of us around comfort, typing speed, and then communicating comfortably with others are still very daunting. And so I don't think there's anything new.

This article appears in the January 2022 print issue as “Lessons From a Second Life.”
