Mixed Reality is the Future of Computing (Rosenberg / Midjourney)
For many people, the word “metaverse” has been tainted with negative connotations, conjuring either cartoonish virtual worlds filled with creepy avatars or opportunistic platforms selling “virtual real estate.” For others, the word metaverse inspires a shrug followed by questions like — “Why would anyone want to spend their time doing that?”
Personally, I believe it’s way too early to write off the metaverse, but we have real work to do. The industry needs to deploy immersive experiences that are more realistic and more artistic, and that focus far more on creativity and productivity than on questionable NFT schemes. In addition, we need to overcome the biggest misconception about the metaverse: the flawed notion that our virtual future will replace our physical lives.
This is not how the metaverse will unfold.
Don’t get me wrong, there will be popular metaverse platforms that are fully simulated worlds, but these will be temporary “escapes” that users sink into for a few hours at a time, similar to how we watch movies or play video games today. On the other hand, the real metaverse, the one that will impact our days from the moment we wake to the moment we go to sleep, will not remove us from our physical surroundings.
Instead, the real metaverse will mostly be mixed reality (MR/AR), in which immersive virtual content is seamlessly combined with the physical world around us, expanding and embellishing our daily lives with the power and flexibility of digital content. But don’t take my word for it — the industry is rushing in this direction. Just this month, HTC unveiled an impressive new headset (the VIVE XR Elite) that enables powerful mixed reality. And it’s widely reported that Apple will unveil a mixed reality product in June. When talking about this product last year, Apple CEO Tim Cook said — “AR is a profound technology that will affect everything… we are really going to look back and think about how we once lived without AR.”
Why Mixed Reality?
Simple. We humans don’t like being cut off from our physical world. Sure, you can give someone a short demo in a fully simulated virtual world, and they will love it. But if you have that same person spend an hour in fully immersive VR, they may start to feel uneasy. Approach two hours, and for many people (myself included), it’s too much.
This phenomenon first struck me back in 1991 when I was a VR researcher at Stanford and NASA, studying how to optimize depth perception in early vision systems. Back then, the technology was crude and uncomfortable, with clunky graphics and lag so bad it could make you feel sick. Because of this, many researchers believed we just needed better hardware, and people wouldn’t feel uneasy. I didn’t quite agree.
Certainly, I knew better hardware would help, but I was pretty sure that something else was going on, at least for me personally — a tension in my brain between the virtual world I could see and the real world I could sense (and feel) all around me. I believed it was this perceptual conflict between two opposing mental models that made me feel uneasy and made the virtual world seem less authentic than it should.
To combat this, I wanted to take the power of VR and combine it with my physical surroundings, creating a single immersive experience in which my visual, spatial and physical senses were all perfectly aligned. My hope was that the mental tension would go away if we could interact with the real and the virtual as if they inhabited the same perceptual reality.
So in 1991, I pitched the Air Force Office of Scientific Research and was funded to develop a prototype mixed reality system at Wright Patterson Air Force Base. It was called the Virtual Fixtures platform, and it didn’t just support sight and sound, but touch and feel (with 3D haptics), adding virtual objects to the physical world that felt so authentic they could help users perform manual tasks with greater speed and dexterity. The hope was that one day this new technology could support a wide range of useful activities, from assisting surgeons during delicate procedures to helping technicians repair satellites in orbit through telerobotic control.
Of course, that first mixed reality prototype at the Air Force didn’t support surgery or satellite repair. It was developed to test whether virtual objects could be added to real-world tasks and enhance human performance. To measure this, I used a simple task: moving metal pegs between metal holes on a large pegboard. I wrote software to create a variety of virtual fixtures that could help you perform the task. The fixtures ranged from virtual surfaces and virtual cones to simulated tracks you could slide the peg along (via 3D haptic models), while optics and early passthrough cameras aligned the activity in 3D. I even used early 3D audio technology developed at the Air Force (AAMRL) to ensure sounds were spatially aligned. And it all worked, enabling greater speed and precision.
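To make the idea concrete, here is a minimal sketch of one common way to render a virtual surface haptically: a penalty-based spring-damper force that pushes the peg tip back out whenever it penetrates a virtual plane. This illustrates the general technique only, not the original USAF code; the function name, gains, and geometry are all assumptions.

```python
import numpy as np

# Spring-damper gains for rendering the virtual surface (illustrative values).
STIFFNESS = 800.0  # N/m
DAMPING = 5.0      # N*s/m

def surface_fixture_force(tip_pos, tip_vel, plane_point, plane_normal):
    """Force (N) that a virtual plane exerts on the peg tip.

    plane_normal points out of the surface, into the allowed workspace.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Penetration depth: positive once the tip crosses below the plane.
    depth = float(np.dot(np.asarray(plane_point) - np.asarray(tip_pos), n))
    if depth <= 0.0:
        return np.zeros(3)  # not in contact, no force
    # Spring pushes the tip back out; damper resists motion into the surface.
    normal_vel = float(np.dot(tip_vel, n))
    magnitude = max(STIFFNESS * depth - DAMPING * normal_vel, 0.0)
    return magnitude * n

# Example: a horizontal virtual surface 5 cm above the table; the peg tip
# has dipped 1 cm below it, so the fixture pushes it back up.
force = surface_fixture_force(
    tip_pos=np.array([0.10, 0.02, 0.04]),
    tip_vel=np.array([0.0, 0.0, -0.01]),
    plane_point=np.array([0.0, 0.0, 0.05]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
```

In a real system, a force like this would be recomputed every cycle of the haptic servo loop (typically around 1 kHz) and summed across all active fixtures before being commanded to the force-feedback device.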
Virtual Fixtures project (USAF): user and task board, side by side (1993)
I give this background because of the impact it had on me. I can still remember the first time I moved a real peg towards a real hole and a virtual surface automatically turned on. Although simulated, it felt genuine, allowing me to trace along its contour. At that moment, the real world and the virtual world became one reality, a unified mixed reality in which the physical and digital became a single perceptual experience that satisfied all of my spatial senses — visual, audio, proprioception, kinesthesia, and haptics. Once that was achieved, I stopped thinking about which part was physical and which was simulated — it was just reality.
That was the first time I experienced a true mixed reality. It may have been the first time anyone had. I say that because once you interact with the real and virtual combined into a single immersive experience, all your senses spatially aligned, the two worlds snap together in your mind. It’s almost like one of those visual illusions where there’s a hidden face you can’t see, and then something clicks, and it appears. That’s how a true mixed reality experience should be: a seamless merger of the real and the virtual that’s so natural and authentic that you immediately realize our digital future will not be real or virtual, it will be both — one world, one reality.
The Technology of Mixed Reality 30 years apart (1992 to 2022)
As I look ahead, I’m very impressed by how far the industry has come, particularly in the last few years. The image above (on the left) shows me in 1992 at Wright Patterson Air Force Base developing mixed reality (MR/AR). The image on the right shows me in 2022, wearing a Meta Quest Pro headset with color mixed reality capabilities. Over the 30-year span during which my hair went gray, the technology has improved by staggering amounts — in performance, efficiency, and size.
What’s not apparent in the picture above are the numerous full-sized computers that were running to conduct my USAF experiments thirty years ago, or the cameras mounted on the ceiling, or the huge wire harness draped behind me with cables routed to various machines. That’s what makes this new wave of modern headsets so impressive. Everything is self-contained — the computer, the cameras, the display, the trackers. And it’s all comfortable, lightweight, and battery-powered. It’s remarkable.
And it’s just getting started. The invention of mixed reality is an ongoing process, with amazing new products poised to take off. And it’s not just the impressive new headsets from Meta, HTC, and (potentially) Apple that will propel this vision forward, but lightweight eyewear and creative software tools from companies like Magic Leap, Snap, Microsoft, Google, Lenovo, Unreal, Unity and many other major players.
At the same time, countless developers are pushing the limits of creativity and artistry, unlocking what’s possible when you mix the real and virtual, from new types of board games (Tilt Five) and powerful medical uses (Mediview XR), to remarkable outdoor experiences from Niantic Labs.
This is why I am confident that the metaverse, the true metaverse, will be an amalgamation of the real and the virtual, so seamlessly combined that users will cease to think about which elements are physical and which are digital. We will simply go about our daily lives and engage with a single reality that has been magically embellished. It’s been a long time in the making, but 2023 will be the year that our mixed reality future really starts to take shape.
Louis Rosenberg, PhD is an early pioneer of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. He then developed and validated the first mixed reality system at the Air Force Research Laboratory. In 1993 he founded the early VR company Immersion. In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford University, was a tenured professor at California State University, and is currently the CEO of the AI company Unanimous AI, the Chief Scientist of the Responsible Metaverse Alliance, and Global Technology Advisor to XRSI.
(This article first appeared in VentureBeat.)