
Is Mixed Reality Signaling a Platform Shift?

  • Insight
  • Immersive Technology
  • Venture Capital

Executive Summary

Note: As a pre-read to the following discourse, see our earlier piece here, which gives a 30,000-foot view of Apple’s latest innovation.

The Apple Vision Pro (“AVP”) has sparked conversations around whether its release is in anticipation of a pivotal technological platform shift. At the moment, extended reality (“XR”) remains in arrested development at the Start-up Phase of the S-curve wherein hardware and underlying infrastructure are still being built up and mainstream adoption is restricted by industry bottlenecks like form factor issues, technological limitations and the lack of compelling use-cases. 

However, while companies like Meta and Bytedance have traditionally focused on virtual reality (“VR”), they are beginning to strategically move over to mixed reality (“MR”). We are getting excited about MR too: we see it representing a paradigm shift in the rate of data transfer (i.e., the evolution to eye- and hand-tracking) and in the “duality of presence,” or the feeling of having one foot in the virtual and one in the real. As Avi Bar-Zeev says, this is the AVP’s Macintosh moment: “[Macs] were truly groundbreaking, but too expensive for most people… and it took a while before they became widely popular.”

It’s too soon to tell exactly how XR will grow, but we can think about it probabilistically as two potential scenarios: the smartphone level outcome – the greatest scale of mass adoption and usage – or the console level outcome, which implies less portability and adoption on a smaller scale. Either way, while XR is not yet fully established, it is on a critical path towards significant evolution and adoption, fueled by investment and innovation. This could unlock transformative applications and broader market profitability.

Why Does XR Matter?

Platform shifts in tech innovation represent pivotal moments when the foundational architecture and primary interface through which users interact with digital environments undergo a tectonic transformation. This disrupts incumbent form factors and unlocks blue oceans of user demand and opportunity. These shifts are critical as they tend to foreshadow extended periods of rapid innovation, the emergence of new market leaders, and therefore outsized opportunities for investors.

Within days of the launch of the AVP, we observed takes from consumers and industry analysts/investors alike heralding its arrival as the final driver to spawn the definitive “platform shift of our time,” whilst contemporaneously seeing bearish and disdainful views of the prospects of the AVP and, as a result, XR at large. 

In our latest research piece, we peel back the layers and seek to figure out what, if not the AVP, has truly pushed XR across the Rubicon into the promised land. Or are the cynics right, and this is yet another nothing-burger?

Where Are We on the XR S-Curve?

We often hear people say that XR is the next big thing, but we also hear sardonic clapbacks from skeptics like “but it’s been the next big thing for the past 20 years!” In this first section we explore the key bottlenecks that have been holding XR back from achieving mainstream adoption; these can be bucketed into three main categories:

  • Form factor issues causing physical discomfort/motion sickness
  • Technological limitations, including substantial processing-power requirements, the robust tracking needed for quality spatial computing, interoperability barriers, etc.
  • Lack of compelling use-cases and “killer apps” that will honeypot the mass market

The fact that these are the bottlenecks is not news to anyone, but they are instructive in helping us understand where we are in the S-curve for XR, which is the more important question to address:  

To know where XR is going we first need to establish where XR is today. As with any S-curve, technological innovation tends to happen in a staged fashion, and can be delineated into the following waves: 

  • Start-up Phase: Hardware and underlying infrastructure being built up
  • Growth Phase: Application and content layer flourishes; virtuous user + content growth
  • Mature Phase: Innovation is increasingly marginal and growth slows
The XR S-curve. For illustrative purposes only.

Despite its extensive history, XR remains squarely in the Start-up Phase as the industry continues to iterate on the right medium and hardware format. Until it graduates from this phase, we are unlikely to see a Cambrian explosion of innovative new applications built on the hardware.

Yet, it is also overly punitive for skeptics to say that the industry is a nothing-burger, since there has been incontrovertible evidence of pivots undertaken in the space: we started with pure-play VR headsets, moved on to lightweight AR devices, and are now seeing the emergence of MR hardware. One way to think about this evolution is to segment its development over several discrete S-curves:

  • VR Era (~2012 onwards) – Oculus Rift/Meta Quest, HTC Vive, Bytedance’s PICO
  • AR Era (~2018 onwards) – Ray-Ban Smart Glasses, Rokid glasses, Spectacles by Snap
  • MR Era (~2023 onwards) – Apple Vision Pro, Meta Quest Pro, Varjo, Magic Leap 2

We can think of these three buckets as being defined by two parameters – i) level of immersion and ii) interactivity with content – which correspond respectively to the Y and X axes of the framework below:

Prior to the launch of the Apple Vision Pro, the industry was largely dichotomized between fully immersive, artificial VR experiences, in which users interact directly with virtual interfaces, and semi-immersive AR experiences, which primarily overlay virtual objects on a physical environment (most often via a mobile device). We’ve since seen the advent of MR, which allows users to interact with both the physical and the virtual world.
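
To make the two-axis framing explicit, here is a toy Python sketch that places each device era on the immersion and interactivity axes. The 0-to-1 scores are purely our own illustrative placements, not values taken from any chart or measurement:

# Toy encoding of the two-axis framework: immersion (Y) vs. interactivity (X).
# The 0-to-1 scores are our own illustrative placements, not measured values.
DEVICE_ERAS = {
    #                                  (immersion, interactivity with content)
    "VR (Quest, Vive, PICO)":          (0.9, 0.5),  # fully occlusive; interaction with virtual objects only
    "AR (smart glasses, phone AR)":    (0.3, 0.3),  # overlays on the real world; limited interaction
    "MR (Vision Pro, Quest 3)":        (0.8, 0.9),  # high immersion AND interaction with both worlds
}

for era, (immersion, interactivity) in DEVICE_ERAS.items():
    print(f"{era:34s} immersion={immersion:.1f}  interactivity={interactivity:.1f}")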

MR therefore represents a leap forward for the state of VR, which has catalyzed various industry participants into pivotal actions: Meta started its foray into XR with the acquisition and development of the Oculus Rift VR headset, but has since rebranded that product line to focus on the MR-enabled Quest headsets. Similarly, Bytedance announced in Dec 2023 that it would discontinue its PICO 5 VR headset in order to focus on “Swan”, a high-end headset inspired by the Apple Vision Pro. 

We believe this wave of lane switching from VR to MR will continue, and it is this pivot that underscores our belief that we are currently only at the beginning of the MR S-curve. We should therefore expect anywhere from 5–10 years of further iteration before we arrive at a viable mass-market hardware form factor that resonates with consumers and further catalyzes the development of an app ecosystem.

“I think it has become clear to those of us working in this field that Apple's Vision Pro is kind of the perfect embodiment of this expectation that the industry has had, which is that rather than VR and AR being these distinct modes, we're going to want to have digital content and objects share our physical environment and have precise physical locations.”

- Neil Redding, Near Futurist

This is not to say app development will not begin in tandem with the launch of the latest MR headsets. After all, numerous apps were published on the App Store in the early days of the iPhone’s launch (e.g., Lightsaber Unleashed was one of the Top 10 most downloaded apps in 2008 but is largely non-existent today), and it was not until many years later, as the smartphone user base grew and startups figured out new use-cases unlocked by the smartphone, that many of the genre-defining apps we still see today were born (e.g., Uber, Candy Crush Saga, Clash of Clans). It will take time for developers to research, tinker, and eventually figure out the novel use-cases that are unlocked by MR in ways its predecessors were not, and in this regard MR could follow the same trajectory as the iPhone.

Before they can do so, however, we first need to figure out the million-dollar question:

Start-up Phase: What Does the End-State XR Form Factor Look Like?

In time, low-profile form factors will rely heavily on cloud and edge computing to transfer processing power (and weight) outside of the headset. In the meantime, however, we are observing three approaches to solving this issue: 1) maximize technological capability at the expense of convenience and target early adopters (the path of the AVP), 2) optimize for convenience and wearability in order to democratize XR for the masses, but at the expense of technological capability (the path of the Ray-Ban Smart Glasses), or 3) create a middle-of-the-road option that provides a useful, but not fully loaded, experience through a relatively low-weight headset (i.e., the Meta Quest).

“Meta’s two headsets demonstrate a striking economy of means. They deliver as much as possible using as little as possible, and in many ways the best-in-class value of its devices are as impressive a feat as Apple’s elegant vision statement. The comparison, if one holds, is between what their respective manufacturers call a ‘mixed reality VR headset’ and a ‘spatial computer’; the latter hoping to usher in a new generation of technology and a new era of media to go along with it. Time, along with those of us who buy and use these, will tell which path wins out in reality—virtual, augmented, and otherwise.” - Jon Bruner, Lumafield report

Over time, we think the likes of the Meta Quest and AVP will continue to get lighter and more wearable (not dissimilar to how early computers took up over 1,800 sq ft of real estate while their successors now fit in your smartphone) whilst seeking to add more functionality. Meanwhile, Meta’s Smart Glasses will seek to add as much functionality as the technology allows without getting too unwieldy. Ideally, the ultimate end-state product possesses the full functionality of the AVP while retaining the barely-there form factor of Meta’s glasses.

We believe the paths of the AVP and smart glasses will converge towards the median over the long run, but given they are operating on such diametrically opposed ends of the technical and hardware spectrum, we think it may be challenging to envision a steady state where they meet exactly in the middle. There is a non-zero possibility of the Smart Glasses and AVP eventually addressing two separate user segments, but as early adopters ourselves we fundamentally get excited about the new paradigm which a lightweight MR headset would bring, and pine for its arrival.

Ultimately, it is the cloud and edge computing infrastructure being developed for AI that could hold the key to balancing form factor with processing power: 

“AI plus spatial. That's the big story here… I had a Game Boy, I had a camera, I had a flip phone, I had an MP3 player, all in my backpack that I was carrying around. Now they've all converged into what I call my phone. There's going to be the same thing that's going to happen where the phone will be kind of a local GPU that I can access with a lightweight headset, but it'll triangulate data from the cloud and the edge… Developers are going to have to learn how to code differently for that transition because it's a much thinner client on the device and that application on the device is doing mostly traffic shaping. It maps directly to mobile.” - Nanea Reeves, Co-Founder and CEO of immersive wellness app TRIPP
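
To make that split concrete, below is a minimal Python sketch of the architecture Reeves describes: a thin client on the headset samples pose locally, hands rendering off to an edge/cloud GPU, and spends its own cycles on little more than traffic shaping. Every class and function name here is a hypothetical stand-in of ours, not a real XR or cloud-rendering API.

# Minimal sketch of the "thin client + edge GPU" split described above.
# All names (EdgeRenderer, ThinClient, render_frame, ...) are illustrative
# inventions for this piece, not a real XR or cloud-rendering API.

import time
from dataclasses import dataclass


@dataclass
class HeadPose:
    """6-DoF pose sampled from the headset's on-device tracking."""
    x: float
    y: float
    z: float
    pitch: float
    yaw: float
    roll: float


class EdgeRenderer:
    """Stand-in for a GPU-backed render service running in the cloud/edge."""

    def render_frame(self, pose: HeadPose) -> bytes:
        # A real service would rasterize the scene for this viewpoint and
        # return an encoded video frame; here we just fabricate a payload.
        return f"frame@({pose.x:.2f},{pose.y:.2f},{pose.z:.2f})".encode()


class ThinClient:
    """The headset side: sample pose, ship it upstream, display what returns."""

    def __init__(self, renderer: EdgeRenderer, target_fps: int = 72):
        self.renderer = renderer
        self.frame_budget = 1.0 / target_fps  # seconds per frame

    def sample_pose(self, t: float) -> HeadPose:
        # Placeholder for on-device sensor fusion (the main local workload).
        return HeadPose(x=0.0, y=1.6, z=0.1 * t, pitch=0.0, yaw=5.0 * t, roll=0.0)

    def run(self, frames: int = 5) -> None:
        for i in range(frames):
            start = time.monotonic()
            pose = self.sample_pose(i * self.frame_budget)
            frame = self.renderer.render_frame(pose)  # a network hop in real life
            print(f"displaying {frame.decode()}")
            # "Traffic shaping": sleep out the remainder of the frame budget so
            # the upstream pose rate never exceeds what the link can carry.
            elapsed = time.monotonic() - start
            time.sleep(max(0.0, self.frame_budget - elapsed))


if __name__ == "__main__":
    ThinClient(EdgeRenderer()).run()

In a real deployment the render_frame call would be a network round trip, and the client’s job would expand to include reprojection and bandwidth-aware quality scaling, but the division of labor stays the same.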

Why Does XR Matter to Us?

One way to think about the potential of XR is the effect it has on data storage and transfer, which Chris Paik of Pace Capital talks about here. Historically, step-function increases in the throughput of data transfer have unlocked new modalities of media production and consumption, which in turn have birthed some of the largest products and tech behemoths of today. A great example from Chris is the succession of mobile social networks, founded in ascending order of data-transfer throughput – Twitter in 2006, Instagram in 2010, Snapchat in 2011, and TikTok in 2016 – an ordering made possible by the evolution of data transfer from the written word to images and, finally, video.

"It could not have happened in any other order because of the minimum packet size of the content on that platform. So Twitter was text-based, with the smallest packet size. And then when mobile bandwidth was in its infancy, it was still sufficient to transfer those kilobits of information. But it basically wasn't until 3G that Instagram was possible. You had compressed images. And in order for the average person to have a high-fidelity experience, you needed a wide distribution and attachment rate of that. Snapchat coming shortly thereafter, also imagery and short-form video, and then TikTok three years later with longer-form video with audio also part of the consumption experience” - Chris Paik, Pace Capital

We agree with Chris, and this is fundamentally why, as venture investors, we continue to pay close attention to the space. Mixed reality represents a paradigm shift in the rate of data transfer as we move from input mediated by our hands – the mouse and keyboard – towards eye- and hand-tracking.

Eye-tracking in particular is revolutionary: it unlocks a completely novel and dramatically faster modality of data transfer by bypassing the lossy physical loop between brain and hand. This interface unlock could birth new modes of everything: work, communication, media consumption, etc. It also has the potential to revolutionize spaces like healthcare with optometric enhancements such as autofocals, or lenses that automatically focus your vision as you look around.
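
One concrete way to picture this shift is the look-then-pinch interaction pattern popularized by recent MR headsets: the eyes do the pointing, acquiring a target almost instantly, while the hand supplies only a low-bandwidth confirmation gesture. The sketch below is a hypothetical event loop of our own devising; none of these types or functions come from any headset SDK:

# Hypothetical gaze + pinch selection loop -- purely illustrative, not any headset SDK.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class UIElement:
    name: str
    # Bounding box in normalized viewport coordinates (0.0 to 1.0).
    x0: float
    y0: float
    x1: float
    y1: float


def element_under_gaze(gaze: Tuple[float, float],
                       elements: List[UIElement]) -> Optional[UIElement]:
    """The eyes do the pointing: return whichever element the gaze lands on."""
    gx, gy = gaze
    for el in elements:
        if el.x0 <= gx <= el.x1 and el.y0 <= gy <= el.y1:
            return el
    return None


def handle_input(gaze: Tuple[float, float],
                 pinch_detected: bool,
                 elements: List[UIElement]) -> Optional[str]:
    """The hand supplies only a one-bit confirmation: pinch == 'click'."""
    target = element_under_gaze(gaze, elements)
    if target and pinch_detected:
        return f"activated {target.name}"
    return None


if __name__ == "__main__":
    ui = [
        UIElement("Open Mail", 0.10, 0.10, 0.30, 0.20),
        UIElement("Play Film", 0.60, 0.50, 0.90, 0.70),
    ]
    # Simulated input samples: (gaze position, pinch state).
    for gaze, pinch in [((0.20, 0.15), False), ((0.20, 0.15), True), ((0.70, 0.60), True)]:
        print(handle_input(gaze, pinch, ui) or "no selection")

Compared with steering a cursor across a screen, the pointing step here costs essentially nothing, which is why we see eye-tracking as a step change in input throughput rather than a convenience feature.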

Just like it was hard to envision the possibilities of the iPhone when it was first launched, we are now once again at the early innings of the MR S-curve and exploring the possibilities of the applications which would live natively on MR devices. 

So now onto the billion-dollar question that everyone has an opinion on: what is the killer use-case for XR?

Growth Phase: What Does The Killer-App for MR Look Like? 

In order to theorize what a potential killer use-case would look like for XR, we spent time in the previous section thinking about what mass-market adoption of XR devices would look like, and whether there is any merit to the current idea that the AVP represents “the iPhone moment of XR”.

This is where we stumble into a bit of a Catch-22: a mass-adoption outcome implies that devices possess i) a lightweight form factor and ii) a vibrant, horizontal application ecosystem of use-cases that cannot be replicated by any other device. Yet, per our earlier discourse, we are still too early in the MR S-curve to know how the form factor actually plays out, and without visibility and stability in the form factor we should not expect to see that vibrant app ecosystem flourish.

Our approach, therefore, is to think probabilistically about the potential paths for the XR industry, which we think can be broadly outlined as two discrete scenarios:

  • Smartphone Level Outcome – Greatest scale of mass adoption; users carry their XR devices everywhere they go, similar to their iPhones/wearable devices
  • Console Level Outcome – Still widespread adoption, but at a smaller scale than the smartphone; implies mostly home/workplace use

Should the North Star outcome for XR devices be that of the smartphone (~7b users globally, or 85% of the world population), or will it look more like a PlayStation/Xbox outcome (~600m console gamers globally)?

The Smartphone Level Outcome

The most monolithic platform shifts of our time, such as the PC and the mobile phone, whilst capable of gaming, more critically catered to a broader market via diversified use-cases beyond gaming, such as communications (phone, messaging, emails), geography (map navigation, weather), and media consumption (music, video, social media), amongst others. That the mobile phone is capable of all these functions is not surprising – it was, after all, always meant to be a “pocket PC” whose form factor was made possible by decades of innovation.

Non-consumer applications have traditionally proved to be key catalysts for growing opportunities in evergreen spaces. Training, simulations, education, manufacturing, AECD, and more are entryways for users who would otherwise likely not have access to a headset as a consumer. This has been a contributing factor in the increased adoption of headsets and in XR’s growth trajectory.

However, for now, XR cannot beat the convenience and form factor of the smartphone:

“Unless you're getting rid of the phone, [XR displays are] redundant until it's better than the phone display… You're not going to wear it everywhere. There's a significant amount of friction. If you're a gamer and you like the big screen experience, go ahead. But that's just a pair of screen reflectors. Is that really spatial computing as we understand it?” – Charlie Fink, Journalist and Professor

So if there is no better reason to use an XR device than what the smartphone already offers, it is unlikely that XR will proliferate to the same level.

The Console Level Outcome

“You can deny seriousness, but not play,” Johan Huizinga wrote in his book Homo Ludens, which portrays the desire to play as an innate and immutable aspect of human nature. Naturally with new consumer technologies there is a desire to create recreational experiences that provide enjoyment. Just as people found a way to play on mobile phones, PCs, and even televisions, human nature will drive that same evolution with XR creators and users too. 

For this reason, plus the tech’s novelty, XR is often considered a gaming device, but history has shown us this is not always true. Gaming was, and will likely continue to be, the tip of the spear for onboarding early adopters onto XR, but in our research we have found that platforms which offer non-gaming use-cases on top of gaming use-cases address a far larger total addressable audience and represent a far more exciting and outsized outcome for XR. In fact, adoption of XR headsets for non-gaming use-cases is starting to find product-market fit with users.

If our hypotheses are right, then the AVP is likely not the “iPhone moment of XR”, but more akin to the “Macintosh moment of XR”, as Avi Bar-Zeev so eloquently puts it. This also means it is currently far too early to theorize on what the “killer use-case” for XR will be – many have rushed to claim that gaming will be the killer use-case of MR, but we think the jury is still out on that question.

Our view? We think that trying to figure out the killer use-case of XR is basically the wrong question to ask. The most important question to ask today is, “What capabilities do MR devices possess that unlock new use-cases that were never before possible?” Once you break that question down further, you are left with one last question to ask: “What is the core value proposition of XR devices?”

And only then will we have broken this entire problem down from first principles into the most important axiom of MR: the innovation of “presence.”

Why Presence is the Billion Dollar Question

What exactly is presence, and why is MR driving a paradigm shift there? Presence has largely been linked to fully occlusive VR, but also affects the user experience in AR and MR to varying degrees.

Source: Milgram, P., and Colquhoun, H. (1999). “A Taxonomy of Real and Virtual World Display Integration,” in Mixed Reality: Merging Real and Virtual Worlds.

MR in particular – such as that of the AVP – lives on a spectrum wherein real and virtual entities converge. One aspect unique relative to VR and AR is that spatial presence in MR is facilitated by both real and virtual environments.

Put another way, the fact that a user can toggle between the real and the virtual allows for varying degrees of occlusion while preserving a physical anchor for presence: the user can simultaneously remain present in reality while overlaying and interacting with virtual elements.

Nonny de la Peña, dubbed the Godmother of VR, calls what we feel in immersive experiences the “duality of presence.” As she puts it, “you know that you’re still in the room where you are, but you feel like you’re [in another place] too, so you feel like you’re here and there at the same time.” She goes on to say that “it’s interesting that we have that ability—that our minds have the ability to split into two like that. But it must come out of something like all the many years of storytelling and hearing and feeling. This just takes that and brings it to another level, where it really tricks your mind into feeling present on the scene.”

Once someone is in the headset, they are able to immerse themselves beyond the 2D screen in a way that elicits a stronger emotional response – hence why VR is often called “the empathy machine.” Many agree it can be similar to what we feel in real life when confronted with emotionally immersive experiences. A prime example of how this can affect users: immersive experiences can attract donors and boost donations for nonprofits, or power journalistic pieces that bring awareness to social issues.

Spatial audio and video, and volumetric video, allow users to perceive their inhabitance of a space across multiple levels of sensory stimulation. A user’s sense of presence is redefined to fit the parameters of the virtual world surrounding them. As Arcturus CEO Kamal Mistry says, when someone is in volumetric video, they forget they’re “actually in a virtual world because it's video, it looks real. And so what you'll find is as these human beings that are [in the video], like holograms, come close to you… you follow social norms of distancing, and if someone gets close to you, you get out of the way.”

Beyond building empathy, this sensory override can have a profound impact on entertainment experiences. In the same way gamers immerse themselves in games via accessories, or people go to movie theatres to block out the outside world, XR allows users to blend the real and the virtual so that they effectively feel transported to a different realm. Artificial olfactory and gustatory innovation is still being worked out, but in the meantime heightened sight, hearing, and touch provide enough immersion to trick your body into thinking the things happening to you are real. And frankly, that might be all we need.

Conclusion

So… is XR “there” yet? As in, has XR found its entry point, and is it living up to the status it needs to propel technology’s next evolution? No. Will it very likely get “there”? Yes. That means the present is one of the most crucial times to accelerate and support XR’s journey along the S-curve. Investment is a key driver of innovation and adoption, since it provides creators with capital to diversify the current offerings and help XR find its way to critical mass. As hardware prices decline and more consumers are willing to give the tech a chance, developers and investors could reap massive benefits as they shape MR’s killer applications and spatial computing developments.

We believe the AVP has ushered in a new era of a technology that has always had promise but no clear use case outside of specific enterprise applications. Profitability in the market is currently a challenge, but MR’s potential goes beyond the niche or the novel. We believe the keys to a fundamental platform shift include form factor configuration, technological advancement and compelling use-cases – any of which could hinge on innovative startups with transformative potential. 

_______________________________

Thank you to our contributors who were kind enough to share their insights, experiences and ideas for this piece:

  • Nanea Reeves, Co-Founder and CEO, TRIPP
  • Calin Pacurariu, Co-Founder and CEO, Spatial Inc.
  • Matthew Price, Responsible Innovation & Emerging Technology, Accenture / Fellow, World Economic Forum
  • Vinay Narayan, Global Executive Solutions Lead, AI and R&D, Meta
  • Charlie Fink, Professor, Chapman University and Arizona State University / Columnist at Forbes
  • Neil Redding, Near Futurist
  • Kamal Mistry, CEO and Chairman, Arcturus