
A Lucid Look at The State of AR Technology

Reading Time: 14 minutes



Image source: Brielle Garcia, via Snap

In October last year, we shared our Perspectives on Extended Reality — an umbrella term by which we encompass Augmented, Mixed, and Virtual Reality. Much has happened since then. From Apple reportedly working on an AR/VR “reality OS,” to Qualcomm introducing a wireless version of its Smart Viewer headset, to Google now testing prototypes in public, the AR race is officially on. This presents us with the perfect opportunity to follow up with a dedicated deep dive on the state of Augmented Reality (AR).

Now, a quick reminder: as opposed to VR, whose sense of immersion relies on the creation of a reality distinct from ours, AR is about enhancing the world we live in. Last year, Niantic’s Founder and CEO John Hanke notably dismissed the idea of a “digital escape” as “a dystopian nightmare” and instead encouraged others in the industry to “lean into the ‘reality’ of augmented reality — encouraging everyone, ourselves included, to stand up, walk outside, and connect with people and the world around us.” This is the vision we’ll be discussing here.

AR has seen some notable traction over the past few years. Research conducted by Deloitte for Snap last year found AR adoption to be “tracking with the mobile usage boom,” and estimated that “nearly 75% of the global population and almost all smartphone users will be frequent AR users” by 2025.

Today, however, most AR consumption still takes place on social and communication apps and within the confines of the smartphone screen, rather than on the AR-native wearables that have been consistently presented to us as inevitable. It seems like the various layers of the stack are progressing at different speeds.

On the one hand, software appears to be pushing forward. Multiple companies — most of them, tech giants — now provide developers with the core capabilities they need to build compelling consumer AR applications. These toolboxes are lowering barriers to creation and empowering brands and indie creators alike to experiment with the medium.

On the other hand, hardware is facing challenges. Despite their much-publicized ambitions, even deep-pocketed companies like Apple and Meta have been repeatedly postponing the release of their first AR headset. The dream of experiencing AR on a dedicated device still seems like a distant one. In the meantime, however, the use of AR on smartphones, already widespread, offers a convincing transition toward a more fully immersive future.

Looking at these two aspects will help us grasp where AR stands today, and what needs to be solved if we want to bring the technology to the next level.

Fast-Forward: A Look At Our AR Future

Before we dive into the nitty-gritty of today’s market, let’s remind ourselves what it is that AR is building toward.

At its core, AR promises to enrich our surroundings with a digital layer of content and context that can be activated at will depending on our needs. The technology itself is agnostic: it can (and will) be used across all use cases, from fun to commerce and from education to navigation. Snap found that “65% of AR consumers around the world and across generations use AR to have fun,” but “76% of people expect and desire to use it as a practical ‘tool’ in their everyday lives.” We expect both fun and utility to blend more and more seamlessly in the future.

But what does that mean for your everyday life? Imagine donning your AR glasses in the morning. A snapshot of your agenda pops up, complete with the day’s weather and news, both pulled from your favorite apps. As you go on with your day, your device helps you get to work, displaying arrows and warnings to shorten your commute. Perhaps you need to make a call while driving? Your glasses display names and faces as you go through your contact list. While at work, you’re able to seamlessly read visual data, manipulating it to instantly share it with your colleagues. Dining out? Pointing at a marker on your empty plate, you conjure up a digital preview of a dish to get a sense of what it looks like. Back at home, you browse through your favorite streaming service and watch a movie from your headset instead of from your TV screen.

Source: Brielle Garcia, via Snap

You get the picture. Throughout your day and across all these activities, you’ve been using the same AR-powered wearable, a (hopefully) lightweight, inconspicuous headset that’s able to surface contextualized information and data based on your specific needs. Besides context- and object-based content, this device can also pull relevant insights from external apps — whether time-sensitive like traffic information or evergreen like someone’s contact info. In addition, that content can not only be consumed individually, but shared with others, too. Whether indoors or outdoors, for fun or for work, you can instantly enter and influence a shared, persistent digital experience.

This future relies on the “AR cloud:” an invisible, ubiquitous, multi-layered, real-time 3D representation of our physical world. As noted in our report last year, “the AR cloud enables the unification of physical and digital worlds by delivering persistent, collaborative and contextual digital content overlaid on people, objects and locations to provide people with information and services directly tied to every aspect of their physical surroundings.” Although names may vary — Meta mentions “Live Maps” while Magic Leap talks about the “MagicVerse” — the goals are similar: it’s about capturing, understanding, and tagging our physical world in order to mirror and, quite literally, augment it digitally.

With that in mind, let’s see how close to this future we stand today.

Software: Continuous Innovation Empowers Creators With New Tools

Looking at recent developments on the software front, there are many reasons to be optimistic. Here’s a refresher on some of the announcements from the past 12 months:

- October 2021: Meta (still known as Facebook at the time) teased Polar, a lightweight mobile app for AR creation that leverages the company’s Spark AR platform.
- November 2021: Niantic opened Lightship, the foundation for all of Niantic’s products, to developers globally.
- March 2022: Snap launched Custom Landmarkers, which lets creators produce geospatially anchored AR content.
- April 2022: Snap launched Lens Cloud, a suite of backend services that allows developers to build dynamic, multiplayer experiences.
- June 2022: Apple unveiled a series of improvements coming to the ARKit toolkit across motion capture, camera access, and new “location anchors” for outdoors navigation.
- July 2022: Snap announced Snapchat for Web, a Snapchat+ exclusive that will bring the company’s signature Lenses to millions of new screens.

The picture is clear: from one product release to the next, creators and developers are getting access to increasingly advanced AR tools. These platforms provide them with a variety of technology blocks that include everything from face detection to 3D meshing to plane tracking to particle effects. In particular, visual positioning, which lets users place persistent virtual objects in specific locations, has seen multiple breakthroughs that are paving the way for ever-larger and more social experiences. All this is dramatically lowering past barriers to entry and enabling experimentation across all use cases and on all kinds of objects — from faces to products and from landmarks to scenes.

While it’s easy to marvel at the current pace of innovation, it’s all the result of years of patient groundwork. Apple’s ARKit and Snap’s Lens Studio both launched in 2017; Google’s ARCore is a continuation of the Tango platform, which launched in 2014 and was discontinued in 2018. Niantic’s Lightship was first introduced in 2018 as the Niantic Real World Platform, while Facebook’s creative platform Spark launched in 2019. Over the years, all these players have not only increased the range of their capabilities, but also made them more accessible through a mix of funding and education.

This process was also bolstered through active M&A. Niantic alone has acquired no fewer than seven AR-focused startups, including: Escher Reality, a startup developing technology for persistent multiplayer experiences; Matrix Mill, which specialized in 3D occlusion effects; 6D.ai, a spatial mapping company; and 8th Wall, a WebAR development platform. In 2020, Facebook acquired Scape, a company that built 3D maps from ordinary 2D content, while game engine giant Unity acquired RestAR, a startup specializing in high-resolution 3D scanning of real-world objects. (Note that we’re only mentioning software-related acquisitions here; the complete list including hardware would be much longer!) We expect more such moves in the coming months as competitors aim to fill gaps in their respective stacks.

To draw developers in, each player has been leaning on its own set of strengths.

- In Snapchat, Snap has one of the most popular social apps and a formidable Trojan horse for AR: as of April 2022, over 250,000 developers on the app had published a total of 2.5 million Lenses, which had been viewed over 3.5 trillion times in aggregate. Given Snapchat’s demographics, most of this consumption notably comes from younger users: according to Snap, “Gen Z / Millennials are both 71% more likely to use AR all the time vs. older generations.”
- Apple has the reach and authority of iOS, as well as decades of know-how in maximizing the potential of its hardware. The addition of LiDAR technology to the “Pro” versions of the iPad and iPhone further increased the company’s lead on that front by bringing improved depth-sensing capabilities to these devices.
- Google can leverage services like Google Translate, Lens, Search, and Maps to enable real-time translation, browsing, shopping, and navigation.
- Alongside Lightship, Niantic is now pushing Campfire, “a new social app that helps Niantic Explorers discover new people, places and experiences around them.”
- Finally, TikTok’s up-and-coming Effect House and startups like Blippar, Zappar, and VuForia further broaden the range of options, enabling developers to choose their preferred platform depending on their own priorities, whether it be reach, ease of use, or specific features.

All these efforts have been a boon for the application layer, as developers leveraged these newfound capabilities to wow consumers — in June, Apple’s Tim Cook teased an impressive 14,000 AR apps on the App Store. Retail was among the first industries to dive in in 2017 with IKEA Place, followed by numerous household names, with Sephora, Warby Parker, Adidas, and Nike all using the technology for virtual try-ons. New use cases continue to appear, including wellness: TRIPP, a mindfulness-focused immersive platform, recently expanded from VR and mobile to AR, too. (N.B. TRIPP is a BITKRAFT portfolio company.)

On the entertainment front, Pokémon Go remains unassailable — the game hit $6B in lifetime revenue in June — but others like SpotX (recently acquired by Niantic) and Resolution Games (another BITKRAFT portfolio company) are now tackling the segment too. Still, success is anything but a given: even Niantic has been struggling to replicate the success of Pokémon Go, and had to shut down Harry Potter: Wizards Unite after the game failed to take off. As with gaming at large, even the most powerful IP won’t make up for lackluster execution.

We’re excited to see this space continue to develop in the coming months and years. Along with more and more diverse use cases, we expect to see progress on the monetization front, as more brands start allocating some of their advertising budgets to the medium and the platforms try to incentivize their creator communities. With more prominent IP entering the space, we anticipate consumer interest to only grow from here.

Hardware: Despite Big Ambitions, Ongoing Challenges Delay The Standalone AR Future

Compared to the software side of things, the current state of AR hardware feels underwhelming. In the past few months, Meta, Apple, Snap, and others have all made it clear the technology is still years away from mainstream availability.

That’s mostly because they’re intent on headset-powered AR. Indeed, while phone-based AR is only getting better, headset-based AR faces a number of obstacles. Besides serious technological challenges, players in the space must also account for both pricing — lest they cut themselves off from the general public — and cultural concerns. Let’s work through them.

Technology

Compute power

Although we refer to it by a single term, “Augmented Reality” encompasses a myriad of technologies that will be activated differently depending on the use case. Applying a “simple” AR lens onto someone else’s face, for example, takes a mix of 2D face detection, 3D face mesh prediction, and visual effects rendering. Meanwhile, displaying an interactive animation over your surroundings may require scene segmentation and understanding, custom markers, and physics and collision.

All these capabilities require substantial compute power — something that, unfortunately, clashes with the core promise of mobility. At the moment, AR glasses either offer a limited set of features embedded in a small frame, or more advanced capabilities at the expense of portability. Both generations of Magic Leap’s industrial-grade headset rely on a wired processing puck (the “Lightpack” on the first device, the “Compute Pack” on the second). Even Apple might have to forgo its signature sleekness: reports last year suggested that the company’s rumored mixed reality headset was “designed to rely on another device, and may have to offload more processor-heavy tasks to a connected iPhone or Mac.”

These trade-offs can be forgiven for now as most players openly treat their first releases as experiments. But if tech companies truly expect consumers to carry AR devices all day in the future, they’ll need to learn to make the most of inevitably limited design space.

Battery life

Battery requirements are closely connected to those of computing power.

Think of your average smartphone or laptop today. If your screen time is anything like ours, it’s unlikely you can go even two days in a row without charging your devices. Video alone, which now accounts for a significant share of our screen time, is known to significantly drain a battery. Now, imagine how much of a toll rendering high-res, interactive 3D visuals will take on your battery life. The battery on the Magic Leap 2, for example, only goes for “about 3.5 hours,” while Snap’s latest Spectacles have an average battery life of about… 30 minutes. That’s hardly enough to replace our phones anytime soon.
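The arithmetic behind these numbers is straightforward. As a rough sketch (the battery capacity and power-draw figures below are illustrative assumptions, not published specs for any device):

```python
# Back-of-the-envelope battery runtime: capacity in watt-hours divided by
# average power draw in watts. All figures here are illustrative assumptions.

def runtime_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Estimated runtime in hours for a given battery and average draw."""
    return capacity_wh / avg_draw_w

if __name__ == "__main__":
    # A hypothetical ~20 Wh pack driving ~5.5 W of sustained compute,
    # display, and radio load lands in the same ballpark as the
    # "about 3.5 hours" quoted above.
    print(f"{runtime_hours(20.0, 5.5):.1f} h")
    # The same pack at a smartphone-class ~1 W average draw lasts far longer,
    # which is why always-on 3D rendering is such a battery problem.
    print(f"{runtime_hours(20.0, 1.0):.1f} h")
```

The takeaway: without much larger batteries (which conflict with weight targets) or much lower sustained draw, all-day AR rendering simply doesn’t fit the energy budget.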

What’s already an issue with flat screens is sure to be an even bigger one when it comes to wearables. As Android Authority put it a few years ago, “Battery life is effectively inversely correlated to how frustrated you will be with a mobile device.” And unlike with smartphones and computers, which at least remain usable when plugged in, any time that you have to charge your AR glasses is time that you can’t have them on your face. AR companies will need to address this fact if they don’t want their devices to collect dust.

Optics

Optics are another notable pain point — we are talking about glasses after all.

One aspect of this issue has to do with Field of View (FOV for short), a measure of how narrow or wide your view is while wearing a given device, typically measured by a lens’s “diagonal degrees.” FOV is central to comfort and the overall user experience, as it determines how much visual information you’re able to catch at a glance — “the extent of observable world at any given moment.” The smaller the FOV, the tinier the window you can see content through, which then forces you to move around to align that window with the specific part of content you’re trying to see. Whether you’re looking at a map or trying to run away from digital zombies, you want your FOV to be as large as possible, if only to avoid unnecessary head movements.

Still, progress on this front has been slow. The Magic Leap 2 offers 70 diagonal degrees, after the initial Magic Leap 1 offered only 50 degrees. Microsoft’s HoloLens 2 has a 52° diagonal FOV, whereas Snap’s Spectacles 4 offer only a 26.3° FOV — for comparison, the VR headset standard is now around 110 degrees. Clearly, much is still needed to expand that visual window, but the industry is pushing forward with the help of academia, itself funded in part by hardware manufacturers like Intel.
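To make those diagonal figures concrete, here is a small sketch of how a diagonal FOV relates to horizontal and vertical FOVs on an idealized flat, rectilinear display. The sample figures are commonly cited approximations, not official specs, and real AR optics deviate from this simple model:

```python
import math

def diagonal_fov_deg(h_fov_deg: float, v_fov_deg: float) -> float:
    """Diagonal FOV of an idealized flat, rectilinear display, using
    tan(d/2)^2 = tan(h/2)^2 + tan(v/2)^2 (angles don't simply add like
    pixel dimensions, because the display subtends angles, not lengths)."""
    th = math.tan(math.radians(h_fov_deg) / 2)
    tv = math.tan(math.radians(v_fov_deg) / 2)
    return 2 * math.degrees(math.atan(math.hypot(th, tv)))

if __name__ == "__main__":
    # Approximate HoloLens 2-class figures (~43 deg x ~29 deg, an assumption)
    # yield a diagonal of roughly 50 degrees, in line with the ~52 deg
    # diagonal quoted above.
    print(f"{diagonal_fov_deg(43, 29):.1f} deg")
```

This also shows why the gap with VR’s ~110-degree standard is so hard to close: widening the diagonal requires widening both axes of the optical window at once.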

Another challenge with optics lies in overall visual clarity. AR glasses rely on optical engines, or “combiners,” combinations of a waveguide — a thin piece of clear glass “with unique light-transmitting characteristics” — and a projector tucked away out of the line of vision that projects digital images in an area of the lens, “then propagates [them] along the lens to an extraction point in front of the eye.” This is what allows virtual and “real-world” information to arrive simultaneously to the human eye.

Because AR superimposes digital information on top of your surroundings, you need that information to be clearly visible no matter which kind of environment you find yourself in. Put simply, “If the display is instead strong enough to illuminate your eyes with enough power to compete with the sun rays, then you see augmentations also outdoors.” If not, you may see only vague silhouettes, too translucent to make out. To achieve a sufficient level of clarity takes a savvy mix of brightness (measured in nits), contrast ratio, color accuracy, and more, all of which may need to be dynamically adjusted based on real-world backgrounds and lighting conditions. This requires significant improvements not just in materials technology but also in our understanding of how all those components work together. For this reason, we expect optics to be a clear area of focus in the coming years.
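The outdoor-visibility problem can be captured with a simplified model. Unlike an opaque screen, a see-through display is additive: the virtual image is projected on top of the background rather than replacing it, so the achievable contrast collapses as ambient light grows. The luminance figures below are illustrative assumptions:

```python
def effective_contrast(display_nits: float, background_nits: float) -> float:
    """Simplified contrast model for an additive see-through display:
    the background leaks through, so the brightest virtual pixel sits at
    (L_display + L_background) against a floor of L_background."""
    return (display_nits + background_nits) / background_nits

if __name__ == "__main__":
    # A hypothetical 1,000-nit display against a ~100-nit indoor wall:
    # a usable ~11:1 contrast.
    print(f"{effective_contrast(1000, 100):.1f}:1")
    # The same display against a ~5,000-nit sunlit scene: ~1.2:1,
    # i.e. the "vague silhouettes" described above.
    print(f"{effective_contrast(1000, 5000):.1f}:1")
```

This model ignores lens tinting and dynamic dimming, both of which real devices use precisely to claw back contrast outdoors.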

Networking capabilities

The need for solid networking capabilities is a direct consequence of the previous points.

The focus on portability and the limited hardware space available for compute power mean AR content won’t be stored locally but will need to be streamed over mobile networks. While some data may still be downloaded ahead of time to make sure we can access it even with no internet, the sheer volume and diversity of content required to instill AR into our daily lives will make it impractical to rely on local storage alone.

The experience won’t just need to be ubiquitous; consistency will be paramount. Every kind of content should be delivered smoothly and reliably, whether we’re indoors or outdoors, static or moving; any lag, stutter, or buffering would seriously hinder immersion. Meta’s Founder & CEO Mark Zuckerberg said as much, stating that “creating a true sense of presence in virtual worlds delivered to smart glasses and VR headsets will require massive advances in connectivity.”

Large-scale, location-based social AR will come with even higher demands. The need to simultaneously drive and update dozens to hundreds of instances of a single virtual experience, while maintaining persistence across all devices, will take consistent, high-bandwidth, low-latency capabilities that today’s networks can’t provide. This is something that every internet infrastructure provider will need to consider in order to prepare for the future. From Qualcomm to Ericsson and from SK Telecom to Orange, the major network operators are all well aware of upcoming challenges and are increasingly betting on 5G to meet the market’s needs.
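A rough latency budget illustrates why today’s networks fall short. Immersive displays are often held to a motion-to-photon target on the order of ~20 ms; once on-device tracking, rendering, and display time are spent, very little remains for any network round trip. All stage timings below are illustrative assumptions:

```python
def network_budget_ms(motion_to_photon_ms: float, tracking_ms: float,
                      render_ms: float, display_ms: float) -> float:
    """Latency left for the network round trip after on-device stages,
    under a simple additive-pipeline assumption."""
    return motion_to_photon_ms - (tracking_ms + render_ms + display_ms)

if __name__ == "__main__":
    # With a ~20 ms motion-to-photon target and ~5 + 8 + 4 ms spent on
    # tracking, rendering, and display, only ~3 ms remain for the network,
    # well below typical 4G round-trip times. This is one reason 5G and
    # edge computing keep coming up in this context.
    print(f"{network_budget_ms(20, 5, 8, 4):.0f} ms")
```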

Price

Another challenge has to do with price.

This isn’t specific to AR. A study conducted by Morning Consult in March this year found that consumers “want cheaper VR technology before entering the Metaverse.” More generally, the democratization of any technology has historically been reliant on a decrease in price, whether you consider compute, local storage, or internet transit. Only under certain thresholds can a given technology truly claim to appeal to consumers, rather than to tech-savvy enterprise clients.

From this perspective, AR is nowhere near mainstream status. The first Magic Leap headset shipped in 2018 at about $2,300, while the Magic Leap 2 will go for a whopping $3,299. In February 2021, a report from The Information claimed Apple’s mixed reality headset would cost around $3,000. At these prices, most of the general public would be excluded from adoption.

Unfortunately, it’s not certain when we’ll see these prices start to decrease. AR glasses take a multitude of specialized physical components, many of which have their own distinct supply chain and only a limited number of potential suppliers. Because the technology is still very much in “R&D mode,” industry players are also not yet in a position to optimize for price. From micro-LED screens to waveguide displays, a great deal of research and experimentation is needed before individual components reach consumer-ready price points.

For now, the leaders in this space are working with external partners — in fact, whatever specs we do have for their upcoming devices are often inferred from which manufacturers we know they’ve been talking to. But it’s hard to imagine them depending on third parties for too long. In May 2021, Snap acquired WaveOptics, a company that developed the waveguides and projectors used in Snap’s Spectacles, for $500 million; in May 2022, Google acquired Raxium, a microLED display manufacturer. If AR becomes as important a platform, and as large a business, as these groups envision, owning the entire stack may become increasingly central to their success. Logistical concerns, the desire to control costs, and the need for increased integration between the hardware and software layers may prompt tech giants to integrate more of the supply (and value) chain, the same way they’ve been doing for years with their current product lines.

Culture

One final challenge in making AR truly mainstream is cultural.

The issue is actually twofold. Part of it has to do with what we’re willing to wear on our faces in public as we go about living our lives. Part of it has to do with how much technology others are comfortable being surrounded with. Here, the cultural and the technological overlap: since we’re probably never going to tolerate heavy facial gear, the technology’s acceptability will be a function of how seamless and miniaturized we can make it.

On that front, the industry offers some cautionary tales. The release of the infamous Google Glass in 2012 saw significant backlash against both the device and its adopters. One of the main concerns opponents had about the device was over privacy: because Glass had no external indicator light, many were wary of being recorded without their knowledge. This made the product a lasting symbol of the cultural disconnect between tech enthusiasts and more mainstream consumers.

Industry players have been careful not to repeat that mistake: they know getting the form factor right will be crucial to ensuring that people are comfortable wearing their devices, and seeing others wear them.

But sharing the same goal doesn’t mean they have the same vision for what that form factor should be. The first generation of Snap’s Spectacles was a bet on making technology fun, a way to appeal to a “cool kids” crowd already familiar with the Snapchat app. In contrast, the product’s fourth, developer-only iteration features a much more futuristic look. Others, like the now Google-owned North, or Meta through its collaboration with Ray-Ban, aimed for a more conventional style from day 1. With its removable Air Glass, Oppo is taking yet another approach to normalizing AR tech.

What all these examples demonstrate is that there is no single path to the mainstream: each particular form factor will come with its own trade-offs in terms of features and wearability. Until all AR glasses look equally inconspicuous, different models will continue to serve different audiences.

However, a successful AR hardware approach doesn’t have to solve all use cases at once with do-it-all glasses. Tilt Five (a BITKRAFT portfolio company) aims to “reinvent game night” with a holographic gaming system composed of tethered glasses, a gameboard, and a connected wand. Other companies, including Osmo and PlayShifu, are combining tangible pieces with readily available hardware to enable AR-powered learning. Whether these solutions are only transitional or here to last, these examples are proof that there are more avenues to explore in hardware beside standalone devices.

Parting Words

A lucid look at the state of AR hardware, and the warnings of the industry’s main contenders, remind us how important it is to take the long view. Massive improvements are still needed across the board to make AR wearables a reality, and many more form-factor iterations are likely to end up as unconvincing experiments. Until standalone AR becomes a reality, more hybrid approaches could serve as valuable on-ramps to accustom users to the merits of immersive computing.

In the meantime, there are plenty of opportunities to seize on the software and content side. To enrich the world with ubiquitous AR content and context will take strong ecosystems of artists and developers, whose work will draw consumers to AR platforms the same way they did to app stores in the early days of mobile. With more and more creative tools at their disposal, and more and more technology blocks on their way to being commoditized, creators can now produce and distribute AR more easily than ever — with monetization on its way, we expect. We can only encourage forward-looking developers to tackle the potential of immersive experiences today.

We at BITKRAFT plan to take an active part in fostering that new reality. As always, if you’re building in this space, don’t hesitate to reach out.


