
VRSUN

Hot Virtual Reality News

HOTTEST VR NEWS OF THE DAY


AR Development

Snap’s Top AR Exec Quits Ahead of Specs Consumer Debut

February 20, 2026 From roadtovr

Scott Myers, Snap’s top executive in charge of Specs, has left the company ahead of the planned release of its consumer AR glasses.

The News

Myers reportedly ended his six-year tenure at the company after a dispute with Snap CEO Evan Spiegel, tech outlet Sources claims, characterizing the disagreement as a “blow-up” centered on company strategy.

A Snap spokesperson confirmed Myers’ departure on Reddit, noting that Specs are still on track for release this year:

“Scott Myers has decided to step down from his role at Snap. We thank him for his contributions and wish him the best in his next chapter. We can’t wait to bring Specs to the world later this year. We remain focused on disciplined execution and long term value creation for our developer partners, community and shareholders.”

Myers came to Snap in 2020 to oversee all aspects of Specs, including hardware, software, product and operations. He previously held senior positions at SpaceX, Apple, and Nokia, according to his LinkedIn profile.

Snap Spectacles (gen 5) | Image courtesy Snap Inc

This comes at a critical moment for Snap. In September 2025, Spiegel noted in an open letter that the company is heading into a make-or-break “crucible moment” in 2026, positioning Specs as an integral part of the company’s future.

“This moment isn’t just about survival. It’s about proving that a different way of building technology, one that deepens friendships and inspires creativity, can succeed in a world that often rewards the opposite,” Spiegel said.

The consumer version of Specs is set to be the company’s sixth generation of glasses, following the release of its fifth-gen hardware in 2024. As ‘true’ AR glasses (read: not smart glasses like Meta Ray-Ban Display), the device is ostensibly set to frontrun some of Snap’s largest competitors.

My Take

It’s uncertain why Myers left Snap; the company even disputed the “blow-up” narrative with TechCrunch while offering no alternative explanation, which makes Myers’ departure an even greater mystery—especially on the eve of the company’s big consumer AR glasses launch.

Speculatively speaking, there is at least one recent sign that could point to trouble brewing in the background. Myers’ departure follows a recent move by the company to form a wholly-owned subsidiary dedicated to Specs.

Snap says the so-called ‘Specs Inc’ subsidiary will primarily allow for “new partnerships and capital flexibility,” including the potential for minority investment. More concretely, Specs Inc also insulates Snap from any potential failure.

Whether that betrays a lack of confidence is unclear, although losing the top executive who oversaw the release of the fourth and fifth-gen versions—notably the only two with displays and AR capabilities—doesn’t exactly smack of confidence.

Filed Under: AR Development, ar industry, News, XR Industry News

Apple Reportedly Accelerates Smart Glasses Development Amid Wider Push for AI Hardware

February 18, 2026 From roadtovr

Apple is reportedly accelerating the development of smart glasses, as the company is ostensibly making a shift toward AI-centric hardware.

According to a report from Bloomberg’s Mark Gurman, Apple is ramping up development of its forthcoming smart glasses, which are slated to head into production as early as December 2026, with public release expected sometime in 2027.

Apple’s smart glasses are being positioned to compete with Meta and EssilorLuxottica’s most recent smart glasses, the report maintains.

While this mostly echoes previous reports from last October, Apple appears to be accelerating development, having recently distributed a broader set of glasses prototypes within its hardware engineering division.

According to an all-hands meeting with employees earlier this month, CEO Tim Cook supposedly also hinted that Apple would be pushing hard into AI devices, noting that the company was working on new “categories of products” centered around AI.

“We’re extremely excited about that,” Cook said in the internal meeting, saying “[t]he world is changing fast.”

According to people familiar with Apple’s plans, the smart glasses (allegedly codenamed ‘N50’) are said to include two cameras: a high-resolution camera for photos and video, and another dedicated to computer vision tasks. The high-quality onboard cameras and overall build quality are expected to set it apart from competing products, the report maintains.

Array of Meta smart glasses | Image courtesy Brad Lynch

Similar to Meta’s audio-only smart glasses though, Apple’s N50 hardware isn’t expected to include a display of any kind, instead relying on cameras, speakers and microphones for things such as phone calls, AI queries, listening to music, and capturing images.

While Apple allegedly floated the idea of partnering with eyewear brands—similar to Meta’s partnership with EssilorLuxottica or Google’s partnerships with Warby Parker and Gentle Monster—the company seems to have more recently decided on developing in-house designs, which are said to arrive in a variety of sizes and colors.

“Early prototypes of the glasses connect via a cable to a standalone battery pack and an iPhone, but newer versions have the components embedded in the frame,” Bloomberg reports. “The design uses high-end materials, including acrylic elements intended to give the glasses a premium feel. Apple is already discussing launching the device in additional styles over time.”

This comes as Apple is investing more heavily in AI in an effort to better compete with Google and OpenAI, which goes part and parcel with a critical redesign of Siri. The report also maintains Apple is working on an AI-powered pendant and AirPods with expanded AI capabilities—all three of which will rely on visual input.

Notably, the report maintains that all three will rely on connection to iPhone. Apple did not respond to Bloomberg’s request for comment.

Filed Under: AR Development, News, XR Industry News

Meta Sold Over 7 Million Smart Glasses Last Year, Effectively Tripling Sales in 2025 Alone

February 12, 2026 From roadtovr

EssilorLuxottica reported its Q4 2025 financial results, revealing the company sold over seven million smart glasses last year.

The French-Italian eyewear conglomerate has been making smart glasses in partnership with Meta since the launch of the original Ray-Ban Stories in 2021.

Now, in its fourth-quarter results, EssilorLuxottica revealed it sold over seven million smart glasses last year—more than tripling sales since last reported.

In February 2025, the company announced it had sold two million Ray-Ban (Gen 1) smart glasses since release in late 2023.

Image courtesy Brad Lynch

It’s no wonder 2025 was a landmark year for the company though. Alongside Meta, EssilorLuxottica not only released a hardware refresh of its popular Ray-Ban Meta glasses, but also Oakley Meta HSTN, Oakley Meta Vanguard, and the $800 Meta Ray-Ban Display glasses—the company’s first smart glasses to include a heads-up display.

In addition to its smart glasses efforts, EssilorLuxottica maintains that 2025 marked a further acceleration in its “evolution from an optical company into a leading medtech and big-data group,” owing to growth across both its Nuance Audio hearing-aid glasses and AI-driven healthcare platform.

While Meta and EssilorLuxottica are the current market leaders in smart glasses, the XR wearables race has only just begun. As it stands today, companies largely see smart glasses as a first step toward the all-day AR glasses of the near future, with potential contenders including Google, Samsung, and reportedly also Apple.

Filed Under: AR Development, News, XR Industry News

Ray-Ban Smart Glasses Get Massive Utility Boost with Cool (but risky) ClawdBot Hack

February 10, 2026 From roadtovr

If you’re comfortable mucking around with a new open source project, you could be shopping on Amazon just by looking at an object with your Ray-Ban Meta smart glasses.

Ray-Ban Meta smart glasses are pretty useful out of the box, offering photo & video capture, calls, music playback, and your standard assortment of AI chatbot stuff. They don’t have an app store though, which means you’re basically stuck with a handful of curated services.

Now, indie developer Sean Liu has released an open-source project called VisionClaw that links Ray-Ban Meta smart glasses with OpenClaw (aka ClawdBot), essentially giving the autonomous AI agent eyes and ears.

Check out VisionClaw in action below, courtesy Liu:

now my clawdbot lives in my ray-ban meta glasses so i can just buy whatever i’m looking at pic.twitter.com/gWrijyTRhE

— xiaoan (@_seanliu) February 6, 2026

OpenClaw isn’t an AI model like ChatGPT or Google Gemini though. It’s an agentic layer—essentially a complex messaging layer built on top of an AI model that interacts with services on your behalf, like sending emails, managing shopping lists, or controlling smart home devices—just three of the 56+ tools OpenClaw can integrate with right now.

Basically, it works like this: VisionClaw uses Gemini Live for real-time voice and computer vision, which can do things like describe what you’re seeing and answer questions—basically the same sort of tasks you can do with the glasses’ native Meta AI.

Image courtesy Sean Liu

But once you want to actually interact with an app or service—like when you want to send a message over email or your favorite non-Meta messaging app like Signal or Telegram—Gemini Live hands off the request to OpenClaw, which takes action.
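In pseudocode terms, that routing logic is a simple dispatcher: perception queries are answered locally, while action requests are handed off to the agent. Here's a minimal, purely illustrative sketch of the pattern; the function names and routing rule are assumptions for clarity, not VisionClaw's actual code or the real Gemini Live / OpenClaw APIs.

```python
# Illustrative sketch of an agentic handoff: a perception layer answers
# "what am I looking at?" queries itself, while action requests are routed
# to an agent layer that acts on external services. All names are hypothetical.

def perception_layer(query: str, scene: str) -> str:
    """Stand-in for Gemini Live: describes what the glasses' camera sees."""
    return f"You are looking at: {scene}"

def agent_layer(action: str, target: str) -> str:
    """Stand-in for OpenClaw: performs an action via a service integration."""
    return f"agent executed '{action}' on '{target}'"

# Verbs that imply acting on a service rather than describing the scene.
ACTION_VERBS = {"buy", "send", "order", "message"}

def handle(query: str, scene: str) -> str:
    """Route a voice query: descriptions stay local, actions are handed off."""
    verb = query.split()[0].lower()
    if verb in ACTION_VERBS:
        return agent_layer(verb, scene)
    return perception_layer(query, scene)

print(handle("describe this", "a coffee grinder"))
print(handle("buy this", "a coffee grinder"))
```

The real project presumably uses the AI model itself (rather than a keyword list) to decide when to hand off, but the division of labor is the same: vision and voice in one layer, tool use in another.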

Users looking to run VisionClaw will need an iPhone, as Liu’s codebase is written as an Xcode/Swift app that specifically uses Meta’s Wearables Device Access Toolkit (DAT) for iOS to connect the phone to Ray-Ban Meta glasses.

Beyond that, you’ll also need a fair understanding of the risks involved with running OpenClaw on your personal hardware.

While it can do some pretty amazing things, it’s a third-party bit of software that could require you to input passwords, API keys, and personal information, which can open the user up to malicious actors. Notably, OpenClaw’s skill integrations could be written by anyone, so users need to be especially vigilant.

Filed Under: AR Development, News

Distance Technologies Reveals Military AR Goggles for Battlefield Awareness

January 21, 2026 From roadtovr

Distance Technologies has unveiled the Field Operator HUD (FOH), an AI-enhanced AR system designed for military vehicles ranging from light utility platforms to main battle tanks.

FOH is said to combine Distance’s own optics with AI-assisted data processing, which the company says improves situational awareness, survivability, and visual workload management in land combat environments.

Having undergone field trials with UK and Finnish forces, FOH integrates command-and-control functions with its AR optics by fusing multiple sensor inputs—ostensibly similar to Anduril’s EagleEye project, revealed in October 2025.

Image courtesy Distance Technologies

Distance says FOH is designed to present only the most critical information by using AI-driven sensor fusion, automated detection, and by integrating everything from thermal and night vision to data sourced from a wide range of vehicle-mounted sensors—something the company hopes will translate into more effective decision-making both in and outside of military vehicles.

The precise specs of the company’s various FOH configurations—which are said to include models for on-the-ground soldiers, pilots, and various types of ground vehicle operators—are predictably still under wraps.

Image courtesy Distance Technologies

On the company website however, Distance says FOH includes technology that “creates an independent lightfield for each eye, allowing us to control the perceived distance of the content on a per-pixel level. This makes it possible to match virtual elements 1-to-1 with reality for a completely natural XR experience.”

It’s also said to allow for visualizations that “appear on top of reality across the entire field of view, perfectly matching the observable world people see and experience around them.”

FOH is expected to be available for NATO and allied field trials by the end of Q1 2026, with broader deployment planned from 2027 through defense prime contractors.

Founded in 2024, the Helsinki, Finland-based company is building what it calls “the first true glasses-free XR solution.” It was founded by a host of XR veterans, including a cadre of alums from fellow Finnish XR startup Varjo: Urho Konttori, Jussi Mäkinen, Mikko Strandborg, Thomas M. Carlsson, and Petteri Timonen.

Filed Under: AR Development, ar industry, News, XR Industry News

Meta Pauses International Release of Meta Ray-Ban Display Glasses

January 7, 2026 From roadtovr

Meta Ray-Ban Display glasses seem to be selling too well, as the company announced it’s delaying the international rollout of its first display-clad smart glasses.

The News

Initially released in the US back in September, the $800 smart glasses—which include a single full-color display embedded in the right lens—were expected to come to a number of regions in early 2026, Meta said.

Now, the company says in a blog post it’s decided to “pause” the planned expansion to the UK, France, Italy and Canada, citing “unprecedented demand and limited inventory.”

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

The company characterizes stock as “extremely limited,” noting that it’s seen an “overwhelming amount of interest, and as a result, product waitlists now extend well into 2026.”

Meta says it will continue to focus on fulfilling orders in the US while they “re-evaluate [the] approach to international availability.”

My Take

I was looking forward to getting my hands on a pair of Meta Ray-Ban Display glasses here in Italy, one of the regions currently on “pause”—a word my Corpo-to-English translator says means I probably shouldn’t hold my breath.

While Meta Ray-Ban Display can’t do everything promised just yet—and doesn’t actually have an app store—the device can do a fair number of things I was hoping to test out to see whether it fits into my daily life.

After all, it can do everything the audio-only Ray-Ban Meta glasses can do in addition to serving up a viewfinder for taking photos and video, the ability to see and respond to messages via WhatsApp, Facebook Messenger, and Instagram, and give you turn-by-turn walking directions in supported cities.

Turn-by-turn Directions in Meta Ray-Ban Display | Image courtesy Meta

Months after launch, Meta says it’s also now pushed an update that includes a teleprompter, the previously teased EMG handwriting, as well as more cities for pedestrian navigation.

Still, the pause makes sense from a manufacturing perspective. Meta needs to go slow and deliberate with Meta Ray-Ban Display, if only because the device has likely been heavily subsidized to avoid being eye-wateringly expensive out of the gate; the company is no doubt eating a fairly high bill of materials, not least due to waveguide wastage rates. No app store also means no app revenue, making the first-gen decidedly more of a large beta test than anything.

So, right now it seems like Meta is deliberately going slow to make sure use cases, distribution, and supply chain are all in place before really cashing in on the second gen—maybe following Quest’s playbook. In 2019, the company released the original Quest only to toss out Quest 2 a year later, making for the company’s best-selling XR device to date—and also leaving everyone who bought the first-gen to upgrade only a year later.

Filed Under: AR Development, ar industry, News, XR Industry News

Alibaba Launches Smart Glasses to Rival Meta Ray-Ban Display

December 2, 2025 From roadtovr

Alibaba released a pair of display-clad smart glasses, ostensibly looking to go toe-to-toe with Meta Ray-Ban Display, which launched in the US for $800 back in September.

The News

China’s Alibaba, one of the world’s largest retailers and e-commerce companies, has just released its first smart glasses, called Quark AI Glasses, which run the company’s own Qwen AI model.

Image courtesy Reuters

Seemingly China-only for now, Quark AI Glasses come in two fundamental versions, available across Chinese online and brick-and-mortar retailers:

  • Quark AI Glasses S1: starting at ¥3,799 (~$540 USD), includes dual monochrome green displays
  • Quark AI Glasses G1: starting at ¥1,899 (~$270 USD), no displays, sharing the core technology of the ‘S1’ model

Quark AI Glasses S1 is equipped with a Qualcomm Snapdragon AR1 chipset and a low-power co-processor which drive dual monochrome green micro-OLED displays, boasting a brightness of up to 4,000 nits, according to South China Morning Post.

It also features a five-microphone array with bone conduction, 3K video recording which can be automatically upscaled to 4K, as well as low-light enhancement tech said to bring mobile phone-level imaging to smart glasses. Additionally, Quark AI Glasses S1 include hot-swappable batteries, which plug into the glasses’ stem piece.

You can see the English dubbed version of the Chinese language announcement below:

My Take

At least when it comes to on-paper specs, Quark AI Glasses S1 aren’t exactly a 1:1 rival with Meta Ray-Ban Display, even though both technically include display(s), onboard AI, and the ability to take photos and video.

While Meta Ray-Ban Display features only a single full-color display, Quark S1’s dual displays offer only monochrome green output, which limits the sort of information that can be shown.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

Quark S1 also doesn’t come with an input device like Meta Ray-Ban Display’s Neural Band, limiting it to voice and touch input. That means Quark S1 users won’t be scrolling social media, pinching and zooming content, or performing other nifty UI manipulations.

Still, that might be just enough—at least one of the world’s largest e-commerce, cloud infrastructure, and FinTech companies thinks so. Also not worth overlooking is Quark S1’s unique benefit of being tightly integrated into the Qwen AI ecosystem, as well as the Chinese payment infrastructure for fast and easy QR code-based payments with Alipay; that last one is something most Chinese smart glasses are trying to hook into, like Xiaomi’s own Ray-Ban Meta competitors.

Although the company’s Qwen AI model is available globally, I find it pretty unlikely that Alibaba will ever bring its first-gen models of Quark AI Glasses S1/G1 outside of its usual sphere of influence, or meaningfully intersect with Meta’s supported regions.

Filed Under: AR Development, News, XR Industry News

Former Magic Leap Engineers Launch No-code AR Creation Platform, Aiming to Be ‘Canva of AR’

November 7, 2025 From roadtovr

Trace, a startup founded by former Magic Leap engineers, today announced the launch of a new augmented reality creation platform the company hopes will become the “Canva of AR”.

The News

Trace says it’s targeting everyone from global brands to independent creators wanting to build location-based immersive AR content, according to a recent press statement.

Notably, the platform doesn’t require coding or advanced design expertise, allowing users to design, drop, and share interactive AR experiences across mobile devices, headsets, and AR glasses.

To boot, Trace says it’s launching the platform at a pivotal moment; Adobe has officially discontinued its Aero AR platform, and Meta’s Spark AR platform was retired in January 2025. To seize the moment, Trace is offering three free months of its premium plan to Aero and Spark users who migrate to its platform.

“Even as XR devices become more capable, the creator ecosystem is still really limited,” said Martin Smith, Trace’s CTO and co-founder. “Empowering creators to build and share their vision is such an important part of the picture, whether they’re an educator, an artist, or a Fortune 500 brand. Trace runs anywhere, scales instantly, and supports the fidelity AR deserves.”

Founded in 2021, Trace has already worked with a host of early enterprise adopters, including ESPN, T-Mobile, Qualcomm, Telefónica, Lenovo, and Deutsche Telekom, who have used Trace for marketing, visualization, employee training, and trade show installations at Mobile World Congress and the Hip Hop 50 Summit.

Trace’s creation platform is available to download for free on iPhone and iPad through the App Store, with an optional premium subscription available starting at $20 per month. Creations can currently be viewed through the Trace Viewer app available for free on the App Store and Google Play, and users can import their existing 3D assets in the Web Studio, available at studio.trace3d.app.

My Take

There’s a reason Meta and Adobe haven’t put much effort into their respective AR creation platforms lately: all-day AR glasses are still relatively far away, and the usual cadre of XR headset and glasses makers are only now stepping into smart glasses ahead of what could be a multi-year leadup to the all-day AR glasses of the future.

Still, enterprise-level AR activations on mobile and mixed reality headsets, like Apple Vision Pro and Quest 3, can turn more than a few heads, making a quick and easy no-code solution ideal for companies and independent creators looking for a selective reach.

Quest 3 (left) and Apple Vision Pro (right) | Based on images courtesy Meta, Apple

I would consider Trace’s strategy of courting former Adobe Aero and Meta Spark AR users a pretty shrewd move to grab some market share out of the gate too, which is increasingly important since the platform is the company’s sole occupation—not a side project, as it was for Adobe and Meta.

The more challenging test will be how Trace grows through that interminable leadup to widespread AR glasses, and how it weathers the competition sure to come from platform holders looking to offer similarly easy-to-use AR creation suites of their own.

While the platform’s wide target and ease of use are big pluses, I can see it more squarely fitting in the enterprise space than something regular consumers might latch onto—which is probably the ideal fit for a company founded by Magic Leap alumni, who have undoubtedly learned a sharp lesson first hand. Magic Leap’s early flirtations with prosumers in 2018 with the launch of Magic Leap One eventually forced the company to pivot to enterprise two years later.

Filed Under: AR Development, News, XR Industry News

Cambridge & Meta Study Raises the Bar for ‘Retinal Resolution’ in XR

November 5, 2025 From roadtovr

It’s been a long-held assumption that the human eye is capable of detecting a maximum of 60 pixels per degree (PPD), which is commonly called ‘retinal’ resolution. Any more than that, and you’d be wasting pixels. Now, a recent University of Cambridge and Meta Reality Labs study published in Nature maintains the upper threshold is actually much higher than previously thought.

The News

As the University of Cambridge’s news site explains, the research team measured participants’ ability to detect specific display features across a variety of scenarios: both in color and greyscale, looking at images straight on (aka ‘foveal vision’), through their peripheral vision, and from both close up and farther away.

The team used a novel sliding-display device (seen below) to precisely measure the visual resolution limits of the human eye, which seem to overturn the widely accepted benchmark of 60 PPD commonly considered as ‘retinal resolution’.

Image courtesy University of Cambridge, Meta

Essentially, PPD measures how many display pixels fall within one degree of a viewer’s visual field; it’s sometimes seen on XR headset spec sheets to better communicate exactly what the combination of field of view (FOV) and display resolution actually means to users in terms of visual sharpness.
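To make that definition concrete, average PPD is just horizontal pixel count divided by horizontal field of view. A minimal sketch of the arithmetic, using illustrative (not official) panel and FOV figures:

```python
def avg_ppd(pixels_horizontal: int, fov_degrees: float) -> float:
    """Average pixels per degree across the horizontal field of view."""
    return pixels_horizontal / fov_degrees

# Illustrative numbers only: a roughly Quest 3-like per-eye panel width
# (2064 px) over a ~110-degree horizontal FOV.
print(round(avg_ppd(2064, 110), 1))  # → 18.8
```

Note this yields an average; because lenses concentrate pixels toward the center of the view, peak PPD at dead center runs higher than this figure, which is why spec-sheet peak numbers exceed the simple division.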

According to the researchers, foveal vision can actually perceive much more than 60 PPD—more like up to 94 PPD for black-and-white patterns, 89 PPD for red-green, and 53 PPD for yellow-violet. Notably, the study had a few outliers in the participant group, with some individuals capable of perceiving as high as 120 PPD—double the upper bound for the previously assumed retinal resolution limit.

The study also holds implications for foveated rendering, which is used with eye-tracking to reduce rendering quality in an XR headset user’s peripheral vision. Foveated rendering has traditionally been optimized for black-and-white acuity; the study maintains it could further reduce bandwidth and computation by lowering resolution further for specific color channels.

So, for XR hardware engineers, the team’s findings point to a new target for true retinal resolution. For a more in-depth look, you can read the full paper in Nature.

My Take

While you’ll be hard pressed to find accurate info on each headset’s PPD—some manufacturers believe in touting pixels per inch (PPI), while others focus on raw resolution numbers—not many come close to reaching 60 PPD, let alone the revised retinal resolution suggested above.

According to data obtained from XR spec comparison site VRCompare, consumer headsets like Quest 3, Pico 4, and Bigscreen Beyond 2 tend to have a peak PPD of around 22-25, which describes the most pixel-dense area at dead center.

Meta ‘Butterscotch’ varifocal prototype (left), ‘Flamera’ passthrough prototype (right) | Image courtesy Meta

Prosumer and enterprise headsets fare slightly better, but only just. Estimating from available data, Apple Vision Pro and Samsung Galaxy XR boast a peak PPD of between 32-36.

Headsets like Shiftall MeganeX Superlight “8K” and Pimax Dream Air have around 35-40 peak PPD. On the top end of the range is Varjo, which claims its XR-4 ($8,000) enterprise headset can achieve 51 peak PPD through an aspheric lens.

Then there are prototypes like Meta’s ‘Butterscotch’ varifocal headset, shown off in 2023, which is said to sport 56 PPD (not confirmed whether average or peak).

Still, there’s a lot more to factor in to reaching ‘perfect’ visuals beyond PPD, peak or otherwise. Optical artifacts, refresh rate, subpixel layout, binocular overlap, and eye box size can all sour even the best displays. What is sure though: there is still plenty of room to grow in the spec sheet department before any manufacturer can confidently call their displays retinal.

Filed Under: AR Development, News, VR Development, XR Industry News

Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses

October 29, 2025 From roadtovr

Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1 back in 2020, the company’s sensor-packed research glasses which it used internally to train various AR-focused perception systems, in addition to releasing it in 2024 to third-party researchers across 300 labs in 27 countries.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any kind, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

To front run what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, Meta’s custom low-power co-processor for real-time perception, and compares Gen 1 and Gen 2’s abilities.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.

My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does with its various generations of Ray-Ban Meta or Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect—even if they’re substantively similar on the face of things. They’re squarely considered ‘research glasses’ by the company.

Cool, but why? Why does the company that already makes smart glasses with and without displays, and cool prototype AR glasses need to put out what’s substantively the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework that research teams will build on, but it’s also doing it at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering projects based on merit—i.e. it gets a chance to guide research without really having to interface with what will likely be substantially more than 300 teams. All of those teams will use the glasses to solve problems in how humans can more fluidly interact with an AI system that can see, hear, and know a heck of a lot more about your surroundings than you might at any given moment.

AI is also growing faster than supply chains can keep up, which I think more than necessitates an artisanal pair of smart glasses so teams can get to grips with what will drive the future of AR glasses—the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of way back in 2013 when it co-released the HTC First, which at the time was colloquially called the ‘Facebook phone’.
The device was a flop, partly because the hardware was lackluster, but mostly (and I think I’m not alone in saying so) because people didn’t want a Facebook phone in their pockets at any price when the ecosystem had so many other, clearly better choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

Filed Under: AR Development, News, XR Industry News

