
VRSUN

Hot Virtual Reality News


AR Development

Meta Pauses International Release of Meta Ray-Ban Display Glasses

January 7, 2026 From roadtovr

Meta Ray-Ban Display glasses seem to be selling too well, as the company announced it’s delaying the international rollout of its first display-clad smart glasses.

The News

The $800 smart glasses, which include a single full-color display embedded in the right lens, initially launched in the US back in September, and Meta said it hoped to bring them to a number of additional regions in early 2026.

Now, the company says in a blog post it’s decided to “pause” the planned expansion to the UK, France, Italy and Canada, citing “unprecedented demand and limited inventory.”

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

The company characterizes stock as “extremely limited,” noting that it’s seen an “overwhelming amount of interest, and as a result, product waitlists now extend well into 2026.”

Meta says it will continue to focus on fulfilling orders in the US while they “re-evaluate [the] approach to international availability.”

My Take

I was looking forward to getting my hands on a pair of Meta Ray-Ban Display glasses here in Italy, one of the regions currently on “pause”—though my Corpo-to-English translator says I probably shouldn’t hold my breath.

While Meta Ray-Ban Display can’t do everything promised just yet—and doesn’t actually have an app store—the device can do a fair number of things I was hoping to test out to see whether it would fit into my daily life.

After all, it can do everything the audio-only Ray-Ban Meta glasses can do, in addition to serving up a viewfinder for taking photos and video, letting you see and respond to messages via WhatsApp, Facebook Messenger, and Instagram, and giving you turn-by-turn walking directions in supported cities.

Turn-by-turn Directions in Meta Ray-Ban Display | Image courtesy Meta

Months after launch, Meta says it’s also now pushed an update that includes a teleprompter, the previously teased EMG handwriting, as well as more cities for pedestrian navigation.

Still, the pause makes sense from a manufacturing perspective. Meta needs to go slow and deliberate with Meta Ray-Ban Display, if only because the device has likely been heavily subsidized so it wouldn’t be eye-wateringly expensive out of the gate; the company is no doubt eating a fairly high bill of materials, not least due to waveguide wastage rates. No app store also means no app revenue, making the first-gen decidedly more of a large beta test than anything.

So, right now it seems like Meta is deliberately going slow to make sure use cases, distribution, and supply chain are all in place before really cashing in on the second gen—maybe following Quest’s playbook. In 2019, the company released the original Quest only to toss out Quest 2 a year later, which became the company’s best-selling XR device to date, while leaving everyone who bought the first-gen to upgrade only a year after buying in.

Filed Under: AR Development, ar industry, News, XR Industry News

Alibaba Launches Smart Glasses to Rival Meta Ray-Ban Display

December 2, 2025 From roadtovr

Alibaba released a pair of display-clad smart glasses, ostensibly looking to go toe-to-toe with Meta Ray-Ban Display, which launched in the US for $800 back in September.

The News

China’s Alibaba, one of the world’s largest retailers and e-commerce companies, just released its first smart glasses, called Quark AI Glasses, which run the company’s own Qwen AI model.

Image courtesy Reuters

Seemingly China-only for now, Quark AI Glasses are being offered in two versions across Chinese online and brick-and-mortar retailers:

  • Quark AI Glasses S1: starting at ¥3,799 (~$540 USD), includes dual monochrome green displays
  • Quark AI Glasses G1: starting at ¥1,899 (~$270 USD), no displays, but sharing the core technology of the ‘S1’ model

Quark AI Glasses S1 is equipped with a Qualcomm Snapdragon AR1 chipset and a low-power co-processor which drive dual monochrome green micro-OLED displays, boasting a brightness of up to 4,000 nits, according to South China Morning Post.

It also features a five-microphone array with bone conduction, 3K video recording which can be automatically upscaled to 4K, as well as low-light enhancement tech said to bring mobile phone-level imaging to smart glasses. Additionally, Quark AI Glasses S1 include hot-swappable batteries, which plug into the glasses’ stem piece.

You can see the English dubbed version of the Chinese language announcement below:

My Take

At least when it comes to on-paper specs, Quark AI Glasses S1 aren’t exactly a 1:1 rival with Meta Ray-Ban Display, even though both technically include display(s), onboard AI, and the ability to take photos and video.

While Meta Ray-Ban Display only features a single full-color display, Quark S1’s dual displays only offer monochrome green output, which limits the sort of information that can be seen.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

Quark S1 also doesn’t come with an input device, like Meta Ray-Ban Display’s Neural Band, limiting it to voice and touch input only. That means Quark S1 users won’t be scrolling social media, pinching and zooming content, or doing any other nifty UI manipulation.

Still, that might be just enough—at least one of the world’s largest e-commerce, cloud infrastructure, and FinTech companies thinks so. Also not worth overlooking is Quark S1’s unique benefit of being tightly integrated into the Qwen AI ecosystem, as well as the Chinese payment infrastructure for fast and easy QR code-based payments with Alipay; that last one is something most Chinese smart glasses are trying to hook into, like Xiaomi’s own Ray-Ban Meta competitors.

Although the company’s Qwen AI model is available globally, I find it pretty unlikely that Alibaba will ever bring its first-gen models of Quark AI Glasses S1/G1 outside of its usual sphere of influence, or meaningfully intersect with Meta’s supported regions.

Filed Under: AR Development, News, XR Industry News

Former Magic Leap Engineers Launch No-code AR Creation Platform, Aiming to Be ‘Canva of AR’

November 7, 2025 From roadtovr

Trace, a startup founded by former Magic Leap engineers, today announced the launch of a new augmented reality creation platform the company hopes will become the “Canva of AR”.

The News

Trace says it’s targeting everyone from global brands to independent creators wanting to build location-based immersive AR content, according to a recent press statement.

Notably, the platform doesn’t require coding or advanced design expertise, allowing users to design, drop, and share interactive AR experiences across mobile devices, headsets, and AR glasses.

To boot, Trace says it’s launching the platform at a pivotal moment; Adobe has officially discontinued its Aero AR platform, and Meta’s Spark AR platform was retired in January 2025. To seize the moment, Trace is offering three free months of its premium plan to Aero and Spark users who migrate to its platform.

“Even as XR devices become more capable, the creator ecosystem is still really limited,” said Martin Smith, Trace’s CTO and co-founder. “Empowering creators to build and share their vision is such an important part of the picture, whether they’re an educator, an artist, or a Fortune 500 brand. Trace runs anywhere, scales instantly, and supports the fidelity AR deserves.”

Founded in 2021, Trace has already worked with a host of early enterprise adopters, including ESPN, T-Mobile, Qualcomm, Telefónica, Lenovo, and Deutsche Telekom, who have used Trace for marketing, visualization, employee training, and trade show installations at Mobile World Congress and the Hip Hop 50 Summit.

Trace’s creation platform is available to download for free on iPhone and iPad through the App Store, with an optional premium subscription available starting at $20 per month. Creations can currently be viewed through the Trace Viewer app available for free on the App Store and Google Play, and users can import their existing 3D assets in the Web Studio, available at studio.trace3d.app.

My Take

There’s a reason Meta and Adobe haven’t put a massive amount of effort into their respective AR creation platforms lately: all-day AR glasses are still relatively far away, and the usual cadre of XR headset and glasses makers are only now stepping into smart glasses ahead of what could be a multi-year leadup to those devices.

Still, enterprise-level AR activations on mobile and mixed reality headsets, like Apple Vision Pro and Quest 3, can turn more than a few heads, making a quick and easy no-code solution ideal for companies and independent creators looking for targeted reach.

Quest 3 (left) and Apple Vision Pro (right) | Based on images courtesy Meta, Apple

I would consider Trace’s strategy of offering former Adobe Aero and Meta Spark users free months of its premium plan a pretty shrewd move to grab some market share out of the gate too, which is increasingly important since the platform is the company’s sole occupation—and not a side project like it was for Adobe and Meta.

The more challenging test will be seeing how Trace grows through that interminable leadup to widespread AR glasses though, and how it weathers the competition sure to come from platform holders looking to offer similarly easy-to-use AR creation suites of the future.

While the platform’s wide target and ease of use are big pluses, I can see it more squarely fitting in the enterprise space than being something regular consumers might latch onto—which is probably the ideal fit for a company founded by Magic Leap alumni, who have undoubtedly learned a sharp lesson firsthand: Magic Leap’s early flirtation with prosumers in 2018, with the launch of Magic Leap One, eventually forced the company to pivot to enterprise two years later.

Filed Under: AR Development, News, XR Industry News

Cambridge & Meta Study Raises the Bar for ‘Retinal Resolution’ in XR

November 5, 2025 From roadtovr

It’s been a long-held assumption that the human eye is capable of detecting a maximum of 60 pixels per degree (PPD), which is commonly called ‘retinal’ resolution. Any more than that, and you’d be wasting pixels. Now, a recent University of Cambridge and Meta Reality Labs study published in Nature maintains the upper threshold is actually much higher than previously thought.

The News

As the University of Cambridge’s news site explains, the research team measured participants’ ability to detect specific display features across a variety of scenarios: both in color and greyscale, looking at images straight on (aka ‘foveal vision’), through their peripheral vision, and from both close up and farther away.

The team used a novel sliding-display device (seen below) to precisely measure the visual resolution limits of the human eye, which seem to overturn the widely accepted benchmark of 60 PPD commonly considered as ‘retinal resolution’.

Image courtesy University of Cambridge, Meta

Essentially, PPD measures how many display pixels fall within one degree of a viewer’s visual field; it’s sometimes seen on XR headset spec sheets to better communicate exactly what the combination of field of view (FOV) and display resolution actually means to users in terms of visual sharpness.
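For a rough sense of the numbers, here’s a minimal back-of-the-envelope sketch in Python. The Quest 3-style figures are approximate public numbers, and real headsets vary pixel density across the lens, so treat the output as ballpark values rather than spec-sheet facts.

```python
# Back-of-the-envelope PPD sketch (illustrative only; approximate figures, not
# official specs). Average PPD is simply per-eye horizontal pixels divided by
# horizontal FOV in degrees; peak PPD at the lens center is typically a bit
# higher, since headset optics concentrate pixels toward the middle.

def average_ppd(pixels_per_eye_horizontal: int, horizontal_fov_deg: float) -> float:
    """Average pixels per degree across the horizontal field of view."""
    return pixels_per_eye_horizontal / horizontal_fov_deg

# Roughly Quest 3-class optics: ~2064 px per eye across ~110 degrees horizontally
print(round(average_ppd(2064, 110), 1))  # ~18.8 PPD on average; the center peak lands in the low-to-mid 20s

# What a revised 94 PPD foveal limit would demand of the same field of view
print(round(94 * 110))  # ~10,340 horizontal pixels per eye
```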

According to the researchers, foveal vision can actually perceive much more than 60 PPD—more like up to 94 PPD for black-and-white patterns, 89 PPD for red-green, and 53 PPD for yellow-violet. Notably, the study had a few outliers in the participant group, with some individuals capable of perceiving as high as 120 PPD—double the upper bound for the previously assumed retinal resolution limit.

The study also holds implications for foveated rendering, which is used with eye-tracking to reduce rendering quality in an XR headset user’s peripheral vision. Since foveated rendering has traditionally been optimized around black-and-white acuity, the researchers maintain it could further reduce bandwidth and computation by lowering resolution even more for specific color channels.
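The paper doesn’t prescribe an implementation, but here is a purely hypothetical sketch of what per-channel foveation could look like in practice; the eccentricity falloff and the headset PPD figure below are assumptions for illustration, not values from the study.

```python
# Hypothetical sketch only: maps the study's foveal acuity limits onto a
# per-channel shading-rate heuristic. The falloff model (halving perceivable
# detail every 10 degrees of eccentricity) and the 25 PPD headset figure are
# illustrative assumptions, not numbers from the paper.

def target_shading_fraction(eccentricity_deg: float, channel: str) -> float:
    """Fraction of full render resolution worth spending on a given channel
    at a given angular distance from the gaze point."""
    foveal_ppd = {"luminance": 94, "red_green": 89, "yellow_violet": 53}[channel]
    perceivable_ppd = foveal_ppd * 0.5 ** (eccentricity_deg / 10)  # assumed falloff
    display_peak_ppd = 25  # roughly a current consumer headset at the lens center
    return min(1.0, perceivable_ppd / display_peak_ppd)

print(target_shading_fraction(0, "luminance"))                 # 1.0 -> full resolution at the fovea
print(round(target_shading_fraction(20, "yellow_violet"), 2))  # ~0.53 -> chroma can be rendered much coarser
```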

So, for XR hardware engineers, the team’s findings point to a new target for true retinal resolution. For a more in-depth look, you can read the full paper in Nature.

My Take

While you’ll be hard pressed to find accurate info on each headset’s PPD—some manufacturers believe in touting pixels per inch (PPI), while others focus on raw resolution numbers—not many come close to reaching 60 PPD, let alone the revised retinal resolution suggested above.

According to data obtained from XR spec comparison site VRCompare, consumer headsets like Quest 3, Pico 4, and Bigscreen Beyond 2 tend to have a peak PPD of around 22-25, which describes the most pixel-dense area at dead center.

Meta ‘Butterscotch’ varifocal prototype (left), ‘Flamera’ passthrough prototype (right) | Image courtesy Meta

Prosumer and enterprise headsets fare slightly better, but only just. Estimating from available data, Apple Vision Pro and Samsung Galaxy XR boast a peak PPD of between 32-36.

Headsets like Shiftall MeganeX Superlight “8K” and Pimax Dream Air have around 35-40 peak PPD. On the top end of the range is Varjo, which claims its XR-4 ($8,000) enterprise headset can achieve 51 peak PPD through an aspheric lens.

Then there are prototypes like Meta’s ‘Butterscotch’ varifocal headset, shown off in 2023, which is said to sport 56 PPD (not confirmed whether average or peak).

Still, there’s a lot more to factor in when it comes to reaching ‘perfect’ visuals beyond PPD, peak or otherwise. Optical artifacts, refresh rate, subpixel layout, binocular overlap, and eye box size can all sour even the best displays. What is sure though: there is still plenty of room to grow in the spec sheet department before any manufacturer can confidently call their displays retinal.

Filed Under: AR Development, News, VR Development, XR Industry News

Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses

October 29, 2025 From roadtovr

Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1 back in 2020; the company used the sensor-packed research glasses internally to train various AR-focused perception systems before releasing them in 2024 to third-party researchers across 300 labs in 27 countries.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any type, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

To front-run what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, Meta’s custom low-power co-processor for real-time perception, and compares Gen 1 and Gen 2’s abilities.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.

My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does with its various generations of Ray-Ban Meta or Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect—even if they’re substantively similar on the face of things. They’re squarely considered ‘research glasses’ by the company.

Cool, but why? Why does a company that already makes smart glasses with and without displays, plus cool prototype AR glasses, need to put out what’s substantively the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework that research teams will build on, but it’s also doing it at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering for projects based on merit. In other words, it gets a chance to guide research without really having to interface with what will likely be substantially more than 300 teams, all of whom will use the glasses to solve problems in how humans can more fluidly interact with an AI system that can see, hear, and know a heck of a lot more about your surroundings than you might at any given moment.

AI is also growing faster than supply chains can keep up, which I think more than necessitates an artisanal pair of smart glasses so teams can get to grips with what will drive the future of AR glasses—the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of way back in 2013 when it co-released the HTC First, which at the time was colloquially called the ‘Facebook phone’.
The device was a flop, partly because the hardware was lackluster, but mostly (and I think I’m not alone in saying so) because people didn’t want a Facebook phone in their pockets at any price when the ecosystem had so many other (clearly better) choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

Filed Under: AR Development, News, XR Industry News

Researchers Propose Novel E-Ink XR Display with Resolution Far Beyond Current Headsets

October 27, 2025 From roadtovr

A group of Sweden-based researchers proposed a novel e-ink display solution that could make way for super compact, retina-level VR headsets and AR glasses in the future.

The News

Traditional emissive displays are shrinking, but they face physical limits; smaller pixels tend to emit less uniformly and provide less intense light, which is especially noticeable in near-eye applications like virtual and augmented reality headsets.

In a recent research paper published in Nature, a team of researchers presents what it calls a “retinal e-ink display,” which hopes to offer a solution quite unlike the displays seen in modern VR headsets today, which are increasingly adopting micro-OLEDs to reduce size and weight.

The paper was authored by researchers affiliated with Uppsala University, Umeå University, University of Gothenburg, and Chalmers University of Technology in Gothenburg: Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Österlund, Giovanni Volpe, and Kunli Xiong.

While conventional e-paper has struggled to reach the resolution necessary for realistic, high-fidelity images, the team proposes a new form of e-paper featuring electrically tunable “metapixels” only about 560 nanometres wide.

This promises a pixel density of over 25,000 pixels per inch (PPI)—an order of magnitude denser than displays currently used in headsets like Samsung Galaxy XR or Apple Vision Pro. Those headsets have a PPI of around 4,000.

Image courtesy Nature

As the paper describes it, each metapixel is made from tungsten trioxide (WO₃) nanodisks that undergo a reversible insulator-to-metal transition when electrically reduced. This process dynamically changes the material’s refractive index and optical absorption, allowing nanoscale control of brightness and color contrast.

In effect, when lit by ambient light, the display can create bright, saturated colors from structures far thinner than a human hair, as well as deep blacks with reported optical contrast ratios around 50%—a reflective equivalent of high-dynamic range (HDR).

And the team says it could be useful in both AR and VR displays. The figure below shows a conceptual optical stack for both applications, with Figure A representing a VR display, and Figure B showing an AR display.

Image courtesy Nature

Still, there are some noted drawbacks. Beyond sheer resolution, the display delivers full-color video at “more than 25 Hz,” which is significantly lower than what VR users need for comfortable viewing. In addition to a relatively low refresh rate, researchers note the retina e-paper requires further optimization in color gamut, operational stability and lifetime.

“Lowering the operating voltage and exploring alternative electrolytes represent promising engineering routes to extend device durability and reduce energy consumption,” the paper explains. “Moreover, its ultra-high resolution also necessitates the development of ultra-high-resolution TFT arrays for independent pixel control, which will enable fully addressable, large-area displays and is therefore a critical direction for future research and technological development.”

And while the e-paper display itself is remarkably low-powered, packing in the graphical compute to put those metapixels to work will also be a challenge. It’s a good problem to have, but a problem nonetheless.

My Take

At least as the paper describes it, the underlying tech could produce XR displays approaching a size and pixel density we’ve never seen before. And reaching the limits of human visual perception is one of those holy grail moments I’ve been waiting for.

Getting that refresh rate up well beyond 25 Hz is going to be extremely important though. As the paper describes it, 25 Hz is good for video playback, but driving an immersive VR environment requires at least 60 Hz refresh to be minimally comfortable. 72 Hz is better, and 90 Hz is the standard nowadays.

I’m also curious to see the e-paper display stacked up against lower resolution micro-OLED contemporaries, if only to see how that proposed ambient lighting can achieve HDR. I have a hard time wrapping my head around it. Essentially, the display’s metapixels absorb and scatter ambient light, much like Vantablack does—probably something that needs to be truly seen in person to be believed.

Healthy skepticism aside, I find it truly amazing we’ve even arrived at the conversation in the first place: we’re at the point where XR displays could recreate reality, at least as far as your eyes are concerned.

Filed Under: AR Development, News, VR Development, XR Industry News

Former Oculus Execs’ AI Smart Glasses Startup ‘Sesame’ Raises $250M Series B Funding

October 24, 2025 From roadtovr

Sesame, an AI and smart glasses startup founded by former Oculus execs, raised $250 million in Series B funding, which the company hopes will accelerate its voice-based AI.

The News

As first reported by TechCrunch, lead investors in Sesame’s Series B include Spark Capital and Sequoia Capital, bringing the company’s overall funding to $307.6 million, according to Crunchbase data.

Exiting stealth earlier this year, Sesame was founded by Oculus co-founder and former CEO Brendan Iribe, former Oculus hardware architect Ryan Brown, and Ankit Kumar, former CTO of AR startup Ubiquity6. Additionally, Oculus co-founder Nate Mitchell announced in June he was joining Sesame as Chief Product Officer, which he noted was to “help bring computers to life.”

Image courtesy Sesame

Sesame is currently working on an AI assistant along with a pair of lightweight smart glasses. Its AI assistant aims to be “the perfect AI conversational partner,” Sequoia Capital says in a recent post.

“Sesame’s vision is to build an ambient interface that is always available and has contextual awareness of the world around you,” Sequoia says. “To achieve that, Sesame is creating their own lightweight, fashion-forward AI-enabled glasses designed to be worn all day. They’re intentionally crafted—fit for everyday life.”

Sesame is currently taking signups for beta access to its AI assistants Miles and Maya in an iOS app, and also has a public preview showcasing a ‘call’ function that allows you to speak with the chatbots.

My Take

Love it or hate it, AI is going to be baked into everything in the future, as contextually aware systems hope to bridge the gap between user input and the expectation of timely and intelligent output. That’s increasingly important when the hardware doesn’t include a display, requiring the user to interface almost entirely by voice.

Some things to watch out for: if the company does commercialize a pair of smart glasses to champion its AI assistant, it will be competing for some pretty exclusive real estate that companies like Meta, Google, Samsung, and Apple (still unconfirmed) are currently gunning for. That puts Sesame at somewhat of a disadvantage if it hopes to go it alone, but not if it’s hoping for a timely exit into the coming wave of smart glasses by being acquired by any of the above.

There’s also some pretty worrying precedent in the rear-view mirror: Humane’s AI Pin and the AI Friend necklace, for example, were both publicly lambasted for essentially being hardware that could just as easily have been apps on your smartphone.

Granted, Sesame hasn’t shown off its smart glasses hardware yet, so there’s no telling what the company hopes to bring to the table beyond an easy-to-wear pair of off-ear headphones for all-day AI stuff—that, to me, would be the worst-case scenario, as Meta refines its own smart glasses in partnership with EssilorLuxottica, Google releases Android XR frames with Gentle Monster and Warby Parker, Samsung readies its own Android XR glasses, and Apple does… something. We don’t know yet.

Whatever the case, I’m looking forward to it, if only based on the company’s combined experience in XR, which I’d argue any startup would envy as the race to build the next big computing platform truly takes off.

Filed Under: AR Development, AR Investment, News, XR Industry News

Amazon is Developing Smart Glasses to Allow Delivery Drivers to Work Hands-free

October 23, 2025 From roadtovr

Amazon announced it’s developing smart glasses for its delivery drivers, which include a display for real-time navigation and delivery instructions.

The News

Amazon announced the news in a blog post, which partially confirms a recent report from The Information, which alleged that Amazon is developing smart glasses both for its delivery drivers and consumers.

The report, released in September, maintained that Amazon’s smart glasses for delivery drivers will be bulkier and less sleek than the consumer model. Codenamed ‘Jayhawk’, the delivery-focused smart glasses are expected to roll out as soon as Q2 2026, with an initial production run of 100,000 units.

Image courtesy Amazon

Amazon says the smart glasses were designed and optimized with input from hundreds of delivery drivers, and include the ability to identify hazards, scan packages, capture proof of delivery, and navigate by serving up turn-by-turn walking directions.

The company hasn’t confirmed whether the glasses’ monochrome green heads-up display is monoscopic or stereoscopic; however, images suggest it features a single waveguide in the right lens.

Moreover, the glasses aren’t meant to be used while driving, as Amazon says they “automatically activate” when the driver parks their vehicle. Only afterwards does the driver receive instructions, ostensibly to reduce the risk of driver distraction.

In addition to the glasses, the system also features what Amazon calls “a small controller worn in the delivery vest that contains operational controls, a swappable battery ensuring all-day use, and a dedicated emergency button to reach emergency services along their routes if needed.”

Additionally, Amazon says the glasses support prescription lenses along with transitional lenses that automatically adjust to light.

As for the reported consumer version, it’s possible Amazon may be looking to evolve its current line of ‘Echo Frames’ glasses. First introduced in 2019, Echo Frames support AI voice control, music playback, calls, and Alexa smart home control, although they notably lack any sort of camera or display.

My Take

I think Amazon has a good opportunity to dogfood (aka, use its own technology) here on a pretty large scale—probably much larger than Meta or Google could initially with their first generation of smart glasses with displays.

That said, gains made in enterprise smart glasses can be difficult to translate to consumer products, which will necessarily include more functions and apps, and likely require more articulated input—all of the things that can make or break any consumer product.

Third-gen Echo Frames | Image courtesy Amazon

Amazon’s core strength, though, has generally been less about high-end innovation and more about creating cheap, reliable hardware that feeds into recurring revenue streams: Kindle, Fire TV, Alexa products, etc. Essentially, if Amazon can’t immediately figure out a way to make consumer smart glasses feed into its existing ecosystems, I wouldn’t expect to see the company put its full weight behind the device, at least not initially.

After the 2014 failure of Fire Phone, Amazon may still be gun-shy about going head-first into a segment where it has near-zero experience. And I really don’t count Echo Frames, because they’re primarily just Bluetooth headphones with Alexa support baked in. Still, real smart glasses with cameras and displays represent a treasure trove of data that the company may not be so keen to pass up.

Using object recognition to peep into your home or otherwise follow you around could allow Amazon to better target personalized suggestions, figure out brand preferences, and even track users as they shop at physical stores. Whatever the case, I bet the company will give it a go, if only to occupy the top slot when you search “smart glasses” on Amazon.

Filed Under: AR Development, News, XR Industry News

Meta Ray-Ban Display Repairability is Predictably Bad, But Less Than You Might Think

October 9, 2025 From roadtovr

iFixit got their hands on a pair of Meta Ray-Ban Display smart glasses, so we finally get to see what’s inside. Is it repairable? Not really. But if you can somehow find replacement parts, you could at least potentially swap out the battery.

The News

Meta launched the $800 smart glasses in the US late last month, marking the company’s first pair with a heads-up display.

Serving up a monocular display, Meta Ray-Ban Display allows for basic app interaction beyond the standard stuff seen (or rather ‘heard’) in the company’s audio-only Ray-Ban Meta and Oakley Meta glasses. It can do things like let you view and respond to messages, get turn-by-turn walking directions, and even use the display as a viewfinder for photos and video.

And iFixit shows off in their latest video that cracking into the glasses and attempting repairs is pretty fiddly, but not entirely impossible.

Meta Ray-Ban Display’s internal battery | Image courtesy iFixit

The first thing you’d probably eventually want to do is replace the battery, which requires splitting the right arm down a glued seam—a common theme with the entire device. In getting to the 960 mWh internal battery, which is slightly larger than the one seen in the Oakley Meta HSTN, you’ll be sacrificing the device’s IPX4 splash resistance rating.

And the work is fiddly, but iFixit manages to go all the way down to the dual speakers, motherboard, Snapdragon AR1 chipset, and liquid crystal on silicon (LCoS) light engine, the latter of which was captured with a CT scanner to show off just how micro Meta has managed to get its most delicate part.

Granted, this is a teardown and not a repair guide as such. All of the components are custom, and replacement parts aren’t available yet. You would also need a few specialized tools and an appetite for the risk of destroying a pretty hard-to-come-by device.

For more, make sure to check out iFixit’s full article, which includes images and detailed info on each component. You can also see the teardown in action in the full nine-minute video below.

My Take

Meta isn’t really thinking deeply about repairability when it comes to smart glasses right now, which isn’t exactly shocking. Like earbuds, smart glasses are all about miniaturization to hit an all-day wearable form factor, making their plastic and glue-coated exteriors a pretty clear necessity in the near term.

Another big factor: the company is probably banking on the fact that prosumers willing to shell out $800 this year will likely be happy to do the same when Gen 2 eventually arrives. That could be in two years, but I’m betting less if the device performs well enough in the market. After all, Meta sold Quest 2 in 2020 just one year after releasing the original Quest, so I don’t see why they wouldn’t do the same here.

That said, I don’t think we’ll see any real degree of repairability in smart glasses until we get to the sort of sales volumes currently seen in smartphones. And that’s just for a baseline of readily available replacement parts, third-party or otherwise.

So while I definitely want a pair of smart glasses (and eventually AR glasses) that look indistinguishable from standard frames, that also kind of means I have to be okay with eventually throwing away a perfectly cromulent pair of specs just because I don’t have the courage to open it up, or know anyone who does.

Filed Under: AR Development, News, XR Industry News

Why Ray-Ban Meta Glasses Failed on Stage at Connect

September 19, 2025 From roadtovr

Meta CEO Mark Zuckerberg’s keynote at this year’s Connect wasn’t exactly smooth—especially if you count the two big hiccups that sidetracked live demos for both the latest Ray-Ban Meta smart glasses and the new Meta Ray-Ban Display glasses.

Ray-Ban Meta (Gen 2) smart glasses essentially bring the same benefits as Oakley Meta HSTN, which launched back in July: longer battery life and better video capture.

One of the biggest features though is its access to Meta’s large language model (LLM), Meta AI, which pops up when you say “Hey Meta”, letting you ask questions about anything, from the weather to what the glasses’ camera can actually see.

As part of the on-stage demo of its Live AI feature, which runs continuously instead of sporadically, food influencer Jack Mancuso attempted to create a Korean-inspired steak sauce using the AI as a guide.

And it didn’t go well, as Mancuso struggled to get the Live AI back on track after missing a key step in the sauce’s preparation. You can see the full cringe-inducing glory for yourself, timestamped below:

And the reason behind it is… well, just dumb. Jake Steineman, Developer Advocate at Meta’s Reality Labs, explained what happened in an X post:

So here’s the story behind why yesterdays live #metaconnect demo failed – when the chef said “Hey Meta start Live AI” it activated everyone’s Meta AI in the room at once and effectively DDOS’d our servers 🤣

That’s what we get for doing it live!

— Jake Steinerman 🔜 Meta Connect (@jasteinerman) September 19, 2025

Unfortunate, yes. But also pretty foreseeable, especially considering the AI ‘wake word’ gaffe has been a thing since the existence of Google Nest (ex-Home) and Amazon Alexa.

Anyone with one of those friendly tabletop pucks has probably experienced what happens when a TV advert includes “Hey Google” or “Hey Alexa,” unwittingly commanding every device in earshot to tell them the weather, or even order items online.

What’s more surprising though: there were enough people using a Meta product in earshot to screw with its servers. Meta AI isn’t like Google Gemini or Apple’s Siri—it doesn’t have OS-level access to smartphones. The only devices where it’s on by default are the company’s Ray-Ban Meta and Oakley Meta glasses (and Quest, if you opt in), conjuring the image of a room full of confused, bespectacled Meta employees waiting out of shot.

As for the Meta Ray-Ban Display glasses, which the company is launching in the US for $799 on September 30th, the hiccup was much more forgivable. Zuckerberg was attempting to take a live video call from company CTO Andrew Bosworth, who after several missed attempts, came on stage to do an ad hoc simulation of what it might have been like.

Those sorts of live product events are notoriously bad for both Wi-Fi and mobile connections, simply because of how many people are in the room, often with multiple devices per-person. Still, Zuckerberg didn’t pull a Steve Jobs, where the former Apple CEO demanded everyone in attendance at iPhone 4’s June 2010 unveiling turn off their Wi-Fi after an on-stage connection flub.

You can catch the Meta Ray-Ban Display demo below (obligatory cringe warning):

Filed Under: AR Development, News, XR Industry News
