
VRSUN

Hot Virtual Reality News

Meta Ray-Ban Display Repairability is Predictably Bad, But Less So Than You Might Think

October 9, 2025 From roadtovr

iFixit got their hands on a pair of Meta Ray-Ban Display smart glasses, so we finally get to see what’s inside. Is it repairable? Not really. But if you can somehow find replacement parts, you could at least potentially swap out the battery.

The News

Meta launched the $800 smart glasses in the US late last month, marking the company’s first pair with a heads-up display.

Serving up a monocular display, Meta Ray-Ban Display allows for basic app interaction beyond the standard stuff seen (or rather ‘heard’) in its audio-only Ray-Ban Meta and Oakley Meta glasses. It can do things like let you view and respond to messages, get turn-by-turn walking directions, and even use the display as a viewfinder for photos and video.

And in its latest video, iFixit shows that cracking into the glasses and attempting repairs is pretty fiddly, but not entirely impossible.

Meta Ray-Ban Display’s internal battery | Image courtesy iFixit

The first thing you’d probably eventually want to do is replace the battery, which requires splitting the right arm down a glued seam—a common theme with the entire device. To get to the 960 mWh internal battery, which is slightly larger than the one in the Oakley Meta HSTN, you’ll have to sacrifice the device’s IPX4 splash resistance rating.

And the work is fiddly, but iFixit manages to go all the way down to the dual speakers, motherboard, Snapdragon AR1 chipset, and liquid crystal on silicon (LCoS) light engine, the latter of which was captured with a CT scanner to show off just how micro Meta has managed to get its most delicate part.

Granted, this is a teardown and not a repair guide as such. All of the components are custom, and replacement parts aren’t available yet. You would also need a few specialized tools and an appetite for the risk of destroying a pretty hard-to-come-by device.

For more, make sure to check out iFixit’s full article, which includes images and detailed info on each component. You can also see the teardown in action in the full nine-minute video below.

My Take

Meta isn’t really thinking deeply about repairability when it comes to smart glasses right now, which isn’t exactly shocking. Like earbuds, smart glasses are all about miniaturization to hit an all-day wearable form factor, making their plastic and glue-coated exteriors a pretty clear necessity in the near term.

Another big factor: the company is probably banking on the fact that prosumers willing to shell out $800 this year will likely be happy to do the same when Gen 2 eventually arrives. That could be in two years, but I’m betting less if the device performs well enough in the market. After all, Meta sold Quest 2 in 2020 roughly a year and a half after releasing the original Quest, so I don’t see why they wouldn’t do the same here.

That said, I don’t think we’ll see any real degree of repairability in smart glasses until we get to the sort of sales volumes currently seen in smartphones. And that’s just for a baseline of readily available replacement parts, third-party or otherwise.

So while I definitely want a pair of smart glasses (and eventually AR glasses) that look indistinguishable from standard frames, that also kind of means I have to be okay with eventually throwing away a perfectly cromulent pair of specs just because I don’t have the courage to open it up, or know anyone who does.

Filed Under: AR Development, News, XR Industry News

Apple Reportedly Shelves Cheaper & Lighter Vision Pro for Smart Glasses to Rival Meta

October 2, 2025 From roadtovr

Apple seems to be releasing its next Vision Pro with an M5 chip soon, but according to a new report from Bloomberg’s Mark Gurman, the company may have shelved plans for a follow-up headset that’s cheaper and lighter in favor of releasing smart glasses set to compete with Meta.

The News

According to previous rumors, Apple was developing a Vision Pro follow-up more squarely aimed at consumers—often referred to as ‘Vision Air’ (codenamed ‘N100’). Analyst Ming-Chi Kuo reported in September that Vision Air was expected to be “over 40% lighter and more than 50% cheaper” than the current Vision Pro, putting the device at less than 400g and less than $1,750.

Notably, a hardware refresh of Vision Pro featuring Apple’s latest M5 chip is likely releasing soon, according to recent FCC filings, although its 600g weight and $3,500 price tag are likely to remain the same.

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

Now, Bloomberg’s Mark Gurman maintains Apple is putting Vision Air on hold, citing internal sources. Instead, Apple is reportedly shifting resources to accelerate development of smart glasses, which aim to take on Ray-Ban Meta and the new Meta Ray-Ban Display glasses.

Gurman reports that Apple is pursuing at least two types of smart glasses: an audio-only pair codenamed ‘N50’, which is meant to pair with iPhone and ostensibly compete with Meta’s fleet of $300+ smart glasses built in partnership with EssilorLuxottica. Apple is reportedly set to preview N50 as soon as 2026, with a release by 2027.

A second pair is said to contain a display, similar to Meta Ray-Ban Display, which launched late last month in the US for $800. Apple’s display smart glasses were previously expected to release in 2028; however, the company is reportedly fast-tracking the device’s development.

Both versions are said to emphasize voice interaction and AI integration, and offer multiple styles and a new custom chip to power the devices.

My Take

The shifted development timeline feels a little out of character for Apple, which typically enters segments after a technology is generally proven. Apple didn’t invent the smartphone, smart watch, laptop, or desktop, although it owns a significant slice of each in 2025 thanks to its unique brand of ‘ecosystem stickiness’ and inherent cool factor.

The entrance of Meta Ray-Ban Display, however, marks an important inflection point in the race to own the next big computing paradigm. Smart glasses with displays aren’t the end destination, but they are an important stepping stone along the way to all-day augmented reality. And a strong foothold in AI is integral.

“Let’s wait and see what Apple does” has been a pretty common thought process when it comes to emergent tech—something people have been saying for years in VR. The big hope was Apple would eventually swoop in, redefine VR for the masses, and make the best version of it with their first device.

Vision Pro (M2) | Image captured by Road to VR

But Vision Pro isn’t the first-gen iPhone (2007). While a lighter, cheaper version could address pain points, it would still have a hard time not drawing direct comparisons to Meta devices 5-10 times cheaper.

But AI isn’t one of those technologies you can afford to sleep on, if only from a user data collection perspective. In contrast to its biggest competitors, Apple has notably lagged in AI development, having only released its Apple Intelligence platform in late 2024 to counter Google Gemini, Microsoft Copilot, and OpenAI’s ChatGPT. Apple needs to play catchup.

While Apple is expected to release a rebuilt Siri this year to power its hardware ecosystem, smart glasses are the tip of the AI spear. Even without displays, wearing an always-on device represents a treasure trove of data and user behavior that companies will use to improve services, figure out what works and what doesn’t, and ultimately build the next big platform that companies have been salivating over: all-day AR glasses.

That’s the real battle here. Not only does Apple need smart glasses to compete in the next computing paradigm, but they also need them to bridge a very real component price gap. Economies of scale will eventually bring fiddly components down in price, like the extremely expensive and difficult to manufacture silicon carbide waveguides seen in Meta’s Orion AR prototype revealed at last year’s Connect, which cost the company $10,000 each to build. Companies also need to create parts capable of fitting into a glasses form factor, with smart glasses representing an important first testing ground.

Filed Under: Apple Vision Pro News & Reviews, News

Next Apple Vision Pro Inches Closer to Launch, FCC Documents Suggest

October 1, 2025 From roadtovr

Apple may be preparing to release its long-rumored M5 hardware refresh of Vision Pro, according to new certification tests filed with the US Federal Communications Commission (FCC).

The News

As first spotted by MacRumors, Apple’s next Vision Pro seems to be right around the corner, as the FCC has published a trove of transmission tests, SAR test reports, and WLAN test reports for a new “Head Mounted Device” from Apple.

The FCC documents in question don’t include any specs as such; however, they do include a single image that seems to confirm the device is Apple’s next Vision Pro, and not, say, a pair of smart glasses.

Image courtesy FCC

This follows a leak in August, which seemingly confirmed that Vision Pro isn’t getting a massive overhaul, instead pointing to a hardware refresh that could feature Apple’s upcoming M5 chipset, according to code shared by Apple and discovered by MacRumors. 

The report also suggested that the new Vision Pro hardware refresh “isn’t expected to feature any design changes or hardware updates aside from the new chip,” although it could feature a new, more comfortable head strap.

My Take

The inclusion of M5 alone doesn’t feel like a massive overhaul, although it is a fair leap in chipset generations. Released in February 2024 for $3,500, the original Apple Vision Pro was saddled with the then two-year-old M2—still the most powerful consumer standalone headset to date, but just not on par with the rest of Apple’s ‘Pro’ lineup at the time.

Notably, despite having access to almost all iPad apps in addition to built-for-XR apps of its own, Vision Pro (M2) doesn’t run some of the most requested productivity apps natively, like Final Cut Pro or Logic Pro. There’s no guarantee the new hardware refresh will either, but it could do a few things.

Apple Vision Pro with ANNAPRO A2 Strap | Photo by Road to VR

Provided we’re getting what’s reported (no more, no less), that essentially puts Vision Pro on par with the rest of Apple’s core products. It could allow developers to build apps that perform consistently across all of the reported ‘Pro’ Mac and iPad devices coming with M5, new Vision Pro included.

As Road to VR reader Christian Schildwaechter points out in the comments of the initial report, the M5 Vision Pro refresh might be less about enabling more powerful apps for prosumers and more of a distinctly pragmatic stopgap measure by Apple.

As Schildwaechter puts it, “most users won’t benefit from an M5. Enterprise customers creating inhouse apps will be happy about the extra performance, but developers targeting consumers probably won’t bother with it.”

So, Apple could be killing two birds with one stone. Hypothetically, the company can flush its stock of Vision Pro parts and plonk in the new M5 to keep enterprise buyers engaged until the company releases its first real headset targeted squarely at consumers.

As reported by independent analyst Ming-Chi Kuo, Apple’s next big XR push could be a cheaper and lighter version expected to release in 2027, called ‘Vision Air’. Kuo maintains Vision Air will be “over 40% lighter and more than 50% cheaper” than the current Vision Pro, making it around 350g and less than $1,750.

Questions worth some healthy speculation and rapid-fire answers: When is the M5 Vision Pro coming? Possibly in the October/November timeframe, alongside its new MacBook Pro M5 model release. How much will it cost? Likely nothing short of $3,500 if Apple is, you know, still Apple.

Filed Under: Apple Vision Pro News & Reviews, News

Meta Ray-Ban Display Waveguide Provider Says It’s Poised for Wide Field-of-view Glasses

September 30, 2025 From roadtovr

SCHOTT—a global leader in advanced optics and specialty glass—working with waveguide partner Lumus, is almost certainly the manufacturer of the waveguide optics in Meta’s Ray-Ban Display glasses. While the Ray-Ban Display glasses offer only a static 20° field-of-view, the company says its waveguide technology is also capable of supporting immersive wide field-of-view glasses in the future.

The News

Schott has secured a big win as perhaps the first waveguide maker to begin producing waveguides at consumer scale. While Meta hasn’t confirmed who makes the waveguides in the Ray-Ban Display glasses, Schott announced—just one day before the launch of Ray-Ban Display—that it was the “first company capable of handling geometric reflective waveguide manufacturing in [mass] production volumes.”

In anticipation of AR glasses, Schott has spent years investing in technology, manufacturing, and partnerships in an effort to set itself up as a leading provider of optics for smart glasses and AR glasses.

The company signed a strategic partnership with Lumus (the company that actually designs the geometric reflective waveguides) back in 2020. Last year the company announced the completion of a brand new factory which it said would “significantly enhance Schott’s capacity to supply high-quality optical components to international high-tech industries, including Augmented Reality (AR).”

Image courtesy Schott

Those investments now appear to be paying off. While there are a handful of companies out there with varying waveguide technologies and manufacturing processes, as the likely provider of the waveguides in the Ray-Ban Display glasses, Schott can now claim it has “proven mass market readiness regarding scalability;” something others have yet to do at this scale, as far as I’m aware.

“This breakthrough in industrial production of geometric reflective waveguides means nothing less than adding a crucial missing puzzle piece to the AR technology landscape,” said Dr. Ruediger Sprengard, Senior Vice President Augmented Reality at Schott. “For years, the promise of lightweight and powerful smart glasses available at scale has been out of reach. Today, we are changing that. By offering geometric reflective waveguides at scale, we’re helping our partners cross the threshold into truly wearable products, providing an immersive experience.”

As for the future, the company claims its geometric reflective waveguides will be able to scale beyond the small 20° field-of-view of the Ray-Ban Display glasses to immersive wide field-of-view devices.

“Compared to competing optical technologies in AR, geometric reflective waveguides stand out in light and energy efficiency, enabling device designers to create fashionable glasses for all-day use. These attributes make geometric reflective waveguides the best option for small FoVs, and the only available option for wide FoVs,” the company claims in its announcement.

Indeed, Schott’s partner Lumus has long demonstrated wider field-of-view waveguides, like the 50° ‘Lumus Maximus’ I saw as far back as 2022.

My Take

As the likely provider of waveguides for Ray-Ban Display, Schott & Lumus have secured a big win over competitors. From the outside, it looks like Lumus’ geometric reflective waveguides won out primarily due to their light efficiency. Most other waveguide technologies rely on diffractive (rather than reflective) optics, which have certain advantages but fall short on light efficiency.

Light efficiency is crucial because the microdisplays in glasses-sized devices must be both tiny and power-efficient. As displays get larger and brighter, they get bulkier, hotter, and more power-hungry. Using a waveguide with high light efficiency thus allows the displays to be smaller, cooler, and less power-hungry, which is critical considering the tiny space available.

Light and power demands also rise with field-of-view, since spreading the same light across a wider area reduces apparent brightness.
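To put rough numbers on that, here’s a back-of-the-envelope sketch (not based on any figures from Schott or Meta) that simply assumes the same light budget gets spread over an image area growing with the square of the field-of-view:

```python
def relative_light_needed(fov_deg: float, baseline_fov_deg: float = 20.0) -> float:
    """Rough factor by which display output must grow to keep perceived
    brightness constant, assuming image area scales with the square of the FOV."""
    return (fov_deg / baseline_fov_deg) ** 2

for fov in (20, 30, 50, 70):
    print(f"{fov} deg FOV -> ~{relative_light_needed(fov):.1f}x the light of a 20 deg display")
```

Going from 20° to 50° alone implies more than six times the light for the same apparent brightness, which is a big part of why light-efficient optics matter so much here.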

Schott says its waveguide technology is ready to scale to wider fields-of-view, but that probably isn’t what’s holding back true AR glasses (like the Orion Prototype that Meta showed off in 2024).

It’s not just wide field-of-view optics that need to be in place for a device like Orion to ship. There’s still the issue of battery and processing power. Orion was only able to work as it does because a lot of the computation and battery was offloaded onto a wireless puck. If Meta wants to launch full AR glasses like Orion without a puck (as they did with Ray-Ban Display), the company still needs smaller, more efficient chips to make that possible.

Additionally, display technology needs to advance in order to actually take advantage of optics capable of projecting a wide field-of-view.

Ray-Ban Display glasses are using a fairly low-resolution 0.36MP (600 × 600) display. It appears sharp because the pixels are spread across just 20°. As the field-of-view increases, both brightness and resolution need to increase to maintain the same image quality. Without much room to increase the physical size of the display, that means packing smaller pixels into the same tiny area, while also making them brighter. As you can imagine, it’s a challenge to improve these competing characteristics at the same time.
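The arithmetic behind that trade-off is simple enough to sketch, assuming pixels are spread evenly across the field-of-view:

```python
def pixels_per_degree(px_per_axis: int, fov_deg: float) -> float:
    # Approximate angular resolution, assuming an even spread of pixels across the FOV.
    return px_per_axis / fov_deg

base_ppd = pixels_per_degree(600, 20)   # Ray-Ban Display: ~30 pixels per degree
print(f"Ray-Ban Display: ~{base_ppd:.0f} px/deg")

# Pixels needed per axis to keep that same sharpness at wider fields-of-view
for fov in (30, 50, 70):
    print(f"{fov} deg FOV -> ~{base_ppd * fov:.0f} px per axis")
```

Holding 30 pixels per degree at a 50° field-of-view already implies something like a 1,500 × 1,500 panel in roughly the same physical footprint, with each of those smaller pixels also needing to be brighter.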

Filed Under: News, XR Industry News

Pimax Delays Dream Air and Dream Air SE to December, SLAM Versions Likely in 2026

September 22, 2025 From roadtovr

Pimax Dream Air | Image courtesy Pimax

Pimax issued an update detailing its upcoming fleet of micro-OLED PC VR headsets, which also included info on a delay affecting its thin and light headsets, Dream Air and Dream Air SE.

The update, seen at the bottom of the article, details three products Pimax is preparing to ship:

  • Dream Air – Thin and light PC VR headset containing Sony Micro OLED panels (3,840 × 3,552 pixels per eye) and concave-view pancake optics, delivering 110° horizontal FOV, eye-tracking, auto-IPD adjustment, spatial audio, and DisplayLink. 
  • Dream Air SE – Lower resolution version of Dream Air containing Sony Micro OLED panels (2,560 × 2,560 pixels per eye) and all of the above.
  • Crystal Super (Micro OLED Engine) – A new swappable optical module for Pimax’s flagship Crystal Super, serving up to 116° horizontal FOV with the same panels and lenses as Dream Air.

Pimax announced Dream Air last December, which was set to serve up competition to thin and light PC VR headsets like Bigscreen Beyond and Shiftall MeganeX Superlight 8K. While launch was initially planned for May 2025, the headset was subsequently delayed to Q3 2025.

Now, Pimax says both the SteamVR tracking versions of Dream Air and Dream Air SE, the latter of which was announced in May, are scheduled to ship sometime in December.

Pimax Dream Air | Image courtesy Pimax

While the SteamVR tracking (aka ‘Lighthouse’) versions are shipping this year, Pimax is also offering SLAM versions of both headsets, which don’t require external base stations. The SLAM variants are said to start an “external beta test” in December—so no word on when those ship just yet.

As for Crystal Super’s new swappable micro-OLED optical module, a version of the headset containing the module will start shipping in October. There’s no mention of whether that also means prior Crystal Super owners will be able to purchase the module by itself in that time frame.

Check out all the specs, price and release date info Pimax announced during its big update below:

Note: Pimax breaks up its pricing structure with an upfront cost of around 60% of the final price. The remainder is paid as a software fee that gives users unlimited access to Pimax Play, which is offered in a 14-day trial. Pimax Play is required for the headset to work.
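Pimax doesn’t list the exact split for each SKU, but using the roughly 60% figure above, the math works out something like this (illustrative only, not Pimax’s official breakdown):

```python
def pimax_split(total_price: int, upfront_share: float = 0.60) -> tuple[int, int]:
    """Approximate Pimax pricing split: ~60% paid upfront for the hardware,
    the remainder as the Pimax Play software fee (share is Pimax's rough figure)."""
    upfront = round(total_price * upfront_share)
    return upfront, total_price - upfront

for name, price in [("Dream Air (Lighthouse)", 1999),
                    ("Dream Air SE (Lighthouse)", 899),
                    ("Crystal Super Micro-OLED", 2199)]:
    upfront, software_fee = pimax_split(price)
    print(f"{name}: ~${upfront} upfront + ~${software_fee} software fee = ${price}")
```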

Pimax Dream Air Specs

Image courtesy Pimax
  • Display: Sony Micro-OLED screen (3840 × 3552 pixels per eye)
  • Optics: 110-degree horizontal FOV with Pimax’s ConcaveView optics
  • Weight: Under 170 grams
  • Features:
    • DFR-ready eye-tracking
    • Hand tracking
    • SLAM tracking or Lighthouse tracking
    • 6DOF controllers
    • Integrated spatial audio
    • Dual fan for proper cooling
    • Powered by Pimax Play
    • Split DisplayPort Cable
  • Price: $1,999 for SteamVR tracking version (shipping in December), $2,299 for SLAM tracking version (beta testing in December)

Pimax Dream Air SE Specs

Image courtesy Pimax
  • Display: Sony Micro-OLED screen (2,560 × 2,560 pixels per eye)
  • Optics: 105-degree horizontal FOV with Pimax’s ConcaveView optics
  • Weight: Under 140 grams
  • Features:
    • DFR-ready eye-tracking
    • Hand tracking
    • SLAM tracking or Lighthouse tracking
    • 6DOF controllers
    • Integrated spatial audio
    • Dual fan for proper cooling
    • Powered by Pimax Play
    • Split DisplayPort Cable
  • Price: $899 for SteamVR tracking version (shipping in December), $1,199 for SLAM tracking version (beta testing in December)

Crystal Super Micro-OLED Specs

Image courtesy Pimax
  • Display: Sony Micro-OLED screen (3840 × 3552 pixels per eye)
  • Optics: 116-degree horizontal FOV with Pimax’s ConcaveView optics
  • Weight: ?
  • Features:
    • DFR-ready eye-tracking
    • Hand tracking
    • SLAM tracking (Lighthouse optional)
    • 6DOF controllers
    • Integrated spatial audio
    • Dual fan for proper cooling
    • Powered by Pimax Play
    • Split DisplayPort Cable
  • Price: $2,199 for full headset & module (shipping in October)

Filed Under: News, PC VR News & Reviews

Exclusive 3D Trailer of ‘Avatar 3’ on Quest Teases a Possible Full Release on the Headset

September 19, 2025 From roadtovr

During Meta Connect this week, the company released an exclusive 3D trailer of James Cameron’s upcoming film Avatar: Fire and Ash. There’s no confirmation yet that we’re getting the full thing, although Cameron is enthusiastic about Quest’s ability to open up new distribution models.

The short trailer is now available on Meta Horizon TV until September 21st, which the company says is “just the beginning of how fans can experience Pandora like never before on Quest, following the film’s theatrical release this December.”

The Avatar 3 clip comes amid a wider partnership with Lightstorm Vision, Cameron’s 3D film studio, which Meta tapped in late 2024 to produce spatial content across multiple genres, including live events and full-length entertainment.

Andrew Bosworth (left), James Cameron (right) | Image courtesy Meta

Talking to Meta CTO Andrew Bosworth on stage at Connect, Cameron says he sees a new distribution model on the horizon that could bring “theater-grade 3D” to VR headsets.

“I just see a future, which I think can be enabled by the new devices that [Meta has], the Quest series, and then some of new stuff that’s hopefully coming down the line,” Cameron says. “I think that we’re looking at a future that’s a whole new distribution model, where we can have theater-grade 3D basically on your head.”

To Cameron, VR headsets like Quest 3 actually outperform traditional movie theaters in a number of ways.

“It’s interesting, I’ve been fighting so hard with movie theaters to get the brightness levels up, to install laser projection, but they’re caught in an earlier paradigm. No business can survive being stuck in technology [that’s] 15 years old.”

And, in comparison to traditional theater projection, Quest 3 is “an order of magnitude brighter,” Cameron says.

“The brightness gives you the dynamic range, it gives you the color space as it was meant to be. And that’s so much more engaging. The work that [Meta] has done in the Quest series to expand the field of view, brightness and spatial resolution. To me, it’s like being in my own private movie theater.”

Cameron especially admires VR’s immersive ability to create a greater connection with audiences, which he envisions as a “stereo ubiquity future” coming to all forms of entertainment—not just big budget films, but everything from short-form content to sports and even news.

“You mostly look at flat displays: phones, laptops, wall panels, all that sort of thing. This is going to be, I think, a new age. Because we experience the world in 3D, our brains are wired for it, our visual-neural biology is wired for it, and we’ve been able to prove that there’s more emotional engagement, there’s more sense of presence.”

Provided Meta is indeed bringing the full-fat version of Avatar: Fire and Ash to Quest, we’d expect it sometime after the film’s theatrical debut on December 19th, aligning with its wider release on streaming platforms later down the line.

You can see the full conversation below, time stamped as Cameron and Bosworth take the stage:

Filed Under: Meta Quest 3 News & Reviews, News

Why Ray-Ban Meta Glasses Failed on Stage at Connect

September 19, 2025 From roadtovr

Meta CEO Mark Zuckerberg’s keynote at this year’s Connect wasn’t exactly smooth—especially if you count two big hiccups that sidetracked live demos for both the latest Ray-Ban Meta smart glasses and the new Meta Ray-Ban Display glasses.

Ray-Ban Meta (Gen 2) smart glasses essentially bring the same benefits as Oakley Meta HSTN, which launched back in July: longer battery life and better video capture.

One of the biggest features though is its access to Meta’s large language model (LLM), Meta AI, which pops up when you say “Hey Meta”, letting you ask questions about anything, from the weather to what the glasses camera can actually see.

As part of the on-stage demo of its Live AI feature, which runs continuously rather than only when prompted, food influencer Jack Mancuso attempted to create a Korean-inspired steak sauce using the AI as a guide.

And it didn’t go well, as Mancuso struggled to get the Live AI back on track after missing a key step in the sauce’s preparation. You can see the full cringe-inducing glory for yourself, timestamped below:

And the reason behind it is… well, just dumb. Jake Steinerman, Developer Advocate at Meta’s Reality Labs, explained what happened in an X post:

So here’s the story behind why yesterdays live #metaconnect demo failed – when the chef said “Hey Meta start Live AI” it activated everyone’s Meta AI in the room at once and effectively DDOS’d our servers 🤣

That’s what we get for doing it live!

— Jake Steinerman 🔜 Meta Connect (@jasteinerman) September 19, 2025

Unfortunate, yes. But also pretty foreseeable, especially considering the AI ‘wake word’ gaffe has been a thing since Google Home (now Nest) and Amazon Alexa first arrived.

Anyone with one of those friendly tabletop pucks has probably experienced what happens when a TV advert includes “Hey Google” or “Hey Alexa,” unwittingly commanding every device in earshot to tell them the weather, or even order items online.

What’s more surprising though: there were enough people using a Meta product in earshot to screw with its servers. Meta AI isn’t like Google Gemini or Apple’s Siri—it doesn’t have OS-level access to smartphones. The only devices with it enabled by default are the company’s Ray-Ban Meta and Oakley Meta glasses (and Quest, if you opt in), conjuring the image of a room full of confused, bespectacled Meta employees waiting out of shot.

As for the Meta Ray-Ban Display glasses, which the company is launching in the US for $799 on September 30th, the hiccup was much more forgivable. Zuckerberg was attempting to take a live video call from company CTO Andrew Bosworth, who, after several missed attempts, came on stage to do an ad hoc simulation of what it might have been like.

Those sorts of live product events are notoriously bad for both Wi-Fi and mobile connections, simply because of how many people are in the room, often with multiple devices per person. Still, Zuckerberg didn’t pull a Steve Jobs, where the former Apple CEO demanded everyone in attendance at iPhone 4’s June 2010 unveiling turn off their Wi-Fi after an on-stage connection flub.

You can catch the Meta Ray-Ban Display demo below (obligatory cringe warning):

Filed Under: AR Development, News, XR Industry News

New Meta Developer Tool Enables Third-parties to Bring Apps to its Smart Glasses for the First Time

September 19, 2025 From roadtovr

Today during Connect, Meta announced the Wearables Device Access Toolkit, which represents the company’s first steps toward allowing third-party experiences on its smart glasses.

If the name “Wearables Device Access Toolkit” sounds a little strange, it’s for good reason. Compared to a plain old SDK, which generally allows developers to build apps for a specific device, apps made for Meta smart glasses don’t actually run on the glasses themselves.

The “Device Access” part of the name is the key; developers will be able to access sensors (like the microphone or camera) on the smart glasses, and then pipe that info back to their own app running on an Android or iOS device. After processing the sensor data, the app can then send information back to the glasses for output.

For instance, a cooking app running on Android (like Epicurious) could be triggered by the user saying “Hey Epicurious” to the smart glasses. Then, when the user says “show me the top rated recipe I can make with these ingredients,” the Android app could access the camera on the Meta smart glasses to take a photo of what the user is looking at, then process that photo on the user’s phone before sending back its recommendation as spoken audio to the smart glasses.

In this way, developers will be able to extend apps from smartphones to smart glasses, but not run apps directly on the smart glasses.
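Meta hasn’t published the toolkit’s actual API yet, so take the following as a purely hypothetical sketch of the flow described above—every class and function name here is made up, not Meta’s. The glasses provide sensor input and audio output, while the developer’s companion app on the phone does the real work:

```python
# Hypothetical sketch only: these names are NOT Meta's real API, just an
# illustration of the phone-side flow described above.

class GlassesLink:
    """Stand-in for whatever connection the Wearables Device Access Toolkit exposes."""
    def capture_photo(self) -> bytes:
        return b"<jpeg bytes from the glasses camera>"   # sensor access on the glasses

    def speak(self, text: str) -> None:
        print(f"[glasses audio] {text}")                  # output piped back to the glasses

def detect_ingredients(photo: bytes) -> list[str]:
    # Placeholder for image recognition running on the phone (or in the cloud).
    return ["chicken", "soy sauce", "garlic"]

def find_best_recipe(ingredients: list[str]) -> str:
    return "teriyaki chicken"

def handle_voice_command(glasses: GlassesLink, transcript: str) -> None:
    """Runs inside the developer's own Android/iOS app, not on the glasses."""
    if "recipe" in transcript.lower():
        photo = glasses.capture_photo()
        recipe = find_best_recipe(detect_ingredients(photo))
        glasses.speak(f"Top-rated match: {recipe}.")

handle_voice_command(GlassesLink(), "Show me the top rated recipe I can make with these ingredients")
```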

The likely reason for this approach is that Meta’s smart glasses have strict limits on compute, thermals, and battery life. And the audio-only interface on most of the company’s smart glasses doesn’t allow for the kind of navigation and interaction that users are used to with a smartphone app.

Developers interested in building for Meta’s smart glasses can now sign up for access to the forthcoming preview of the Wearables Device Access Toolkit.

As for what can be done with the toolkit, Meta showed a few examples from partners who are experimenting with the devices.

Disney, for instance, made an app which combines knowledge about its parks with contextual awareness of the user’s situation by accessing the camera to see what they’re looking at.

Golf app 18Birdies showed an example of contextually aware information on a specific golf course.

For now, Meta says only select partners will be able to bring their app integrations with its smart glasses to the public, but expects to allow more open accessibility starting in 2026.

The examples shown so far used only voice output as the means of interacting with the user. While Meta says developers can also extend apps to the Ray-Ban Display glasses, it’s unclear at this point if apps will be able to send text, photo, or video back to the glasses, or integrate with the device’s own UI.

Filed Under: News, XR Design & Development, XR Industry News

Meta Says New ‘Horizon Worlds’ Engine Update Brings Faster Loading and Up to 100 Concurrent Users

September 18, 2025 From roadtovr

Today at Connect, Meta said it’s rolling out an updated version of the engine that powers Horizon Worlds. The new tech will purportedly speed up loading of Horizon Worlds spaces and allow up to 100 users in a single space.

The new tech, which Meta is calling the ‘Horizon Engine’, is said to be replacing the original foundation of Horizon Worlds, which was based on Unity. The engine has been rebuilt with the goals of Horizon Worlds in mind—namely, enabling players to hop between interconnected social spaces.

Meta says the new system can make loading of Worlds spaces four times faster, making jumping between different spaces more seamless. The improved performance also means that Worlds experiences can now host up to 100 players simultaneously, which is five times as many as the previous limit.

Meta says it has also used the new engine to rebuild ‘Horizon Home’, the default space you see when you put on Quest. This purportedly brings improved visual quality and some functionality upgrades, like being able to pin apps to the walls for quick access.

The changes to Horizon Home appear to move Meta one step closer to merging Horizon Home and Horizon Worlds together. Now, running on the same engine, the space will also allow users to pin portals to various Worlds spaces for quick access.

At Connect, Meta also announced that it is working on an ‘agentic editor’ for Horizon Worlds called Meta Horizon Studio. While the company has already released AI features that allow creators to generate various assets for building Worlds experiences, the new agentic editor melds multiple tools together under a chat-based interface.

Image courtesy Meta

The new tool allows creators to build new Worlds experiences by asking for additions and changes in natural language, like ‘change the style to sci-fi’, or ‘add a new character that’s a talking bear who is lost and wants the player to help them get home’.

Meta Horizon Studio will be rolling out in beta in the near future, the company says.

Filed Under: Meta Quest 3 News & Reviews, News

Hands-on: Meta Ray-Ban Display Glasses & Neural Band Offer a Glimpse of Future AR Glasses

September 18, 2025 From roadtovr

The newly announced Meta Ray-Ban Display glasses, and the ‘Neural Band’ input device that comes with them, are still far from proper augmented reality. But Meta has made several clever design choices that will pay dividends once their true AR glasses are ready for the masses.

The Ray-Ban Display glasses are a new category for Meta. Previous products communicated to the user purely through audio. Now, a small, static monocular display adds quite a bit of functionality to the glasses. Check out the full announcement of the Meta Ray-Ban Display glasses here for all the details, and read on for my hands-on impressions of the device.

A Small Display is a Big Improvement

Meta Ray-Ban Display Glasses | Image courtesy Meta

A 20° monocular display isn’t remotely sufficient for proper AR (where virtual content floats in the world around you), but it adds a lot of new functionality to Meta’s smart glasses.

For instance, imagine you want to ask Meta AI for a recipe for teriyaki chicken. On the non-display models, you could definitely ask the question and get a response. But after the AI reads it out to you, how do you continue to reference the recipe? Well, you could either keep asking the glasses over and over, or you could pull your phone out of your pocket and use the Meta AI companion app (at which point, why not just pull the recipe up on your phone in the first place?).

Now with the Meta Ray-Ban Display glasses, you can actually see the recipe instructions as text in a small heads-up display, and glance at them whenever you need.

In the same way, almost everything you could previously do with the non-display Meta Ray-Ban glasses is enhanced by having a display.

Now you can see a whole thread of messages instead of just hearing one read through your ear. And when you reply you can actually read the input as it appears in real-time to make sure it’s correct instead of needing to simply hear it played back to you.

When capturing photos and videos you now see a real-time viewfinder to ensure you’re framing the scene exactly as you want it. Want to check your texts without needing to talk out loud to your glasses? Easy peasy.

And the real-time translation feature becomes more useful too. In current Meta glasses you have to listen to two overlapping audio streams at once. The first is the voice of the speaker and the second is the voice in your ear translating into your language, which can make it harder to focus on the translation. With the Ray-Ban Display glasses, now the translation can appear as a stream of text, which is much easier to process while listening to the person speaking in the background.

It should be noted that Meta has designed the screen in the Ray-Ban Display glasses to be off most of the time. The screen is set off and to the right of your central vision, making it more of a glanceable display than something that’s right in the middle of your field-of-view. At any time you can turn the display on or off with a double-tap of your thumb and middle finger.

Technically, the display is a 0.36MP (600 × 600) full-color LCoS display with a reflective waveguide. Even though the resolution is “low,” it’s plenty sharp across the small 20° field-of-view. Because it’s monocular (only one eye can see it), it does have a ghostly look to it. This doesn’t hamper the functionality of the glasses, but aesthetically it’s not ideal.

Meta hasn’t said if they designed the waveguide in-house or are working with a partner. I suspect the latter, and if I had to guess, Lumus would be the likely supplier. Meta says the display can output up to 5,000 nits brightness, which is enough to make the display readily usable even in full daylight (the included Transitions also help).

From the outside, the waveguide is hardly visible in the lens. The most prominent feature is some small diagonal markings toward the temple-side of the headset.

Photo by Road to VR

Meanwhile, the final output gratings are very transparent. Even when the display is turned on, it’s nearly impossible to see a glint from the display in a normally lit room. Meta said the outward light-leakage is around 2%, which I am very impressed by.

 The waveguide is extremely subtle within the lens | Photo by Road to VR

Aside from the glasses being a little chonkier than normal glasses, the social acceptability here is very high—even more so because you don’t need to constantly talk to the glasses to use them, or even hold your hand up to tap the temple. Instead, the so-called Neural Band (based on EMG sensing), allows you to make subtle inputs while your hand is down at your side.

The Neural Band is an Essential Piece to the Input Puzzle

Photo by Road to VR

The included Neural Band is just as important to these new glasses as the display itself—and it’s clear that this will be equally important to future AR glasses.

To date, controlling XR devices has been done with controllers, hand-tracking, or voice input. All of these have their pros and cons, but none are particularly fitting for glasses that you’d wear around in public; controllers are too cumbersome, hand-tracking requires line of sight which means you need to hold your hands awkwardly out in front of you, and voice is problematic both for privacy and certain social settings where talking isn’t appropriate.

The Neural Band, on the other hand, feels like the perfect input device for all-day wearable glasses. Because it’s detecting muscle activity (instead of visually looking for your fingers) no line-of-sight is needed. You can have your arm completely to your side (or even behind your back) and you’ll still be able to control the content on the display.

The Neural Band offers several ways to navigate the UI of the Ray-Ban Display glasses. You can pinch your thumb and index finger together to ‘select’; pinch your thumb and middle finger to ‘go back’; and swipe your thumb across the side of your finger to make up, down, left, and right selections. There are a few other inputs too, like double-tapping fingers or pinching and rotating your hand.
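To make that mapping concrete, here’s the same set of gestures restated as a tiny dispatch table; the gesture and action names are invented for illustration and aren’t Meta’s:

```python
from enum import Enum, auto

class Gesture(Enum):
    PINCH_INDEX = auto()    # thumb + index finger
    PINCH_MIDDLE = auto()   # thumb + middle finger
    SWIPE_UP = auto()       # thumb swiped across the side of the finger
    SWIPE_DOWN = auto()
    SWIPE_LEFT = auto()
    SWIPE_RIGHT = auto()

# Mapping described above: pinch-index selects, pinch-middle goes back,
# thumb swipes move the selection around the UI.
ACTIONS = {
    Gesture.PINCH_INDEX:  "select",
    Gesture.PINCH_MIDDLE: "back",
    Gesture.SWIPE_UP:     "move_up",
    Gesture.SWIPE_DOWN:   "move_down",
    Gesture.SWIPE_LEFT:   "move_left",
    Gesture.SWIPE_RIGHT:  "move_right",
}

def handle(gesture: Gesture) -> str:
    return ACTIONS.get(gesture, "ignore")

print(handle(Gesture.PINCH_INDEX))  # -> "select"
```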

As of now, you navigate the Ray-Ban Display glasses mostly by swiping around the interface and selecting. In the future, having eye-tracking on-board will make navigation even more seamless, by allowing you to simply look and pinch to select what you want. The look-and-pinch method, combined with eye-tracking, already works great on Vision Pro. But it still misses your pinches sometimes if your hand isn’t in the right spot, because the cameras can’t always see your hands at quite the right angle. If I could use the Neural Band for pinch detection on Vision Pro, I absolutely would—that’s how well it seems to work already.

While it’s easy enough to swipe and select your way around the Ray-Ban Display interface, the Neural Band has the same downside that all the aforementioned input methods have: text input. But maybe not for long.

In my hands-on with the Ray-Ban Display, the device was still limited to dictation input. So replying to a message or searching for a point of interest still means talking out loud to the headset.

However, Meta showed me a demo (that I didn’t get to try myself) of being able to ‘write’ using your finger against a surface like a table or your leg. It’s not going to be nearly as fast as a keyboard (or dictation, for that matter), but private text input is an important feature. After all, if you’re out in public, you probably don’t want to be speaking all of your message replies out loud.

The ‘writing’ input method is said to be a forthcoming feature, though I didn’t catch whether they expected it to be available at launch or sometime after.

On the whole, the Neural Band seems like a real win for Meta. Not just for making the Ray-Ban Display more useful, but because it seems like the ideal input method for future glasses with full input capabilities.

Photo by Road to VR

And it’s easy to see a future where the Neural Band becomes even more useful by evolving to include smartwatch and fitness tracking functions. I already wear a smartwatch most of the day anyway… making it my input device for a pair of smart glasses (or AR glasses in the future) is a smart approach.

Little Details Add Up

One thing I was not expecting to be impressed by was the charging case of the Ray-Ban Display glasses. Compared to the bulky charging cases of all of Meta’s other smart glasses, this clever origami-like case folds down flat to take up less space when you aren’t using it. It goes from being big enough to accommodate a charging battery and the glasses themselves, down to something that can easily go in a back pocket or slide into a small pocket in a bag.

This might not seem directly relevant to augmented reality, but it’s actually more important than you might think. It’s not like Meta invented a folding glasses case, but it shows that the company is really thinking about how this kind of device will fit into people’s lives. An analog to this for their MR headsets would be including a charging dock with every headset—something they’ve yet to do.

Now with a display on-board, Meta is also repurposing the real-time translation feature as a sort of ‘closed captioning’. Instead of translating to another language, you can turn on the feature and see a real-time text stream of the person in front of you, even if they’re already speaking your native language. That’s an awesome capability for those that are hard-of-hearing.

Live Captions in Meta Ray-Ban Display Glasses | Image courtesy Meta

And even for those that aren’t, you might still find it useful… Meta says the beam-forming microphones in the Ray-Ban Display can focus on the person you’re looking at while ignoring other nearby voices. They showed me a demo of this in action in a room with one person speaking to me and three others having a conversation nearby to my left. It worked relatively well, but it remains to be seen if it will work in louder environments like a noisy restaurant or a club with thumping music.
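Meta hasn’t detailed how its beam-forming works, but the general technique is well understood. As a minimal illustration (not Meta’s implementation), a delay-and-sum beamformer time-aligns each microphone’s signal for a chosen direction, so sound from that direction adds up while off-axis voices partially cancel:

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, mic_positions_m: np.ndarray,
                  steer_angle_deg: float, sample_rate: int, c: float = 343.0) -> np.ndarray:
    """Minimal delay-and-sum beamformer for a linear mic array.

    signals:         (num_mics, num_samples) time-domain capture
    mic_positions_m: mic positions along the array axis, in meters
    steer_angle_deg: direction to 'listen' toward (0 = straight ahead)
    """
    delays = mic_positions_m * np.sin(np.radians(steer_angle_deg)) / c  # seconds per mic
    shifts = np.round(delays * sample_rate).astype(int)                 # samples per mic
    out = np.zeros(signals.shape[1])
    for sig, shift in zip(signals, shifts):
        out += np.roll(sig, -shift)   # align each mic to the steering direction, then sum
    return out / len(signals)

# Toy example: 4 mics spaced 2 cm apart, steered toward a talker 30 degrees to the side
sample_rate = 16_000
mics = np.arange(4) * 0.02
capture = np.random.default_rng(0).normal(size=(4, sample_rate))  # stand-in for real audio
focused = delay_and_sum(capture, mics, steer_angle_deg=30, sample_rate=sample_rate)
```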

Meta wants to eventually pack full AR capabilities into glasses of a similar size. And even if they aren’t there yet, getting something out the door like the Ray-Ban Display gives them the opportunity to explore, iterate—and hopefully perfect—many of the key ‘lifestyle’ factors that need to be in place for AR glasses to really take off.


Disclosure: Meta covered lodging for one Road to VR correspondent to attend an event where information for this article was gathered.

Filed Under: Feature, News, Smart Glasses, XR Industry News
