
VRSUN

Hot Virtual Reality News


XR Industry News

Samsung to Launch Project Moohan XR Headset at Galaxy Event on October 21st

October 15, 2025 From roadtovr

Samsung announced it’s holding a Galaxy Event on October 21st, which will feature Project Moohan, the company’s long-awaited Apple Vision Pro competitor.

The News

The livestream event is slated to take place on October 21st at 10PM ET, and is said to focus on “the future of AI” and Project Moohan.

“Come meet the first official device on Android XR—Project Moohan,” the video’s description reads.

There’s no official word yet on the headset’s price, or even its final name. A previous report from South Korea’s Newsworks suggests it could cost somewhere between ₩2.5 million and ₩4 million won, or between $1,800 and $2,900 USD.

The company’s event site does, however, allow users to register for a $100 credit, valid when purchasing qualifying Galaxy products.

We’re hoping to learn more about the headset’s specs and promised VR motion controllers, which Samsung has yet to reveal.

Since our previous hands-on from last year, we’ve learned Project Moohan includes a Qualcomm Snapdragon XR2+ Gen 2 chipset, dual micro-OLED panels, pancake lenses, automatic interpupillary distance (IPD) adjustment, support for eye and hand-tracking, an optional magnetically attached light shield, and a removable external battery pack.

My Take

Personally, I didn’t get the sort of “wow” factor I was hoping for from the teaser, as it highlights some fairly basic stuff seen in XR over the past decade. Yes, it really has been that long.

While I don’t expect Moohan to stop at a Google Earth VR-style map and immersive video—neat as those things are—it’s interesting to me the company thought those two things were worthy additions to a launch day teaser for its first XR headset since the release of Samsung Odyssey+ in 2018.

Samsung Odyssey+ | Image courtesy Samsung

As the first official headset supporting Google’s Android XR operating system though, I expect the event will also focus on Moohan’s ability to not only run the standard library of Android apps and native XR content, but also handle XR productivity—provided Samsung really wants to go toe-to-toe with Vision Pro.

By all accounts, Moohan is a capable XR headset, but I wonder how much gas Samsung will give it now that Apple is reportedly shifting priorities to focus on Meta-style smart glasses instead of developing a cheaper and lighter Vision Pro. While Apple is apparently still moving ahead with Vision Pro’s M5 hardware refresh, which is rumored to release soon, that refresh will mostly appeal to enterprise users, leaving Samsung to navigate a potentially awkward middle ground between Meta and Apple.

Moohan’s market performance may also dictate how other manufacturers adopt Android XR. And there’s worrying precedent: Lenovo Mirage Solo, released in 2018 as the first (and only) standalone headset for Google’s Android-based Daydream platform, was left stranded when Google pulled the plug on Daydream due to poor engagement. Here’s hoping history doesn’t repeat itself.

Filed Under: News, VR Development, XR Industry News

Lynx Teases Next Mixed Reality Headset for Enterprise

October 13, 2025 From roadtovr

Lynx teased its next mixed reality headset, which targets enterprise and professional users for tasks like training and remote assistance.

The News

At MicroLED Connect last month, Lynx CEO Stan Larroque announced he aimed to reveal the company’s next mixed reality standalone sometime in mid-November.

However, Somnium CEO Artur Sychov, a major investor in the company, beat Lynx to the punch by posting a cropped image of the France-based company’s next device.

I will just say this – Lynx next headset news is going to be wild… 💣

Sorry @stanlarroque, I can’t hold myself not to tease at least something… 😬😅

October & November 2025 will be 🔥 pic.twitter.com/XidrdTqqlp

— Artur Sychov ᯅ (@ASychov) October 10, 2025

In response, Larroque posted the full image, seen above. Here’s a version with the white balance turned up for better visibility, courtesy MRTV’s Sebastian Ang:

Modified image courtesy Sebastian Ang

There’s still a lot to learn, including specs and the device’s official name. From the image, we can tell at least two things: the headset has a minimum of four camera sensors, now positioned on the corners of the device à la Quest 2, and an ostensibly more comfortable head strap that cups the back of the user’s head.

What’s more, Lynx announced late last year that it intends to run Google’s forthcoming Android XR operating system on its next headset; the same platform will also power Samsung’s Project Moohan and forthcoming XR glasses from XREAL. Lynx hasn’t released any update on its progress, so we’re still waiting to hear more.

Lynx R-1 | Image courtesy Lynx

Notably, Lynx R-1, which was initially positioned to target both consumers and professional users through its 2021 Kickstarter campaign ($800,000 in crowdfunding), concluded shipping earlier this year.

Judging from Larroque’s talk at MicroLED Connect last month, the company is now focusing squarely on the enterprise sector with its next hardware release, including tasks like training and remote assistance.

My Take

Lynx R-1’s unique “4-fold catadioptric freeform prism” optics allow for a compact focal length, putting the displays flush with the lenses and providing a 90-degree field of view (FOV). While pancake lenses are generally thinner and lighter, R-1’s optics have comparably better light throughput, which is important for mixed reality tasks.

Image courtesy Lynx

For a startup that’s weathered an admittedly “excruciating” fundraising environment though, making the right hardware choices in its follow-up will be key.

My hunch is the prospective ‘Lynx R-2’ headset will probably keep the same optical stack to save on development and manufacturing costs, and mainly push upgrades to the processor and display, which are likely more important to the sort of enterprise customers Lynx is targeting anyway.

As it is, Lynx R-1 is powered by the Qualcomm Snapdragon XR2 chipset, initially released in 2019 and the same chip used in Quest 2, so an upgrade there is well overdue. Its 1,600 × 1,600 per-eye LCDs also feel similarly dated.

While an FOV larger than 90 degrees is great, I’d argue that for enterprise hardware that isn’t targeting simulators, clarity and pixel density are probably more important. More info on Lynx’s next-gen headset is due sometime in November, so I’d expect to learn more then.

Filed Under: News, VR Development, XR Industry News

Meta Ray-Ban Display Repairability is Predictably Bad, But Less So Than You Might Think

October 9, 2025 From roadtovr

iFixit got their hands on a pair of Meta Ray-Ban Display smart glasses, so we finally get to see what’s inside. Is it repairable? Not really. But if you can somehow find replacement parts, you could at least potentially swap out the battery.

The News

Meta launched the $800 smart glasses in the US late last month, marking the company’s first pair with a heads-up display.

Serving up a monocular display, Meta Ray-Ban Display allows for basic app interaction beyond the standard stuff seen (or rather ‘heard’) in the company’s audio-only Ray-Ban Meta and Oakley Meta glasses. It can do things like let you view and respond to messages, get turn-by-turn walking directions, and even use the display as a viewfinder for photos and video.

And iFixit shows off in their latest video that cracking into the glasses and attempting repairs is pretty fiddly, but not entirely impossible.

Meta Ray-Ban Display’s internal battery | Image courtesy iFixit

The first thing you’d probably eventually want to do is replace the battery, which requires splitting the right arm down a glued seam—a common theme with the entire device. In getting to the 960 mWh internal battery, which is slightly larger than the one in the Oakley Meta HSTN, you’ll be sacrificing the device’s IPX4 splash resistance rating.

The work is fiddly throughout, but iFixit manages to go all the way down to the dual speakers, motherboard, Snapdragon AR1 chipset, and liquid crystal on silicon (LCoS) light engine, the latter of which was captured with a CT scanner to show off just how small Meta has managed to get its most delicate part.

Granted, this is a teardown and not a repair guide as such. All of the components are custom, and replacement parts aren’t available yet. You’d also need a few specialized tools and an appetite for the risk of destroying a pretty hard-to-come-by device.

For more, make sure to check out iFixit’s full article, which includes images and detailed info on each component. You can also see the teardown in action in the full nine-minute video below.

My Take

Meta isn’t really thinking deeply about repairability when it comes to smart glasses right now, which isn’t exactly shocking. Like earbuds, smart glasses are all about miniaturization to hit an all-day wearable form factor, making their plastic-and-glue construction a pretty clear necessity in the near term.

Another big factor: the company is probably banking on prosumers willing to shell out $800 this year being happy to do the same when Gen 2 eventually arrives. That could be in two years, but I’m betting less if the device performs well enough in the market. After all, Meta released Quest 2 in 2020, just a year after the original Quest, so I don’t see why it wouldn’t do the same here.

That said, I don’t think we’ll see any real degree of repairability in smart glasses until we get to the sort of sales volumes currently seen in smartphones. And that’s just for a baseline of readily available replacement parts, third-party or otherwise.

So while I definitely want a pair of smart glasses (and eventually AR glasses) that look indistinguishable from standard frames, that also kind of means I have to be okay with eventually throwing away a perfectly cromulent pair of specs just because I don’t have the courage to open it up, or know anyone who does.

Filed Under: AR Development, News, XR Industry News

Meta Ray-Ban Display Waveguide Provider Says It’s Poised for Wide Field-of-view Glasses

September 30, 2025 From roadtovr

SCHOTT—a global leader in advanced optics and specialty glass—working with waveguide partner Lumus, is almost certainly the manufacturer of the waveguide optics in Meta’s Ray-Ban Display glasses. While the Ray-Ban Display glasses offer only a static 20° field-of-view, the company says its waveguide technology is also capable of supporting immersive wide field-of-view glasses in the future.

The News

Schott has secured a big win as perhaps the first waveguide maker to begin producing waveguides at consumer scale. While Meta hasn’t confirmed who makes the waveguides in the Ray-Ban Display glasses, Schott announced—just one day before the launch of Ray-Ban Display—that it was the “first company capable of handling geometric reflective waveguide manufacturing in [mass] production volumes.”

In anticipation of AR glasses, Schott has spent years investing in technology, manufacturing, and partnerships in an effort to set itself up as a leading provider of optics for smart glasses and AR glasses.

The company signed a strategic partnership with Lumus (the company that actually designs the geometric reflective waveguides) back in 2020. Last year the company announced the completion of a brand new factory which it said would “significantly enhance Schott’s capacity to supply high-quality optical components to international high-tech industries, including Augmented Reality (AR).”

Image courtesy Schott

Those investments now appear to be paying off. While there are a handful of companies out there with varying waveguide technologies and manufacturing processes, as the likely provider of the waveguides in the Ray-Ban Display glasses, Schott can now claim it has “proven mass market readiness regarding scalability;” something others have yet to do at this scale, as far as I’m aware.

“This breakthrough in industrial production of geometric reflective waveguides means nothing less than adding a crucial missing puzzle piece to the AR technology landscape,” said Dr. Ruediger Sprengard, Senior Vice President Augmented Reality at Schott. “For years, the promise of lightweight and powerful smart glasses available at scale has been out of reach. Today, we are changing that. By offering geometric reflective waveguides at scale, we’re helping our partners cross the threshold into truly wearable products, providing an immersive experience.”

As for the future, the company claims its geometric reflective waveguides will be able to scale beyond the small 20° field-of-view of the Ray-Ban Display glasses to immersive wide field-of-view devices.

“Compared to competing optical technologies in AR, geometric reflective waveguides stand out in light and energy efficiency, enabling device designers to create fashionable glasses for all-day use. These attributes make geometric reflective waveguides the best option for small FoVs, and the only available option for wide FoVs,” the company claims in its announcement.

Indeed, Schott’s partner Lumus has long demonstrated wider field-of-view waveguides, like the 50° ‘Lumus Maximus’ I saw as far back as 2022.

My Take

As the likely provider of waveguides for Ray-Ban Display, Schott & Lumus have secured a big win over competitors. From the outside, it looks like Lumus’ geometric reflective waveguides won out primarily due to their light efficiency. Most other waveguide technologies rely on diffractive (rather than reflective) optics, which have certain advantages but fall short on light efficiency.

Light efficiency is crucial because the microdisplays in glasses-sized devices must be both tiny and power-efficient. As displays get larger and brighter, they get bulkier, hotter, and more power-hungry. Using a waveguide with high light efficiency thus allows the displays to be smaller, cooler, and less power-hungry, which is critical considering the tiny space available.

Light and power demands also rise with field-of-view, since spreading the same light across a wider area reduces apparent brightness.

Schott says its waveguide technology is ready to scale to wider fields-of-view, but that probably isn’t what’s holding back true AR glasses (like the Orion Prototype that Meta showed off in 2024).

It’s not just wide field-of-view optics that need to be in place for a device like Orion to ship. There’s still the issue of battery and processing power. Orion was only able to work as it does because a lot of the computation and battery was offloaded onto a wireless puck. If Meta wants to launch full AR glasses like Orion without a puck (as they did with Ray-Ban Display), the company still needs smaller, more efficient chips to make that possible.

Additionally, display technology also needs to advance in order to actually take advantage of optics capable of projecting a wide field-of-view.

Ray-Ban Display glasses are using a fairly low resolution 0.36MP (600 × 600) display. It appears sharp because the pixels are spread across just 20°. As the field-of-view increases, both brightness and resolution need to increase to maintain the same image quality. Without much room to increase the physical size of the display, that means packing smaller pixels into the same tiny area, while also making them brighter. As you can imagine, it’s a challenge to improve these inversely-related characteristics at the same time.
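To put rough numbers on that (my own back-of-the-envelope sketch, not anything Meta has published), here’s what holding today’s pixel density at a hypothetical 50° field-of-view—the figure of the Lumus Maximus mentioned above—would demand:

    # Back-of-the-envelope scaling (my numbers, not Meta's): what holding the
    # Ray-Ban Display's launch pixel density at a hypothetical 50-degree
    # field-of-view would demand of the display.
    fov_now_deg, px_now = 20, 600       # current FOV and pixels per axis
    fov_target_deg = 50                 # hypothetical wide-FOV target

    ppd = px_now / fov_now_deg          # 30 pixels per degree today
    px_target = fov_target_deg * ppd    # 1,500 px per axis to match that
    mp_now = px_now ** 2 / 1e6          # 0.36MP
    mp_target = px_target ** 2 / 1e6    # 2.25MP, ~6.25x the pixels

    # The same light spread over (50/20)^2 = 6.25x the angular area also
    # looks proportionally dimmer, so the panel must get brighter, too.
    brightness_factor = (fov_target_deg / fov_now_deg) ** 2

    print(ppd, px_target, mp_now, mp_target, brightness_factor)
    # 30.0 1500.0 0.36 2.25 6.25

In other words, a 50° version of this display would need roughly six times the pixels and roughly six times the light from the same tiny panel.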

Filed Under: News, XR Industry News

Why Ray-Ban Meta Glasses Failed on Stage at Connect

September 19, 2025 From roadtovr

Meta CEO Mark Zuckerberg’s keynote at this year’s Connect wasn’t exactly smooth—especially if you count two big hiccups that sidetracked live demos for both the latest Ray-Ban Meta smart glasses and the new Meta Ray-Ban Display glasses.

Ray-Ban Meta (Gen 2) smart glasses essentially bring the same benefits as Oakley Meta HSTN, which launched back in July: longer battery life and better video capture.

One of the biggest features though is access to Meta’s large language model (LLM), Meta AI, which pops up when you say “Hey Meta”, letting you ask questions about anything, from the weather to what the glasses’ camera can actually see.

As part of the on-stage demo of its Live AI feature, which runs continuously instead of sporadically, food influencer Jack Mancuso attempted to create a Korean-inspired steak sauce using the AI as a guide.

And it didn’t go well, as Mancuso struggled to get the Live AI back on track after missing a key step in the sauce’s preparation. You can see the full cringe-inducing glory for yourself, timestamped below:

And the reason behind it is… well, just dumb. Jake Steinerman, Developer Advocate at Meta’s Reality Labs, explained what happened in an X post:

So here’s the story behind why yesterdays live #metaconnect demo failed – when the chef said “Hey Meta start Live AI” it activated everyone’s Meta AI in the room at once and effectively DDOS’d our servers 🤣

That’s what we get for doing it live!

— Jake Steinerman 🔜 Meta Connect (@jasteinerman) September 19, 2025

Unfortunate, yes. But also pretty foreseeable, especially considering AI ‘wake word’ gaffes have been a thing since the early days of Google Nest (ex-Home) and Amazon Alexa.

Anyone with one of those friendly tabletop pucks has probably experienced what happens when a TV advert includes “Hey Google” or “Hey Alexa,” unwittingly commanding every device in earshot to tell them the weather, or even order items online.

What’s more surprising though: there were enough people using a Meta product within earshot to swamp its servers. Meta AI isn’t like Google Gemini or Apple’s Siri—it doesn’t have OS-level access to smartphones. The only devices with it enabled by default are the company’s Ray-Ban Meta and Oakley Meta glasses (and Quest, if you opt in), conjuring the image of a room full of confused, bespectacled Meta employees waiting out of shot.

As for the Meta Ray-Ban Display glasses, which the company is launching in the US for $799 on September 30th, the hiccup was much more forgivable. Zuckerberg was attempting to take a live video call from company CTO Andrew Bosworth, who, after several missed attempts, came on stage to do an ad hoc simulation of what it might have been like.

Those sorts of live product events are notoriously bad for both Wi-Fi and mobile connections, simply because of how many people are in the room, often with multiple devices per person. Still, Zuckerberg didn’t pull a Steve Jobs; at iPhone 4’s June 2010 unveiling, the late Apple CEO demanded everyone in attendance turn off their Wi-Fi after an on-stage connection flub.

You can catch the Meta Ray-Ban Display demo below (obligatory cringe warning):

Filed Under: AR Development, News, XR Industry News

New Meta Developer Tool Enables Third-parties to Bring Apps to its Smart Glasses for the First Time

September 19, 2025 From roadtovr

Today during Connect, Meta announced the Wearables Device Access Toolkit, which represents the company’s first steps toward allowing third-party experiences on its smart glasses.

If the name “Wearables Device Access Toolkit” sounds a little strange, it’s for good reason. Unlike a plain old SDK, which generally allows developers to build apps that run on a specific device, apps made for Meta smart glasses don’t actually run on the glasses themselves.

The “Device Access” part of the name is the key; developers will be able to access sensors (like the microphone or camera) on the smart glasses, and then pipe that info back to their own app running on an Android or iOS device. After processing the sensor data, the app can then send information back to the glasses for output.

For instance, a cooking app running on Android (like Epicurious) could be triggered by the user saying “Hey Epicurious” to the smart glasses. Then, when the user says “show me the top rated recipe I can make with these ingredients,” the Android app could access the camera on the Meta smart glasses to take a photo of what the user is looking at, then process that photo on the user’s phone before sending back its recommendation as spoken audio to the smart glasses.

In this way, developers will be able to extend apps from smartphones to smart glasses, but not run apps directly on the smart glasses.
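Here’s a minimal sketch of that round trip. The toolkit hasn’t shipped yet, so every name below (WearableSession, capture_photo, speak, and the recipe helpers) is hypothetical; it only illustrates the sensors-on-glasses, compute-on-phone architecture Meta describes:

    # Hypothetical sketch only: the Wearables Device Access Toolkit is not
    # yet public, so these class and method names are illustrative, not
    # Meta's API. The shape of the flow is what matters.

    class WearableSession:
        """Stand-in for a phone-side connection to the smart glasses."""
        def capture_photo(self) -> bytes:
            return b""                        # would pull a frame from the glasses camera
        def speak(self, text: str) -> None:
            print(f"[glasses audio] {text}")  # would play TTS through the glasses speakers

    def detect_ingredients(photo: bytes) -> list[str]:
        return ["example ingredient"]         # stand-in for the app's own vision model

    def top_rated_recipe(ingredients: list[str]) -> str:
        return "example recipe"               # stand-in for the app's own ranking logic

    def on_wake_phrase(session: WearableSession) -> None:
        photo = session.capture_photo()                        # 1. sensor access on the glasses
        recipe = top_rated_recipe(detect_ingredients(photo))   # 2. processing on the phone
        session.speak(f"Top-rated match: {recipe}")            # 3. output sent back to the glasses

    on_wake_phrase(WearableSession())

The glasses contribute only input (camera, mic) and output (audio, and possibly the display); every cycle of actual app logic stays on the phone.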

The likely reason for this approach is that Meta’s smart glasses have strict limits on compute, thermals, and battery life. And the audio-only interface on most of the company’s smart glasses doesn’t allow for the kind of navigation and interaction that users are used to with a smartphone app.

Developers interested in building for Meta’s smart glasses can now sign up for access to the forthcoming preview of the Wearables Device Access Toolkit.

As for what can be done with the toolkit, Meta showed a few examples from partners who are experimenting with the devices.

Disney, for instance, made an app which combines knowledge about its parks with contextual awareness of the user’s situation by accessing the camera to see what they’re looking at.

Golf app 18Birdies showed an example of contextually aware information on a specific golf course.

For now, Meta says only select partners will be able to bring their app integrations with its smart glasses to the public, but it expects to open access more broadly starting in 2026.

The examples shown so far used only voice output as the means of interacting with the user. While Meta says developers can also extend apps to the Ray-Ban Display glasses, it’s unclear at this point if apps will be able to send text, photo, or video back to the glasses, or integrate with the device’s own UI.

Filed Under: News, XR Design & Development, XR Industry News

Hands-on: Meta Ray-Ban Display Glasses & Neural Band Offer a Glimpse of Future AR Glasses

September 18, 2025 From roadtovr

The newly announced Meta Ray-Ban Display glasses, and the ‘Neural Band’ input device that comes with them, are still far from proper augmented reality. But Meta has made several clever design choices that will pay dividends once their true AR glasses are ready for the masses.

The Ray-Ban Display glasses are a new category for Meta. Previous products communicated to the user purely through audio. Now, a small, static monocular display adds quite a bit of functionality to the glasses. Check out the full announcement of the Meta Ray-Ban Display glasses here for all the details, and read on for my hands-on impressions of the device.

A Small Display is a Big Improvement

Meta Ray-Ban Display Glasses | Image courtesy Meta

A 20° monocular display isn’t remotely sufficient for proper AR (where virtual content floats in the world around you), but it adds a lot of new functionality to Meta’s smart glasses.

For instance, imagine you want to ask Meta AI for a recipe for teriyaki chicken. On the non-display models, you could definitely ask the question and get a response. But after the AI reads it out to you, how do you continue to reference the recipe? Well, you could either keep asking the glasses over and over, or you could pull your phone out of your pocket and use the Meta AI companion app (at which point, why not just pull the recipe up on your phone in the first place?).

Now with the Meta Ray-Ban Display glasses, you can actually see the recipe instructions as text in a small heads-up display, and glance at them whenever you need.

In the same way, almost everything you could previously do with the non-display Meta Ray-Ban glasses is enhanced by having a display.

Now you can see a whole thread of messages instead of just hearing one read through your ear. And when you reply you can actually read the input as it appears in real-time to make sure it’s correct instead of needing to simply hear it played back to you.

When capturing photos and videos you now see a real-time viewfinder to ensure you’re framing the scene exactly as you want it. Want to check your texts without needing to talk out loud to your glasses? Easy peasy.

And the real-time translation feature becomes more useful too. In current Meta glasses you have to listen to two overlapping audio streams at once. The first is the voice of the speaker and the second is the voice in your ear translating into your language, which can make it harder to focus on the translation. With the Ray-Ban Display glasses, now the translation can appear as a stream of text, which is much easier to process while listening to the person speaking in the background.

It should be noted that Meta has designed the screen in the Ray-Ban Display glasses to be off most of the time. The screen is set off and to the right of your central vision, making it more of a glanceable display than something that’s right in the middle of your field-of-view. At any time you can turn the display on or off with a double-tap of your thumb and middle finger.

Technically, the display is a 0.36MP (600 × 600) full-color LCoS display with a reflective waveguide. Even though the resolution is “low,” it’s plenty sharp across the small 20° field-of-view. Because it’s monocular, it does have a ghostly look (only one eye can see it). This doesn’t hamper the functionality of the glasses, but aesthetically it’s not ideal.

Meta hasn’t said if they designed the waveguide in-house or are working with a partner. I suspect the latter, and if I had to guess, Lumus would be the likely supplier. Meta says the display can output up to 5,000 nits brightness, which is enough to make it readily usable even in full daylight (the included Transitions lenses also help).

From the outside, the waveguide is hardly visible in the lens. The most prominent feature is some small diagonal markings toward the temple side of the lens.

Photo by Road to VR

Meanwhile, the final output gratings are very transparent. Even when the display is turned on, it’s nearly impossible to see a glint from the display in a normally lit room. Meta said the outward light-leakage is around 2%, which I am very impressed by.

 The waveguide is extremely subtle within the lens | Photo by Road to VR

Aside from the glasses being a little chonkier than normal glasses, the social acceptability here is very high—even more so because you don’t need to constantly talk to the glasses to use them, or even hold your hand up to tap the temple. Instead, the so-called Neural Band (based on EMG sensing), allows you to make subtle inputs while your hand is down at your side.

The Neural Band is an Essential Piece to the Input Puzzle

Photo by Road to VR

The included Neural Band is just as important to these new glasses as the display itself—and it’s clear that this will be equally important to future AR glasses.

To date, controlling XR devices has been done with controllers, hand-tracking, or voice input. All of these have their pros and cons, but none are particularly fitting for glasses that you’d wear around in public; controllers are too cumbersome, hand-tracking requires line of sight which means you need to hold your hands awkwardly out in front of you, and voice is problematic both for privacy and certain social settings where talking isn’t appropriate.

The Neural Band, on the other hand, feels like the perfect input device for all-day wearable glasses. Because it’s detecting muscle activity (instead of visually looking for your fingers) no line-of-sight is needed. You can have your arm completely to your side (or even behind your back) and you’ll still be able to control the content on the display.

The Neural Band offers several ways to navigate the UI of the Ray-Ban Display glasses. You can pinch your thumb and index finger together to ‘select’; pinch your thumb and middle finger to ‘go back’; and swipe your thumb across the side of your finger to make up, down, left, and right selections. There are a few other inputs too, like double-tapping fingers or pinching and rotating your hand.
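Meta hasn’t published a gesture API, but as a mental model, here’s my own rough sketch of how those gestures might map onto discrete UI events (all names invented):

    # My own illustrative mapping of the Neural Band gestures described
    # above onto UI actions; Meta hasn't published an API, so every name
    # here is invented.
    from enum import Enum, auto

    class Gesture(Enum):
        THUMB_INDEX_PINCH = auto()        # 'select'
        THUMB_MIDDLE_PINCH = auto()       # 'go back'
        THUMB_SWIPE_UP = auto()           # swipes across the side of the finger
        THUMB_SWIPE_DOWN = auto()
        THUMB_SWIPE_LEFT = auto()
        THUMB_SWIPE_RIGHT = auto()
        THUMB_MIDDLE_DOUBLE_TAP = auto()  # display on/off, per the hands-on above

    ACTIONS = {
        Gesture.THUMB_INDEX_PINCH: "select",
        Gesture.THUMB_MIDDLE_PINCH: "back",
        Gesture.THUMB_SWIPE_UP: "focus_up",
        Gesture.THUMB_SWIPE_DOWN: "focus_down",
        Gesture.THUMB_SWIPE_LEFT: "focus_left",
        Gesture.THUMB_SWIPE_RIGHT: "focus_right",
        Gesture.THUMB_MIDDLE_DOUBLE_TAP: "toggle_display",
    }

    def handle(gesture: Gesture) -> str:
        # The band classifies EMG signals into gestures on-device; the UI
        # would only ever see discrete, already-classified events like these.
        return ACTIONS[gesture]

    print(handle(Gesture.THUMB_INDEX_PINCH))  # -> "select"

The key design point is that the glasses never see your hand at all: the band turns muscle activity into a small vocabulary of events, which is why line of sight doesn’t matter.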

As of now, you navigate the Ray-Ban Display glasses mostly by swiping around the interface and selecting. In the future, having eye-tracking on-board will make navigation even more seamless, by allowing you to simply look and pinch to select what you want. The look-and-pinch method, combined with eye-tracking, already works great on Vision Pro. But it still misses your pinches sometimes if your hand isn’t in the right spot, because the cameras can’t always see your hands at quite the right angle. If I could use the Neural Band for pinch detection on Vision Pro, I absolutely would—that’s how well it seems to work already.

While it’s easy enough to swipe and select your way around the Ray-Ban Display interface, the Neural Band has the same downside that all the aforementioned input methods have: text input. But maybe not for long.

In my hands-on with the Ray-Ban Display, the device was still limited to dictation input. So replying to a message or searching for a point of interest still means talking out loud to the headset.

However, Meta showed me a demo (that I didn’t get to try myself) of being able to ‘write’ using your finger against a surface like a table or your leg. It’s not going to be nearly as fast as a keyboard (or dictation, for that matter), but private text input is an important feature. After all, if you’re out in public, you probably don’t want to be speaking all of your message replies out loud.

The ‘writing’ input method is said to be a forthcoming feature, though I didn’t catch whether they expected it to be available at launch or sometime after.

On the whole, the Neural Band seems like a real win for Meta. Not just for making the Ray-Ban Display more useful; it seems like the ideal input method for future glasses with full input capabilities.

Photo by Road to VR

And it’s easy to see a future where the Neural Band becomes even more useful by evolving to include smartwatch and fitness tracking functions. I already wear a smartwatch most of the day anyway… making it my input device for a pair of smart glasses (or AR glasses in the future) is a smart approach.

Little Details Add Up

One thing I was not expecting to be impressed by was the charging case of the Ray-Ban Display glasses. Compared to the bulky charging cases of all of Meta’s other smart glasses, this clever origami-like case folds down flat to take up less space when you aren’t using it. It goes from being big enough to accommodate a charging battery and the glasses themselves, down to something that can easily go in a back pocket or slide into a small pocket in a bag.

This might not seem directly relevant to augmented reality, but it’s actually more important than you might think. It’s not like Meta invented a folding glasses case, but it shows that the company is really thinking about how this kind of device will fit into people’s lives. An analog to this for their MR headsets would be including a charging dock with every headset—something they’ve yet to do.

Now with a display on-board, Meta is also repurposing the real-time translation feature as a sort of ‘closed captioning’. Instead of translating to another language, you can turn on the feature and see a real-time text stream of the person in front of you, even if they’re already speaking your native language. That’s an awesome capability for those who are hard of hearing.

Live Captions in Meta Ray-Ban Display Glasses | Image courtesy Meta

And even if you aren’t, you might still find it useful… Meta says the beam-forming microphones in the Ray-Ban Display can focus on the person you’re looking at while ignoring other nearby voices. They showed me a demo of this in action in a room with one person speaking to me and three others having a conversation nearby to my left. It worked relatively well, but it remains to be seen how it will fare in louder environments like a noisy restaurant or a club with thumping music.

Meta wants to eventually pack full AR capabilities into glasses of a similar size. And even if they aren’t there yet, getting something out the door like the Ray-Ban Display gives them the opportunity to explore, iterate—and hopefully perfect—many of the key ‘lifestyle’ factors that need to be in place for AR glasses to really take off.


Disclosure: Meta covered lodging for one Road to VR correspondent to attend an event where information for this article was gathered.

Filed Under: Feature, News, Smart Glasses, XR Industry News

Meta Reveals Next-Gen Ray-Ban & New Oakley Vanguard Smart Glasses

September 17, 2025 From roadtovr

Undoubtedly the smart glasses headliner of Meta Connect this year was the new $800 Meta Ray-Ban Display Glasses, which pack a single display into a familiar Wayfarer-style package. Alongside them though, Meta showed off two new smart glasses: the Oakley Meta Vanguard and the next generation of Ray-Ban Meta.

Oakley Meta Vanguard – $499 (available Oct 21)

Oakley Meta Vanguard | Image courtesy Meta

Before Meta and Essilor Luxottica released Oakley Meta HSTN in July, we were definitely envisioning something more like the new Oakley Meta Vanguard. But better late than never, as Meta has just unveiled the sleek, blade-like frames it says are “built for high-intensity sports.”

Rated IP67 for dust and water resistance, Oakley Meta Vanguard is supposedly durable enough for sweaty workouts or rainy rides, targeting sports like cycling, snowboarding, and running.

Oakley Meta Vanguard | Image courtesy Meta

Notably, like many of its traditional specs, the new smart glasses use Oakley’s Three-Point Fit system, which includes three interchangeable nose pads for a more secure fit, with Meta noting the frames are optimized for use with cycling helmets and hats.

They also include an onboard 12MP, 122° wide-angle camera sensor for capturing video up to 3K resolution, with modes including Slow Motion, Hyperlapse, and adjustable image stabilization.

And just like Ray-Ban Meta, it features open-ear speakers, notably rated at six decibels louder than the earlier Oakley Meta HSTN, along with a wind-optimized five-mic array to provide clear audio for taking calls, using voice commands, or listening to music while training.

The newest Oakleys also integrate with Garmin, Strava, Apple Health, and Android Health Connect, delivering post-workout summaries and real-time stats through Meta AI. Athletes can check heart rate, progress, or other data hands-free with voice prompts.

Oakley Meta Vanguard | Image courtesy Meta

Available in four frame/lens color combinations, the glasses weigh 66g and offer up to nine hours of mixed use (or six hours of music) on a single charge, with an additional 36 hours via the charging case. Quick charging brings the glasses to 50% in just 20 minutes, Meta says.

Like all of the other Meta smart glasses on offer, they include 32GB of storage for over 1,000 photos or 100 short videos, the company says.

Since they’re built for high-intensity sports, Meta is also introducing replaceable lenses, starting at $85. Here are all four models available for pre-order, including the lenses you’ll be able to mix and match later.

  • Oakley Meta Vanguard Black with Prizm™ 24K
  • Oakley Meta Vanguard White with Prizm™ Black
  • Oakley Meta Vanguard Black with Prizm™ Road
  • Oakley Meta Vanguard White with Prizm™ Sapphire

Oakley Meta Vanguard is now available for pre-order through Meta or Oakley, priced at $499 and launching October 21st.

They’ll be available in the US, Canada, UK, Ireland, France, Italy, Spain, Austria, Belgium, Australia, Germany, Sweden, Norway, Finland, Denmark, Switzerland, and the Netherlands. Meta says they should also eventually launch in Mexico, India, Brazil, and the United Arab Emirates later this year.

Ray-Ban Meta (Gen 2) – Starting at $379 (Now Available)

Ray-Ban Meta Wayfarer (Gen 2) | Image courtesy Meta

While the company considers its next Ray-Ban Meta glasses “Gen 2”, they’re technically the third generation, following the release of Ray-Ban Stories (under the Facebook brand) in 2021 and Ray-Ban Meta in 2023.

Naming scheme aside, the latest Ray-Ban Meta smart glasses deliver the same improvements seen in Oakley Meta HSTN, and essentially the same base functionality. They can play music, do real-time translation, and take hands-free calls, but also offer better photo and video capture than their predecessor.

Its ultrawide 12MP camera sensor is rated for photo capture up to 3,024 × 4,032 pixels and video at 1200p/60 FPS, 1440p/30 FPS, and 3K/30 FPS—all of which are capped at three minutes in length.

Ray-Ban Meta Wayfarer (Gen 2) | Image courtesy Meta

Like Oakley Meta HSTN, Ray-Ban Meta (Gen 2) boasts up to eight hours of continuous use and an additional 48 hours from the charging case, plus a quick charge to 50% in 20 minutes in the case.

And it probably goes without saying, but all of Meta’s smart glasses make heavy use of its own Meta AI, which includes things like voice search queries (“Hey Meta!”), reading QR codes, suggesting recipes, saving notes, etc.

Ray-Ban Meta Skyler (Gen 2) | Image courtesy Meta

Additionally, the device includes Bluetooth 5.3, Wi-Fi 6, 32GB of storage, and an IPX4 water-resistance rating for light rain or splashes.

And like the 2023 model, the new Ray-Ban Meta smart glasses offer loads of frame and lens combinations: 27 in total across the Wayfarer and Skyler models, including options for large or low nose bridges.

It’s also getting a price bump over the first gen, which launched in 2023 for $299. Ray-Ban Meta (Gen 2) starts at $379 for standard lens options, and will be available with polarized lenses ($409), Transitions lenses ($459), and prescription lenses (pricing varies).

You can find all of those models and lens combinations starting today over at Meta and Ray-Ban.com.


We’re currently on the ground at Meta Connect this year, so check back soon for all things XR.

Filed Under: AR Development, ar industry, News, XR Industry News

VITURE Launches ‘Luma Ultra’ AR Glasses with Sony Micro-OLED Panels

September 17, 2025 From roadtovr

VITURE has launched its Luma Ultra AR glasses, which pack Sony’s latest micro-OLED panels along with spatial and hand gesture tracking thanks to an onboard sensor array.

Priced at $600, and now shipping worldwide, Viture Luma Ultra targets prosumers, enterprise users, and business professionals looking for a personal, on-the-go workspace.

Notably, these aren’t standalone devices, instead relying on PC, console and mobile tethering for compute, which means they integrate as external (albeit very personal) monitors.

Image courtesy VITURE

Luma Ultra is said to include a 52-degree field of view (FOV) and Sony’s latest micro-OLED panels with resolution up to 1200p and 1,250 nits peak brightness. Two depth-sensing cameras are onboard, in addition to a single RGB camera, for spatial tracking and hand gesture input.

Unlike some AR glasses, which rely on slimming waveguide optics, Luma Ultra uses what’s called a ‘birdbath’ optic system, which uses a curved, semi-transparent mirror to project the digital image into the user’s eyes. It’s typically cheaper and easier to manufacture, and can also reach higher brightness at the expense of more bulk and weight.

Image courtesy VITURE

The device also includes an electrochromic film for tint control, myopia adjustments up to -4.0 diopters, and support for 64 ± 6mm interpupillary distance (IPD).

The company also launched a slate of AR glasses alongside it, targeted at consuming traditional media, positioning Viture Luma Ultra as the company’s flagship device.

Check out the full lineup and specs below:

Image courtesy VITURE

Viture Luma ($400), Luma Pro ($500), and Luma Ultra ($600) are all estimated to ship within two weeks of ordering, with the next device, Luma Beast ($550), slated to ship sometime in November.

None of the devices above besides Luma Ultra include spatial tracking, due to the lack of depth sensors. However, Luma Beast is said to come with the same micro-OLED displays as Luma Ultra at a slightly larger 58-degree FOV, plus an auto-adjusting electrochromic film for tint control.

This follows news of Viture’s latest funding round, which brought the San Francisco-based XR glasses company $100 million in Series B financing. Viture says the funding will aid in the global expansion of its consumer XR glasses.

Filed Under: AR Development, ar industry, News, XR Industry News

Meta Leaks Next-gen Smart Glasses with Display Ahead of Connect This Week

September 16, 2025 From roadtovr

It seems Meta has a new generation of smart glasses to show off at Connect this week, and it appears we’ve just gotten an eyeful of the long-rumored version with a built-in display, previously codenamed ‘Hypernova’.

As noted by XR analyst Brad Lynch, Meta seems to have leaked the next slate of smart glasses built in collaboration with Essilor Luxottica.

The video, which was unlisted on Meta’s YouTube channel, has since been deleted.

New Meta smartglasses with display leaked via an unlisted video on their own YouTube channel

Along with their EMG wristband, and other smartglass models they plan to show off this week at Meta Connect pic.twitter.com/8tTlmaeQ0a

— SadlyItsDadley (@SadlyItsBradley) September 15, 2025

The video shows off four main models: the recently released Oakley Meta HSTN, the rumored Oakley Meta Sphaera, what appears to be the next-gen version of Ray-Ban Meta, and the rumored variant with a display, which also comes with an electromyography (EMG) based wristband for input.

Meta also showed off a few use cases for the new display-clad smart glasses: typing on the back of a laptop to send a message, following turn-by-turn directions, identifying an object using AI, and real-time text translation.

Image courtesy Brad Lynch

Notably, prior to its unintentional unveiling, it was thought the display model would not be built in collaboration with Essilor Luxottica, and would instead be marketed under the Meta name, owing to the ‘Celeste’ branding seen in previous leaks. It appears, however, the company is co-opting a slightly larger Ray-Ban Wayfarer design and appending the name ‘Display’.

What’s more, the new smart glasses with heads-up display are also shown with the previously reported EMG wristband, which is meant to control the device’s UI. Meta has previously shown the wristband working with its prototype Orion AR glasses; it picks up movement in the wrist without needing the camera line of sight that hand-tracking on Meta Quest 3 requires.

There’s no confirmed pricing info yet, however a previous report from Bloomberg’s Mark Gurman maintains the display model and EMG wristband controller could cost “about $800.”

Meta Connect runs September 17th and 18th, where we expect to learn more about release dates and pricing for all of the company’s newest smart glasses.


We will be at Meta Connect this week, so make sure to check back soon for all of the latest in Meta’s XR hardware and software.

Filed Under: AR Development, News, XR Industry News
