
VRSUN

Hot Virtual Reality News

XR Industry News

‘MultiBrush’ Studio Secures $4.5M Grant to Promote Positive VR Experiences for Elders

November 4, 2025 From roadtovr

Rendever, the company behind Tilt Brush-based multiplayer Quest app MultiBrush (2022), has secured nearly $4.5 million in grant funding from the U.S. National Institutes of Health (NIH), which the company says it will use to bring its elder-focused VR experiences to the home care market.

The studio says in an announcement the latest funding includes $3.8 million for the Thrive At Home Program and an additional grant to build a caregiver support network in VR.

“These funds will pave the way for Rendever to bring their technology to the large majority of individuals and caregivers who are aging in place and lacking in structural social support,” the studio says.

Rendever is currently partnered with the University of California, Santa Barbara, research organization RAND, and home care service Right at Home.

The company says these organizations will help it conduct studies to evaluate the effectiveness of VR technology in building relationships across living environments. The aim is to reduce social isolation, improve mental health, and enhance overall well-being in elders. Additionally, Rendever says its studies gauging the impact of caregiving tools, including its recent Dementia & Empathy training program, will continue as a result.

“Our Phase II trial has shown the power of VR to effectively build and enhance family relationships across distances – even across country lines. The future of aging depends on technology that effectively reshapes how we experience these core parts of the human experience as we get older,” said Kyle Rand, Rendever CEO. “We know there’s nothing more holistically impactful than our social health. Over the next three years, we’ll work across the industry to build the next generation of community infrastructure that delivers real happiness and forges new relationships, all while driving meaningful health outcomes.”

While Rendever currently offers VR-assisted therapy for both senior living and healthcare facilities, the company is now assembling a beta pilot in certain US regions to test its forthcoming in-home offering.

Additionally, the company announced it’s adding Sarah Thomas, an expert on aging and a venture partner in the AgeTech industry, to its Board of Directors.

Filed Under: News, VR Investment, XR Industry News

Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses

October 29, 2025 From roadtovr

Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1, the company’s sensor-packed research glasses, back in 2020, using them internally to train various AR-focused perception systems before releasing them to third-party researchers across 300 labs in 27 countries in 2024.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any type, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

Ahead of what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, and Meta’s custom low-power co-processor for real-time perception, and compares the capabilities of Gen 1 and Gen 2.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.

My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does with its various generations of Ray-Ban Meta or Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect—even if they’re substantively similar on the face of things. They’re squarely considered ‘research glasses’ by the company.

Cool, but why? Why does a company that already makes smart glasses with and without displays, as well as impressive prototype AR glasses, need to put out what’s substantively the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework that research teams will build on, but it’s also doing it at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering projects based on merit. In other words, it gets to guide research without really having to interface with what will likely be substantially more than 300 teams, all of whom will use the glasses to solve problems in how humans can more fluidly interact with an AI system that can see, hear, and know a good deal more about your surroundings than you might at any given moment.

AI is also growing faster than supply chains can keep up, which I think more than necessitates an artisanal pair of smart glasses so teams can get to grips with what will drive the future of AR glasses—the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of way back in 2013 when it co-released the HTC First, which at the time was colloquially called the ‘Facebook phone’.
The device was a flop, partly because the hardware was lackluster, but mostly, and I don’t think I’m alone in saying so, because people didn’t want a Facebook phone in their pockets at any price when the ecosystem had so many other (clearly better) choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

Filed Under: AR Development, News, XR Industry News

Researchers Propose Novel E-Ink XR Display with Resolution Far Beyond Current Headsets

October 27, 2025 From roadtovr

A group of Sweden-based researchers proposed a novel e-ink display solution that could make way for super compact, retina-level VR headsets and AR glasses in the future.

The News

Traditional emissive displays are shrinking, but they face physical limits; smaller pixels tend to emit less uniformly and provide less intense light, which is especially noticeable in near-eye applications like virtual and augmented reality headsets.

In a recent research paper published in Nature, a team of researchers presents what it calls a “retinal e-ink display,” which it hopes offers a new solution quite unlike the displays seen in modern VR headsets, which are increasingly adopting micro-OLEDs to reduce size and weight.

The paper was authored by researchers affiliated with Uppsala University, Umeå University, University of Gothenburg, and Chalmers University of Technology in Gothenburg: Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Österlund, Giovanni Volpe, and Kunli Xiong.

While conventional e-paper has struggled to reach the resolution necessary for realistic, high-fidelity images, the team proposes a new form of e-paper featuring electrically tunable “metapixels” only about 560 nanometres wide.

This promises a pixel density of over 25,000 pixels per inch (PPI)—an order of magnitude denser than the displays currently used in headsets like Samsung Galaxy XR or Apple Vision Pro, which have a PPI of around 4,000.
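For a rough sense of how those PPI figures relate to pixel size, here’s a minimal sketch (in Python) converting pixel pitch to pixels per inch. The ~1 µm and ~6.4 µm pitch values are my own assumptions, chosen to be consistent with the figures quoted above rather than taken from the paper:

MM_PER_INCH = 25.4

def ppi_from_pitch_um(pitch_um):
    # Pixels per inch for a given center-to-center pixel pitch in micrometers.
    return MM_PER_INCH * 1000 / pitch_um

print(round(ppi_from_pitch_um(1.0)))  # ~25,400 PPI, in line with "over 25,000 PPI"
print(round(ppi_from_pitch_um(6.4)))  # ~3,969 PPI, in line with today's ~4,000 PPI headset panels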

Image courtesy Nature

As the paper describes it, each metapixel is made from tungsten trioxide (WO₃) nanodisks that undergo a reversible insulator-to-metal transition when electrically reduced. This process dynamically changes the material’s refractive index and optical absorption, allowing nanoscale control of brightness and color contrast.

In effect, when lit by ambient light, the display can create bright, saturated colors from pixels far thinner than a human hair, as well as deep blacks, with reported optical contrast ratios around 50%, a reflective equivalent of high dynamic range (HDR).

And the team says it could be useful in both AR and VR displays. The figure below shows a conceptual optical stack for both applications, with Figure A representing a VR display, and Figure B showing an AR display.

Image courtesy Nature

Still, there are some noted drawbacks. Despite the sheer resolution, the display delivers full-color video at “more than 25 Hz,” which is significantly lower than what VR users need for comfortable viewing. In addition to the relatively low refresh rate, the researchers note the retinal e-paper requires further optimization in color gamut, operational stability, and lifetime.

“Lowering the operating voltage and exploring alternative electrolytes represent promising engineering routes to extend device durability and reduce energy consumption,” the paper explains. “Moreover, its ultra-high resolution also necessitates the development of ultra-high-resolution TFT arrays for independent pixel control, which will enable fully addressable, large-area displays and is therefore a critical direction for future research and technological development.”

And while the e-paper display itself is remarkably low-powered, packing in the graphical compute to put those metapixels to work will also be a challenge. It’s a good problem to have, but a problem nonetheless.

My Take

At least as the paper describes it, the underlying tech could produce XR displays of a size and pixel density we’ve never seen before. And reaching the limits of human visual perception is one of those holy grail moments I’ve been waiting for.

Getting that refresh rate up well beyond 25 Hz is going to be extremely important though. As the paper describes it, 25 Hz is good for video playback, but driving an immersive VR environment requires at least 60 Hz refresh to be minimally comfortable. 72 Hz is better, and 90 Hz is the standard nowadays.

I’m also curious to see the e-paper display stacked up against lower resolution micro-OLED contemporaries, if only to see how that proposed ambient lighting can achieve HDR. I have a hard time wrapping my head around it. Essentially, the display’s metapixels absorb and scatter ambient light, much like Vantablack does—probably something that needs to be truly seen in person to be believed.

Healthy skepticism aside, I find it truly amazing we’ve even arrived at the conversation in the first place: we’re at the point where XR displays could recreate reality, at least as far as your eyes are concerned.

Filed Under: AR Development, News, VR Development, XR Industry News

Former Oculus Execs’ AI Smart Glasses Startup ‘Sesame’ Raises $250M Series B Funding

October 24, 2025 From roadtovr

Sesame, an AI and smart glasses startup founded by former Oculus execs, raised $250 million in Series B funding, which the company hopes will accelerate its voice-based AI.

The News

As first reported by TechCrunch, lead investors in Sesame’s Series B include Spark Capital and Sequoia Capital, bringing the company’s overall funding to $307.6 million, according to Crunchbase data.

Exiting stealth earlier this year, Sesame was founded by Oculus co-founder and former CEO Brendan Iribe, former Oculus hardware architect Ryan Brown, and Ankit Kumar, former CTO of AR startup Ubiquity6. Additionally, Oculus co-founder Nate Mitchell announced in June he was joining Sesame as Chief Product Officer, which he noted was to “help bring computers to life.”

Image courtesy Sesame

Sesame is currently working on an AI assistant along with a pair of lightweight smart glasses. Its AI assistant aims to be “the perfect AI conversational partner,” Sequoia Capital says in a recent post.

“Sesame’s vision is to build an ambient interface that is always available and has contextual awareness of the world around you,” Sequoia says. “To achieve that, Sesame is creating their own lightweight, fashion-forward AI-enabled glasses designed to be worn all day. They’re intentionally crafted—fit for everyday life.”

Sesame is currently taking signups for beta access to its AI assistants Miles and Maya in an iOS app, and also has a public preview showcasing a ‘call’ function that allows you to speak with the chatbots.

My Take

Love it or hate it, AI is going to be baked into everything in the future, as contextually aware systems hope to bridge the gap between user input and the expectation of timely and intelligent output. That’s increasingly important when the hardware doesn’t include a display, requiring the user to interface almost entirely by voice.

Some things to watch out for: if the company does commercialize a pair of smart glasses to champion its AI assistant, it will be competing for some pretty exclusive real estate that companies like Meta, Google, Samsung, and Apple (still unconfirmed) are currently gunning for. That puts Sesame at somewhat of a disadvantage if it hopes to go it alone, but not if it’s hoping for a timely exit into the coming wave of smart glasses by being acquired by any of the above.

There’s also some pretty worrying precedent in the rear-view mirror: Humane’s AI Pin and the Friend AI necklace, both of which were publicly lambasted for essentially being hardware that could just as easily have been apps on your smartphone.

Granted, Sesame hasn’t shown off its smart glasses hardware yet, so there’s no telling what the company hopes to bring to the table outside of an easy-to-wear pair of off-ear headphones for all-day AI duty. That, to me, would be the worst-case scenario, as Meta refines its own smart glasses in partnership with EssilorLuxottica, Google releases Android XR frames with Gentle Monster and Warby Parker, Samsung readies its own Android XR glasses, and Apple does… something. We don’t know yet.

Whatever the case, I’m looking forward to it, if only based on the company’s combined experience in XR, which I’d argue any startup would envy as the race to build the next big computing platform truly takes off.

Filed Under: AR Development, AR Investment, News, XR Industry News

Amazon is Developing Smart Glasses to Allow Delivery Drivers to Work Hands-free

October 23, 2025 From roadtovr

Amazon announced it’s developing smart glasses for its delivery drivers, which include a display for real-time navigation and delivery instructions.

The News

Amazon announced the news in a blog post, which partially confirms a recent report from The Information, which alleged that Amazon is developing smart glasses both for its delivery drivers and consumers.

The report, released in September, maintained that Amazon’s smart glasses for delivery drivers will be bulkier and less sleek than the consumer model. Codenamed ‘Jayhawk’, the delivery-focused smart glasses are expected to roll out as soon as Q2 2026, with an initial production run of 100,000 units.

Image courtesy Amazon

Amazon says the smart glasses were designed and optimized with input from hundreds of delivery drivers, and include the ability to identify hazards, scan packages, capture proof of delivery, and navigate by serving up turn-by-turn walking directions.

The company hasn’t confirmed whether the glasses’ monochrome green heads-up display is monoscopic or stereoscopic, though images suggest it features a single waveguide in the right lens.

Moreover, the glasses aren’t meant to be used while driving; Amazon says they “automatically activate” when the driver parks their vehicle, and only then does the driver receive instructions, ostensibly to reduce the risk of driver distraction.

In addition to the glasses, the system also features what Amazon calls “a small controller worn in the delivery vest that contains operational controls, a swappable battery ensuring all-day use, and a dedicated emergency button to reach emergency services along their routes if needed.”

Additionally, Amazon says the glasses support prescription lenses along with transitional lenses that automatically adjust to light.

As for the reported consumer version, it’s possible Amazon may be looking to evolve its current line of ‘Echo Frames’ glasses. First introduced in 2019, Echo Frames support AI voice control, music playback, calls, and Alexa smart home control, although they notably lack any sort of camera or display.

My Take

I think Amazon has a good opportunity to dogfood (aka, use its own technology) here on a pretty large scale—probably much larger than Meta or Google could initially with their first generation of smart glasses with displays.

That said, gains made in enterprise smart glasses can be difficult to translate to consumer products, which will necessarily include more functions and apps, and likely require more articulated input—all of the things that can make or break any consumer product.

Third-gen Echo Frames | Image courtesy Amazon

Amazon’s core strength, though, has generally been less about high-end innovation and more about creating cheap, reliable hardware that feeds into recurring revenue streams: Kindle, Fire TV, Alexa products, and so on. Essentially, if Amazon can’t immediately figure out a way to make consumer smart glasses feed into its existing ecosystems, I wouldn’t expect the company to put its full weight behind the device, at least not initially.

After the 2014 failure of the Fire Phone, Amazon may still be gun-shy about diving head-first into a segment it has near-zero experience in. And I really don’t count Echo Frames, because they’re primarily just Bluetooth headphones with Alexa support baked in. Still, real smart glasses with cameras and displays represent a treasure trove of data that the company may not be so keen to pass up.

Using object recognition to peep into your home or otherwise follow you around could allow Amazon to better target personalized suggestions, figure out brand preferences, and even track users as they shop at physical stores. Whatever the case, I bet the company will give it a go, if only to occupy the top slot when you search “smart glasses” on Amazon.

Filed Under: AR Development, News, XR Industry News

Samsung to Launch Project Moohan XR Headset at Galaxy Event on October 21st

October 15, 2025 From roadtovr

Samsung announced it’s holding a Galaxy Event on October 21st, which will feature Project Moohan, the company’s long-awaited Apple Vision Pro competitor.

The News

The livestream event is slated to take place on October 21st at 10PM ET, and is said to focus on “the future of AI” and Project Moohan.

“Come meet the first official device on Android XR—Project Moohan,” the video’s description reads.

There’s no official word yet on the headset’s price, or even its official name. A previous report from South Korea’s Newsworks suggests it could cost between ₩2.5 million and ₩4 million South Korean won, or roughly $1,800 to $2,900 USD.

The company’s event site does however allow users to register for a $100 credit, valid when purchasing qualifying Galaxy products.

We’re hoping to learn more about the headset’s specs and promised VR motion controllers, which Samsung has yet to reveal.

Since our previous hands-on last year, we’ve learned Project Moohan includes a Qualcomm Snapdragon XR2+ Gen 2 chipset, dual micro‑OLED panels, pancake lenses, automatic interpupillary distance (IPD) adjustment, support for eye and hand-tracking, an optional magnetically attached light shield, and a removable external battery pack.

My Take

Personally, the teaser doesn’t really serve up the sort of “wow” factor I was hoping for, as it highlights some fairly basic stuff seen in XR over the past decade. Yes, it really has been that long.

While I don’t expect Moohan to stop at a Google Earth VR-style map and immersive video—neat as those things are—it’s interesting to me the company thought those two things were worthy additions to a launch day teaser for its first XR headset since the release of Samsung Odyssey+ in 2018.

Samsung Odyssey+ | Image courtesy Samsung

As the first official headset supporting Google’s Android XR operating system though, I expect the event will also focus on Moohan’s ability to not only run the standard library of Android apps and native XR content, but also handle XR productivity, provided Samsung really wants to go toe-to-toe with Vision Pro.

By all accounts, Moohan is a capable XR headset, but I wonder how much gas Samsung will give it now that Apple is reportedly shifting priorities to focus on Meta-style smart glasses instead of developing a cheaper and lighter Vision Pro. And while Apple is still apparently moving ahead with Vision Pro’s M5 hardware refresh, which is rumored to release soon, that update will mostly appeal to enterprise users, leaving Samsung to navigate a potentially awkward middle ground between Meta and Apple.

Moohan’s market performance may also dictate how other manufacturers adopt Android XR. And there’s worrying precedent: Google did something similar with the Lenovo Mirage Solo in 2018, the first standalone headset to support its Android-based Daydream platform, before Google pulled the plug due to poor engagement. Here’s hoping history doesn’t repeat itself.

Filed Under: News, VR Development, XR Industry News

Lynx Teases Next Mixed Reality Headset for Enterprise

October 13, 2025 From roadtovr

Lynx teased its next mixed reality headset, which is aimed at enterprise and professional users for tasks like training and remote assistance.

The News

At MicroLED Connect last month, Lynx CEO Stan Larroque announced he aimed to reveal the company’s next mixed reality standalone sometime in mid-November.

However, Somnium CEO Artur Sychov, a major investor in the company, beat Lynx to the punch by posting a cropped image of the France-based company’s next device.

I will just say this – Lynx next headset news is going to be wild… 💣

Sorry @stanlarroque, I can’t hold myself not to tease at least something… 😬😅

October & November 2025 will be 🔥 pic.twitter.com/XidrdTqqlp

— Artur Sychov ᯅ (@ASychov) October 10, 2025

In response, Larroque posted the full image, seen above. Here’s a version with the white balance turned up for better visibility, courtesy MRTV’s Sebastian Ang:

Modified image courtesy Sebastian Ang

There’s still a lot to learn, including specs and the device’s official name. From the image, we can tell at least two things: the headset has a minimum of four camera sensors, now positioned at the corners of the device à la Quest 2, and an ostensibly more comfortable headstrap that cups the back of the user’s head.

What’s more, Lynx announced late last year that it intends to integrate Google’s forthcoming Android XR operating system into its next headset; the operating system will also power Samsung’s Project Moohan and forthcoming XR glasses from XREAL. Lynx hasn’t released any update on progress, so we’re still waiting to hear more.

Lynx R-1 | Image courtesy Lynx

Notably, Lynx R-1, which was initially positioned to target both consumers and professional users through its 2021 Kickstarter campaign that brought in $800,000 in crowdfunding, only concluded shipping earlier this year.

According to Larroque’s talk at MicroLED Connect last month, it appears the company is now focusing squarely on the enterprise sector with its next hardware release, for tasks like training and remote assistance.

My Take

Lynx R-1’s unique “4-fold catadioptric freeform prism” optics allow for a compact focal length, putting the displays flush with the lenses and providing a 90-degree field of view (FOV). While pancake lenses are generally thinner and lighter, R-1’s optics have comparably better light throughput, which is important for mixed reality tasks.

Image courtesy Lynx

For a startup that’s weathered an admittedly “excruciating” fundraising environment, making the right hardware choices in the follow-up will be key, though.

My hunch is the prospective ‘Lynx R-2’ headset will probably keep the same optical stack to save on development and manufacturing costs, and mainly push upgrades to the processor and display, which are likely more important to the sort of enterprise customers Lynx is targeting anyway.

As it is, Lynx R-1 is powered by the Qualcomm Snapdragon XR2 chipset, initially released in 2019 and the same chip used in Quest 2, so an upgrade there is well overdue. Its 1,600 × 1,600 per-eye LCDs also feel similarly dated.

While an FOV larger than 90 degrees is great, I’d argue that for enterprise hardware that isn’t targeting simulators, clarity and pixel density are probably more important. More info on Lynx’s next-gen headset is due sometime in November, so I’d expect to learn more then.

Filed Under: News, VR Development, XR Industry News

Meta Ray-Ban Display Repairability is Predictably Bad, But Less So Than You Might Think

October 9, 2025 From roadtovr

iFixit got their hands on a pair of Meta Ray-Ban Display smart glasses, so we finally get to see what’s inside. Is it repairable? Not really. But if you can somehow find replacement parts, you could at least potentially swap out the battery.

The News

Meta launched the $800 smart glasses in the US late last month, marking the company’s first pair with a heads-up display.

Serving up a monocular display, Meta Ray-Ban Display allows for basic app interaction beyond the standard stuff seen (or rather ‘heard’) in Meta’s audio-only Ray-Ban Meta and Oakley Meta glasses. It can do things like let you view and respond to messages, get turn-by-turn walking directions, and even use the display as a viewfinder for photos and video.

And iFixit shows off in their latest video that cracking into the glasses and attempting repairs is pretty fiddly, but not entirely impossible.

Meta Ray-Ban Display’s internal battery | Image courtesy iFixit

The first thing you’d probably eventually want to do is replace the battery, which requires splitting the right arm down a glued seam, a common theme with the entire device. In getting to the 960 mWh internal battery, which is slightly larger than the one in the Oakley Meta HSTN, you’ll also be sacrificing the device’s IPX4 splash resistance rating.

And the work is fiddly, but iFixit manages to go all the way down to the dual speakers, motherboard, Snapdragon AR1 chipset, and liquid crystal on silicon (LCoS) light engine, the latter of which was captured with a CT scanner to show off just how micro Meta has managed to get its most delicate part.

Granted, this is a teardown and not a repair guide as such. All of the components are custom, and replacement parts aren’t available yet. You would also need a few specialized tools and an appetite for the risk of destroying a pretty hard-to-come-by device.

For more, make sure to check out iFixit’s full article, which includes images and detailed info on each component. You can also see the teardown in action in the full nine-minute video below.

My Take

Meta isn’t really thinking deeply about repairability when it comes to smart glasses right now, which isn’t exactly shocking. Like earbuds, smart glasses are all about miniaturization to hit an all-day wearable form factor, making a glued-shut plastic exterior a pretty clear necessity in the near term.

Another big factor: the company is probably banking on the fact that prosumers willing to shell out $800 this year will likely be happy to do the same when Gen 2 eventually arrives. That could be in two years, but I’m betting less if the device performs well enough in the market. After all, Meta sold Quest 2 in 2020 just one year after releasing the original Quest, so I don’t see why it wouldn’t do the same here.

That said, I don’t think we’ll see any real degree of repairability in smart glasses until we get to the sort of sales volumes currently seen in smartphones. And that’s just for a baseline of readily available replacement parts, third-party or otherwise.

So while I definitely want a pair of smart glasses (and eventually AR glasses) that look indistinguishable from standard frames, that also kind of means I have to be okay with eventually throwing away a perfectly cromulent pair of specs just because I don’t have the courage to open it up, or know anyone who does.

Filed Under: AR Development, News, XR Industry News

Meta Ray-Ban Display Waveguide Provider Says It’s Poised for Wide Field-of-view Glasses

September 30, 2025 From roadtovr

SCHOTT—a global leader in advanced optics and specialty glass—working with waveguide partner Lumus, is almost certainly the manufacturer of the waveguide optics in Meta’s Ray-Ban Display glasses. While the Ray-Ban Display glasses offer only a static 20° field-of-view, the company says its waveguide technology is also capable of supporting immersive wide field-of-view glasses in the future.

The News

Schott has secured a big win as perhaps the first waveguide maker to begin producing waveguides at consumer scale. While Meta hasn’t confirmed who makes the waveguides in the Ray-Ban Display glasses, Schott announced—just one day before the launch of Ray-Ban Display—that it was the “first company capable of handling geometric reflective waveguide manufacturing in [mass] production volumes.”

In anticipation of AR glasses, Schott has spent years investing in technology, manufacturing, and partnerships in an effort to set itself up as a leading provider of optics for smart glasses and AR glasses.

The company signed a strategic partnership with Lumus (the company that actually designs the geometric reflective waveguides) back in 2020. Last year the company announced the completion of a brand new factory which it said would “significantly enhance Schott’s capacity to supply high-quality optical components to international high-tech industries, including Augmented Reality (AR).”

Image courtesy Schott

Those investments now appear to be paying off. While there are a handful of companies out there with varying waveguide technologies and manufacturing processes, as the likely provider of the waveguides in the Ray-Ban Display glasses, Schott can now claim it has “proven mass market readiness regarding scalability;” something others have yet to do at this scale, as far as I’m aware.

“This breakthrough in industrial production of geometric reflective waveguides means nothing less than adding a crucial missing puzzle piece to the AR technology landscape,” said Dr. Ruediger Sprengard, Senior Vice President Augmented Reality at Schott. “For years, the promise of lightweight and powerful smart glasses available at scale has been out of reach. Today, we are changing that. By offering geometric reflective waveguides at scale, we’re helping our partners cross the threshold into truly wearable products, providing an immersive experience.”

As for the future, the company claims its geometric reflective waveguides will be able to scale beyond the small 20° field-of-view of the Ray-Ban Display glasses to immersive wide field-of-view devices.

“Compared to competing optical technologies in AR, geometric reflective waveguides stand out in light and energy efficiency, enabling device designers to create fashionable glasses for all-day use. These attributes make geometric reflective waveguides the best option for small FoVs, and the only available option for wide FoVs,” the company claims in its announcement.

Indeed, Schott’s partner Lumus has long demonstrated wider field-of-view waveguides, like the 50° ‘Lumus Maximus’ I saw as far back as 2022.

My Take

As the likely provider of waveguides for Ray-Ban Display, Schott & Lumus have secured a big win over competitors. From the outside, it looks like Lumus’ geometric reflective waveguides won out primarily due to their light efficiency. Most other waveguide technologies rely on diffractive (rather than reflective) optics, which have certain advantages but fall short on light efficiency.

Light efficiency is crucial because the microdisplays in glasses-sized devices must be both tiny and power-efficient. As displays get larger and brighter, they get bulkier, hotter, and more power-hungry. Using a waveguide with high light efficiency thus allows the displays to be smaller, cooler, and less power-hungry, which is critical considering the tiny space available.

Light and power demands also rise with field-of-view, since spreading the same light across a wider area reduces apparent brightness.
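As a back-of-the-envelope illustration of that point (my own first-order estimate, not a figure from Schott): with a fixed amount of light coming off the display, apparent brightness falls roughly with the angular area the image covers, so widening the field-of-view from 20° to a hypothetical 50° calls for several times more light.

def relative_light_needed(new_fov_deg, old_fov_deg):
    # First-order estimate: assumes a square image, small angles,
    # and a fixed total light output from the display.
    return (new_fov_deg / old_fov_deg) ** 2

print(relative_light_needed(50, 20))  # 6.25x more light to hold apparent brightness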

Schott says its waveguide technology is ready to scale to wider fields-of-view, but that probably isn’t what’s holding back true AR glasses (like the Orion Prototype that Meta showed off in 2024).

It’s not just wide field-of-view optics that need to be in place for a device like Orion to ship. There’s still the issue of battery and processing power. Orion was only able to work as it does because a lot of the computation and battery was offloaded onto a wireless puck. If Meta wants to launch full AR glasses like Orion without a puck (as they did with Ray-Ban Display), the company still needs smaller, more efficient chips to make that possible.

Additionally, display technology also needs to advance in order to actually take advantage of optics capable of projecting a wide field-of-view.

Ray-Ban Display glasses use a fairly low-resolution 0.36MP (600 × 600) display. It appears sharp because those pixels are spread across just 20°. As the field-of-view increases, both brightness and resolution need to increase to maintain the same image quality. Without much room to increase the physical size of the display, that means packing smaller pixels into the same tiny area while also making them brighter. As you can imagine, it’s a challenge to improve these inversely related characteristics at the same time.
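To put rough numbers on that (a sketch using only the figures in this article; the wider field-of-view below is a hypothetical, not a product spec): 600 pixels over 20° works out to about 30 pixels per degree, and holding that angular resolution across a wider view multiplies the pixel count quickly.

def pixels_per_degree(pixels_per_axis, fov_deg):
    # Angular resolution along one axis.
    return pixels_per_axis / fov_deg

def pixels_needed(ppd, fov_deg):
    # Pixels per axis required to hold a given angular resolution.
    return round(ppd * fov_deg)

ppd = pixels_per_degree(600, 20)   # 30 pixels per degree on Ray-Ban Display
print(ppd)
print(pixels_needed(ppd, 50))      # 1,500 pixels per axis (~2.25MP vs 0.36MP) for a hypothetical 50-degree view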

Filed Under: News, XR Industry News

Why Ray-Ban Meta Glasses Failed on Stage at Connect

September 19, 2025 From roadtovr

Meta CEO Mark Zuckerberg’s keynote at this year’s Connect wasn’t exactly smooth—especially if you count the two big hiccups that sidetracked live demos for both the latest Ray-Ban Meta smart glasses and the new Meta Ray-Ban Display glasses.

Ray-Ban Meta (Gen 2) smart glasses essentially bring the same benefits as Oakley Meta HSTN, which launched back in July: longer battery life and better video capture.

One of the biggest features though is its access to Meta’s large language model (LLM), Meta AI, which pops up when you say “Hey Meta”, letting you ask questions about anything, from the weather to what the glasses’ camera can actually see.

As part of the on-stage demo of its Live AI feature, which runs continuously instead of sporadically, food influencer Jack Mancuso attempted to create a Korean-inspired steak sauce using the AI as a guide.

And it didn’t go well, as Mancuso struggled to get the Live AI back on track after missing a key step in the sauce’s preparation. You can see the full cringe-inducing glory for yourself, timestamped below:

And the reason behind it is… well, just dumb. Jake Steineman, Developer Advocate at Meta’s Reality Labs, explained what happened in an X post:

So here’s the story behind why yesterdays live #metaconnect demo failed – when the chef said “Hey Meta start Live AI” it activated everyone’s Meta AI in the room at once and effectively DDOS’d our servers 🤣

That’s what we get for doing it live!

— Jake Steinerman 🔜 Meta Connect (@jasteinerman) September 19, 2025

Unfortunate, yes. But also pretty foreseeable, especially considering the AI ‘wake word’ gaffe has been a thing since the existence of Google Nest (ex-Home) and Amazon Alexa.

Anyone with one of those friendly tabletop pucks has probably experienced what happens when a TV advert includes “Hey Google” or “Hey Alexa,” unwittingly commanding every device in earshot to tell them the weather, or even order items online.

What’s more surprising though: there were enough people using a Meta product in earshot to screw with its servers. Meta AI isn’t like Google Gemini or Apple’s Siri—it doesn’t have OS-level access to smartphones. The only devices with it on by default are the company’s Ray-Ban Meta and Oakley Meta glasses (and Quest, if you opt in), conjuring the image of a room full of confused, bespectacled Meta employees waiting out of shot.

As for the Meta Ray-Ban Display glasses, which the company is launching in the US for $799 on September 30th, the hiccup was much more forgivable. Zuckerberg was attempting to take a live video call from company CTO Andrew Bosworth, who after several missed attempts, came on stage to do an ad hoc simulation of what it might have been like.

Those sorts of live product events are notoriously bad for both Wi-Fi and mobile connections, simply because of how many people are in the room, often with multiple devices per-person. Still, Zuckerberg didn’t pull a Steve Jobs, where the former Apple CEO demanded everyone in attendance at iPhone 4’s June 2010 unveiling turn off their Wi-Fi after an on-stage connection flub.

You can catch the Meta Ray-Ban Display demo below (obligatory cringe warning):

Filed Under: AR Development, News, XR Industry News
