
VRSUN

Hot Virtual Reality News


Researchers Propose Novel E-Ink XR Display with Resolution Far Beyond Current Headsets

October 27, 2025 From roadtovr

A group of Sweden-based researchers has proposed a novel e-ink display solution that could pave the way for super compact, retina-level VR headsets and AR glasses in the future.

The News

Traditional emissive displays are shrinking, but they face physical limits; smaller pixels tend to emit less uniformly and provide less intense light, which is especially noticeable in near-eye applications like virtual and augmented reality headsets.

In a recent research paper published in Nature, the team presents what it calls a “retinal e-ink display,” which hopes to offer a solution quite unlike the displays in modern VR headsets, which are increasingly adopting micro-OLEDs to reduce size and weight.

The paper was authored by researchers affiliated with Uppsala University, Umeå University, University of Gothenburg, and Chalmers University of Technology in Gothenburg: Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Österlund, Giovanni Volpe, and Kunli Xiong.

While conventional e-paper has struggled to reach the resolution necessary for realistic, high-fidelity images, the team proposes a new form of e-paper featuring electrically tunable “metapixels” only about 560 nanometres wide.

This promises a pixel density of over 25,000 pixels per inch (PPI), an order of magnitude denser than the displays currently used in headsets like Samsung Galaxy XR or Apple Vision Pro, which sit at around 4,000 PPI.
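The arithmetic behind those PPI figures is straightforward: pixels per inch is just one inch divided by the pixel pitch. A minimal sketch, assuming square pixels on a uniform grid; the ~1 µm "effective full-color pixel" (e.g. grouped metapixels) and the ~6.35 µm micro-OLED pitch are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope: pixel pitch -> pixels per inch (PPI).
# Assumes square pixels on a uniform grid; pitch values are illustrative.
INCH_IN_NM = 25_400_000  # 1 inch = 25.4 mm = 25.4e6 nm

def ppi(pitch_nm: float) -> float:
    """Pixels per inch for a given pixel pitch in nanometres."""
    return INCH_IN_NM / pitch_nm

# A single 560 nm metapixel pitch:
print(round(ppi(560)))    # 45357 structures per inch
# An assumed ~1 um effective full-color pixel (grouped metapixels)
# still lands above the paper's 25,000 PPI figure:
print(round(ppi(1000)))   # 25400
# For comparison, a ~6.35 um pitch is micro-OLED territory:
print(round(ppi(6350)))   # 4000
```

The gap between the 45,000-plus figure for a single metapixel and the paper's quoted 25,000+ PPI suggests a full-color pixel spans more than one metapixel, though the paper's exact grouping isn't detailed here.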

Image courtesy Nature

As the paper describes it, each metapixel is made from tungsten trioxide (WO₃) nanodisks that undergo a reversible insulator-to-metal transition when electrically reduced. This process dynamically changes the material’s refractive index and optical absorption, allowing nanoscale control of brightness and color contrast.

In effect, when lit by ambient light, the display, itself far thinner than a human hair, can create bright, saturated colors as well as deep blacks, with reported optical contrast ratios around 50%: a reflective equivalent of high dynamic range (HDR).

And the team says it could be useful in both AR and VR displays. The figure below shows a conceptual optical stack for both applications, with Figure A representing a VR display, and Figure B showing an AR display.

Image courtesy Nature

Still, there are some noted drawbacks. Beyond sheer resolution, the display delivers full-color video at “more than 25 Hz,” which is significantly lower than what VR users need for comfortable viewing. In addition to a relatively low refresh rate, researchers note the retina e-paper requires further optimization in color gamut, operational stability and lifetime.

“Lowering the operating voltage and exploring alternative electrolytes represent promising engineering routes to extend device durability and reduce energy consumption,” the paper explains. “Moreover, its ultra-high resolution also necessitates the development of ultra-high-resolution TFT arrays for independent pixel control, which will enable fully addressable, large-area displays and is therefore a critical direction for future research and technological development.”

And while the e-paper display itself is remarkably low-powered, packing in the graphical compute to put those metapixels to work will also be a challenge. It’s a good problem to have, but a problem nonetheless.

My Take

At least as the paper describes it, the underlying tech could produce XR displays with a size and pixel density we’ve never seen before. And reaching the limits of human visual perception is one of those holy-grail moments I’ve been waiting for.

Getting that refresh rate up well beyond 25 Hz is going to be extremely important though. As the paper describes it, 25 Hz is good for video playback, but driving an immersive VR environment requires at least 60 Hz refresh to be minimally comfortable. 72 Hz is better, and 90 Hz is the standard nowadays.

I’m also curious to see the e-paper display stacked up against lower resolution micro-OLED contemporaries, if only to see how that proposed ambient lighting can achieve HDR. I have a hard time wrapping my head around it. Essentially, the display’s metapixels absorb and scatter ambient light, much like Vantablack does—probably something that needs to be truly seen in person to be believed.

Healthy skepticism aside, I find it truly amazing we’ve even arrived at the conversation in the first place: we’re at the point where XR displays could recreate reality, at least as far as your eyes are concerned.

Filed Under: AR Development, News, VR Development, XR Industry News

Former Oculus Execs’ AI Smart Glasses Startup ‘Sesame’ Raises $250M Series B Funding

October 24, 2025 From roadtovr

Sesame, an AI and smart glasses startup founded by former Oculus execs, raised $250 million in Series B funding, which the company hopes will accelerate its voice-based AI.

The News

As first reported by TechCrunch, lead investors in Sesame’s Series B include Spark Capital and Sequoia Capital, bringing the company’s overall funding to $307.6 million, according to Crunchbase data.

Exiting stealth earlier this year, Sesame was founded by Oculus co-founder and former CEO Brendan Iribe, former Oculus hardware architect Ryan Brown, and Ankit Kumar, former CTO of AR startup Ubiquity6. Additionally, Oculus co-founder Nate Mitchell announced in June he was joining Sesame as Chief Product Officer, which he noted was to “help bring computers to life.”

Image courtesy Sesame

Sesame is currently working on an AI assistant along with a pair of lightweight smart glasses. Its AI assistant aims to be “the perfect AI conversational partner,” Sequoia Capital says in a recent post.

“Sesame’s vision is to build an ambient interface that is always available and has contextual awareness of the world around you,” Sequoia says. “To achieve that, Sesame is creating their own lightweight, fashion-forward AI-enabled glasses designed to be worn all day. They’re intentionally crafted—fit for everyday life.”

Sesame is currently taking signups for beta access to its AI assistants Miles and Maya in an iOS app, and also has a public preview showcasing a ‘call’ function that allows you to speak with the chatbots.

My Take

Love it or hate it, AI is going to be baked into everything in the future, as contextually aware systems hope to bridge the gap between user input and the expectation of timely and intelligent output. That’s increasingly important when the hardware doesn’t include a display, requiring the user to interface almost entirely by voice.

Some things to watch out for: if the company does commercialize a pair of smart glasses to champion its AI assistant, it will be competing for some pretty exclusive real estate that companies like Meta, Google, Samsung, and Apple (still unconfirmed) are currently gunning for. That puts Sesame at somewhat of a disadvantage if it hopes to go it alone, but not if it’s hoping for a timely exit into the coming wave of smart glasses by being acquired by any of the above.

There’s also some pretty worrying precedent in the rear-view mirror: e.g. Humane’s AI Pin and the Friend AI necklace, both of which were publicly lambasted for essentially releasing hardware that could just as easily have been apps on your smartphone.

Granted, Sesame hasn’t shown off its smart glasses hardware yet, so there’s no telling what the company hopes to bring to the table beyond an easy-to-wear pair of off-ear headphones for all-day AI use. That, to me, would be the worst-case scenario, as Meta refines its own smart glasses in partnership with EssilorLuxottica, Google releases Android XR frames with Gentle Monster and Warby Parker, Samsung readies its own Android XR glasses, and Apple does… something. We don’t know yet.

Whatever the case, I’m looking forward to it, if only based on the company’s combined experience in XR, which I’d argue any startup would envy as the race to build the next big computing platform truly takes off.

Filed Under: AR Development, AR Investment, News, XR Industry News

Amazon is Developing Smart Glasses to Allow Delivery Drivers to Work Hands-free

October 23, 2025 From roadtovr

Amazon announced it’s developing smart glasses for its delivery drivers, which include a display for real-time navigation and delivery instructions.

The News

Amazon announced the news in a blog post, which partially confirms a recent report from The Information, which alleged that Amazon is developing smart glasses both for its delivery drivers and consumers.

The report, released in September, maintained that Amazon’s smart glasses for delivery drivers will be bulkier and less sleek than the consumer model. Codenamed ‘Jayhawk’, the delivery-focused smart glasses are expected to roll out as soon as Q2 2026, with an initial production run of 100,000 units.

Image courtesy Amazon

Amazon says the smart glasses were designed and optimized with input from hundreds of delivery drivers, and include the ability to identify hazards, scan packages, capture proof of delivery, and navigate by serving up turn-by-turn walking directions.

The company hasn’t confirmed whether the glasses’ monochrome green heads-up display is monoscopic or stereoscopic, though images suggest it features a single waveguide in the right lens.

Notably, the glasses aren’t meant to be used while driving: Amazon says they “automatically activate” when the driver parks their vehicle. Only then does the driver receive instructions, ostensibly to reduce the risk of driver distraction.

In addition to the glasses, the system also features what Amazon calls “a small controller worn in the delivery vest that contains operational controls, a swappable battery ensuring all-day use, and a dedicated emergency button to reach emergency services along their routes if needed.”

Additionally, Amazon says the glasses support prescription lenses along with transitional lenses that automatically adjust to light.

As for the reported consumer version, it’s possible Amazon may be looking to evolve its current line of ‘Echo Frames’ glasses. First introduced in 2019, Echo Frames support AI voice control, music playback, calls, and Alexa smart home control, although they notably lack any sort of camera or display.

My Take

I think Amazon has a good opportunity to dogfood (aka, use its own technology) here on a pretty large scale—probably much larger than Meta or Google could initially with their first generation of smart glasses with displays.

That said, gains made in enterprise smart glasses can be difficult to translate to consumer products, which will necessarily include more functions and apps, and likely require more articulated input—all of the things that can make or break any consumer product.

Third-gen Echo Frames | Image courtesy Amazon

Amazon’s core strength though is generally less focused on high-end innovation, and more about creating cheap, reliable hardware that feeds into recurring revenue streams: Kindle, Fire TV, Alexa products, etc. Essentially, if Amazon can’t immediately figure out a way to make consumer smart glasses feed into its existing ecosystems, I wouldn’t expect to see the company put its full weight behind the device, at least not initially.

After the 2014 failure of the Fire Phone, Amazon may still be gun-shy about diving head-first into a segment where it has near-zero experience. And I really don’t count Echo Frames, because they’re primarily just Bluetooth headphones with Alexa support baked in. Still, real smart glasses with cameras and displays represent a treasure trove of data that the company may not be so keen to pass up.

Using object recognition to peep into your home or otherwise follow you around could allow Amazon to better target personalized suggestions, figure out brand preferences, and even track users as they shop at physical stores. Whatever the case, I bet the company will give it a go, if only to occupy the top slot when you search “smart glasses” on Amazon.

Filed Under: AR Development, News, XR Industry News

Shiftall Announces Next Thin & Light ‘MeganeX’ PC VR Headset, Shipping in December for $1,900

October 16, 2025 From roadtovr

Shiftall unveiled its next PC VR headset, the MeganeX “8K” Mark II, which is slated to ship in December for $1,900.

The News

Japan-based Shiftall announced MeganeX “8K” Mark II, the follow-up to its thin and light PC VR headset originally launched late last year, the MeganeX superlight “8K”.

The new version is essentially a hardware refresh with only a few notable changes, which mostly aim to improve comfort, durability, and system internals.

Shiftall MeganeX “8K” Mark II | Image courtesy Shiftall

The headset contains the same 3,552 × 3,840 per-eye micro-OLEDs, supporting up to 90 Hz refresh, and the same SteamVR tracking standard, which requires the user to buy SteamVR 1.0/2.0 base stations separately.

Here’s a breakdown of all of the changes announced by Shiftall:

  • New chip: The CPU and operating system (OS) have been upgraded, and the firmware has been newly developed, reducing the startup time to less than one-fifth of the previous model. Connection stability with PCs and SteamVR has been improved, and the firmware update process has been improved for greater reliability.
  • New Pancake lenses: Shiftall says they’re newly designed by Panasonic Group.
  • Redesigned USB-C cable connection: previously located on the top of the headset, the USB-C port has been moved to the front and structurally reinforced for improved durability. A specially developed intermediate USB cable enhances connection stability and prevents issues caused by wear or accidental disconnection.
  • Refined nose gap: Sharp plastic edges no longer come into contact with ‘Western’ nose shapes. The material and shape around the nose area have been improved for greater comfort.
  • New Strap material: A new strap material has been adopted, and includes better durability of the hook-and-loop fastener.

Estimated to start shipping in late December, MeganeX Mark II is now available for pre-order.

The headset (SteamVR base stations not included) is priced at $1,900 in the US (excluding import duty), €1,900 in Europe (VAT included), £1,600 in the UK (VAT included), and ₩2,499,000 in South Korea (VAT included).

Specs

Feature comparison, MeganeX Superlight “8K” vs. MeganeX “8K” Mark II:

  • Display: 3,552 × 3,840 per-eye micro-OLED with 10-bit HDR (unchanged)
  • Refresh rates: 90 Hz, with 75 Hz / 72 Hz support (unchanged)
  • Lens type: Pancake lenses from Panasonic Group (newly designed for Mark II)
  • Weight (main body): under 185 g, now 179 g
  • IPD & focus adjustment: electric IPD 58–72 mm; diopter adjustment 0D to –7D (unchanged)
  • Connectivity / tracking: DisplayPort + USB 2.0; SteamVR tracking, base stations required (unchanged)

My Take

You may have noticed I’ve put “8K” in quotes throughout this article. That’s to indicate that the headset doesn’t actually provide 8K per-eye displays.

While companies like Shiftall and Pimax typically err on the side of the biggest number, I see this as more of a marketing device than a true reflection of what the end user actually sees. Because the headset uses dual 3,552 × 3,840 micro-OLEDs, the user never actually perceives an 8K image. By that logic, Quest 3 could be labeled “4K”, owing to its dual 2,064 × 2,208 displays, and Oculus Rift CV1 could be labeled “2K” for its dual 1,080 × 1,200 displays. Impressive sounding, but a bit misleading.
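A quick sketch of where those labels come from: sum the horizontal pixels of both displays, then snap to the nearest familiar resolution tier. The tier widths follow DCI/UHD conventions, and the nearest-tier mapping is my assumption for illustration, not Shiftall's stated methodology:

```python
# Marketing-style labels from dual-display horizontal pixel counts.
# Tier widths follow DCI/UHD conventions; nearest-tier snapping is an
# assumption for illustration, not any vendor's documented method.
TIERS = {"2K": 2048, "4K": 3840, "8K": 7680}

def marketing_label(per_eye_width: int) -> str:
    combined = 2 * per_eye_width  # both eyes, side by side
    return min(TIERS, key=lambda label: abs(combined - TIERS[label]))

print(marketing_label(3552))  # MeganeX: 7,104 px wide -> "8K" (short of true 8K)
print(marketing_label(2064))  # Quest 3: 4,128 px wide -> "4K"
print(marketing_label(1080))  # Rift CV1: 2,160 px wide -> "2K"
```

Note that the MeganeX's combined 7,104 horizontal pixels fall 576 pixels short of a true 8K (7,680) width, which is exactly why the label deserves its scare quotes.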

That said, Shiftall argues resolution is a better catch-all spec for VR headsets. I disagree, since its target audience probably understands the nuances of displays and optics anyway.

“We have decided against publishing official FOV and PPD numbers,” Shiftall says, referring to the original MeganeX superlight “8K”. “If an industry-standard measurement method were established, such as the method used to calculate fuel consumption for automobiles, we would disclose our figures, but this is not the case in the current VR industry.”

Still, I suspect potential enterprise and prosumers looking to shell out $1,900 for a single headset—no controllers or base stations included—are already familiar with pixels per degree (PPD) and binocular overlap, which are more useful, albeit less flashy metrics. On that front, MeganeX “8K” Mark II is impressive. Its pancake lenses provide a reported ~100-degree horizontal FOV, which seems to deliver a near 100 percent binocular overlap.

Using the formula for PPD (horizontal pixel count ÷ horizontal field of view), it also tops the competition, coming out to around 35.5 PPD: higher than Pimax Dream Air ($2,000) at 35 PPD, and Bigscreen Beyond 2 ($1,020) at 32 PPD.
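That calculation can be sketched in a couple of lines. The ~100-degree horizontal FOV is the reported figure from above, not an official manufacturer spec:

```python
# PPD = horizontal pixels per eye / horizontal FOV in degrees.
# The ~100 degree FOV is the reported figure, not an official spec.
def ppd(h_pixels: int, h_fov_deg: float) -> float:
    """Approximate average pixels per degree across the horizontal FOV."""
    return h_pixels / h_fov_deg

print(round(ppd(3552, 100), 1))  # MeganeX "8K" Mark II -> 35.5
```

This is an average across the FOV; with pancake lenses the effective PPD at the center of the lens is typically a bit higher than at the edges, so treat it as a comparative metric rather than a precise one.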

Whatever the case, I think it’s time to retire these sorts of resolution claims outside of the spec sheet, if only to lend more credibility to the company in question. And the same goes for the questionable Photoshop jobs too.

Filed Under: News, PC VR News & Reviews

Samsung to Launch Project Moohan XR Headset at Galaxy Event on October 21st

October 15, 2025 From roadtovr

Samsung announced it’s holding a Galaxy Event on October 21st, which will feature Project Moohan, the company’s long-awaited Apple Vision Pro competitor.

The News

The livestream event is slated for October 21st at 10PM ET, and is said to focus on “the future of AI” and Project Moohan.

“Come meet the first official device on Android XR—Project Moohan,” the video’s description reads.

There’s no official word yet on the headset’s price, or even its final name. A previous report from South Korea’s Newsworks suggests it could cost somewhere between ₩2.5 million and ₩4 million won, or roughly $1,800 to $2,900 USD.

The company’s event site does however allow users to register for a $100 credit, valid when purchasing qualifying Galaxy products.

We’re hoping to learn more about the headset’s specs and promised VR motion controllers, which Samsung has yet to reveal.

Since our previous hands-on from last year, we’ve learned Project Moohan includes a Qualcomm Snapdragon XR2+ Gen 2, dual micro‑OLED panels, pancake lenses, automatic interpupillary distance (IPD) adjustment, support for eye and hand-tracking, an optional magnetically-attached light shield, and a removable external battery pack.

My Take

Personally, the teaser doesn’t really serve up the sort of “wow” factor I was hoping for, as it highlights some fairly basic stuff seen in XR over the past decade. Yes, it actually has been that long.

While I don’t expect Moohan to stop at a Google Earth VR-style map and immersive video—neat as those things are—it’s interesting to me the company thought those two things were worthy additions to a launch day teaser for its first XR headset since the release of Samsung Odyssey+ in 2018.

Samsung Odyssey+ | Image courtesy Samsung

As the first official headset supporting Google’s Android XR operating system though, I expect the event will also focus on Moohan’s ability to not only use the standard library of Android apps and native XR stuff, but also XR productivity—provided Samsung really wants to go toe-to-toe with Vision Pro.

By all accounts, Moohan is a capable XR headset, but I wonder how much gas Samsung will throw at it now that Apple is reportedly shifting priorities to focus on Meta-style smart glasses instead of developing a cheaper and lighter Vision Pro. While Apple is still apparently moving ahead with Vision Pro’s M5 hardware refresh, which is rumored to release soon, that’s going to mostly appeal to enterprise users, which leaves Samsung to navigate a potentially awkward middle ground between Meta and Apple.

Moohan’s market performance may also dictate how other manufacturers adopt Android XR. And there’s worrying precedent. Google did the same thing with Lenovo Mirage Solo in 2018, which was supposed to be the first headset to support its Android-based Daydream platform before Google pulled the plug due to poor engagement. Here’s to hoping history doesn’t repeat itself.

Filed Under: News, VR Development, XR Industry News

Lynx Teases Next Mixed Reality Headset for Enterprise

October 13, 2025 From roadtovr

Lynx teased its next mixed reality headset, which is hoping to target enterprise and professional users across training and remote assistance.

The News

At MicroLED Connect last month, Lynx CEO Stan Larroque announced he aimed to reveal the company’s next mixed reality standalone sometime in mid-November.

However, Somnium CEO Artur Sychov, a major investor in the company, beat Lynx to the punch by posting a cropped image of the France-based company’s next device.

I will just say this – Lynx next headset news is going to be wild… 💣

Sorry @stanlarroque, I can’t hold myself not to tease at least something… 😬😅

October & November 2025 will be 🔥 pic.twitter.com/XidrdTqqlp

— Artur Sychov ᯅ (@ASychov) October 10, 2025

In response, Larroque posted the full image, seen above. Here’s a version with the white balance turned up for better visibility, courtesy MRTV’s Sebastian Ang:

Modified image courtesy Sebastian Ang

There’s still a lot to learn, including specs and the device’s official name. From the image, we can tell at least two things: the headset has a minimum of four camera sensors, now positioned on the corners of the device à la Quest 2, and an ostensibly more comfortable head strap that cups the back of the user’s head.

What’s more, Lynx announced late last year that it intends to integrate Google’s forthcoming Android XR operating system, which will also power Samsung’s Project Moohan and forthcoming XR glasses from XREAL, into its next headset. Lynx hasn’t released any update on progress, so we’re still waiting to hear more.

Lynx R-1 | Image courtesy Lynx

Notably, Lynx R-1, which was initially positioned to target both consumers and professional users through its 2021 Kickstarter campaign that brought in $800,000 in crowdfunding, concluded shipping earlier this year.

According to Larroque’s talk at MicroLED Connect last month, the company is now focusing squarely on the enterprise sector with its next hardware release, targeting tasks like training and remote assistance.

My Take

Lynx R-1’s unique “4-fold catadioptric freeform prism” optics allow for a compact focal length, putting the displays flush with the lenses and providing a 90-degree field of view (FOV). While pancake lenses are generally thinner and lighter, R-1’s optics have comparably better light throughput, which is important for mixed reality tasks.

Image courtesy Lynx

As a startup that’s weathered an admittedly “excruciating” fundraising environment, making the right hardware choices in its follow-up will be key though.

My hunch is the prospective ‘Lynx R-2’ headset will probably keep the same optical stack to save on development and manufacturing costs, and mainly push upgrades to the processor and display, which are likely more important to the sort of enterprise customers Lynx is targeting anyway.

As it is, Lynx R-1 is powered by the Qualcomm Snapdragon XR2 chipset, which was initially released in 2019—the same used in Quest 2—so an upgrade there is well overdue. Its 1,600 × 1,600 per-eye LCDs also feel similarly dated.

While an FOV larger than 90 degrees is great, I’d argue that for enterprise hardware that isn’t targeting simulators, clarity and pixel density are probably more important. More info on Lynx’s next-gen headset is due sometime in November, so I’d expect to learn more then.

Filed Under: News, VR Development, XR Industry News

Meta Ray-Ban Display Repairability is Predictably Bad, But Less So Than You Might Think

October 9, 2025 From roadtovr

iFixit got their hands on a pair of Meta Ray-Ban Display smart glasses, so we finally get to see what’s inside. Is it repairable? Not really. But if you can somehow find replacement parts, you could at least potentially swap out the battery.

The News

Meta launched the $800 smart glasses in the US late last month, marking the company’s first pair with a heads-up display.

Serving up a monocular display, Meta Ray-Ban Display allows for basic app interaction beyond the standard stuff seen (or rather ‘heard’) in the audio-only Ray-Ban Meta and Oakley Meta glasses. It can do things like let you view and respond to messages, get turn-by-turn walking directions, and even use the display as a viewfinder for photos and video.

In its latest video, iFixit shows that cracking into the glasses and attempting repairs is pretty fiddly, but not entirely impossible.

Meta Ray-Ban Display’s internal battery | Image courtesy iFixit

The first thing you’d probably want to do eventually is replace the battery, which requires splitting the right arm down a glued seam, a common theme with the entire device. To get at the 960 mWh internal battery, which is slightly larger than the one in the Oakley Meta HSTN, you’ll have to sacrifice the device’s IPX4 splash resistance rating.

And the work is fiddly, but iFixit manages to go all the way down to the dual speakers, motherboard, Snapdragon AR1 chipset, and liquid crystal on silicon (LCoS) light engine, the latter of which was captured with a CT scanner to show off just how micro Meta has managed to get its most delicate part.

Granted, this is a teardown and not a repair guide as such. All of the components are custom, and replacement parts aren’t available yet. You would also need a few specialized tools and an appetite for the risk of destroying a pretty hard-to-come-by device.

For more, make sure to check out iFixit’s full article, which includes images and detailed info on each component. You can also see the teardown in action in the full nine-minute video below.

My Take

Meta isn’t really thinking deeply about repairability when it comes to smart glasses right now, which isn’t exactly shocking. Like earbuds, smart glasses are all about miniaturization to hit an all-day wearable form factor, making their plastic and glue-coated exterior a pretty clear necessity in the near term.

Another big factor: the company is probably banking on the fact that prosumers willing to shell out $800 this year will likely be happy to do the same when Gen 2 eventually arrives. That could be in two years, but I’m betting less if the device performs well enough in the market. After all, Meta released Quest 2 in 2020, only about a year and a half after the original Quest, so I don’t see why it wouldn’t do the same here.

That said, I don’t think we’ll see any real degree of repairability in smart glasses until we get to the sort of sales volumes currently seen in smartphones. And that’s just for a baseline of readily available replacement parts, third-party or otherwise.

So while I definitely want a pair of smart glasses (and eventually AR glasses) that look indistinguishable from standard frames, that also kind of means I have to be okay with eventually throwing away a perfectly cromulent pair of specs just because I don’t have the courage to open it up, or know anyone who does.

Filed Under: AR Development, News, XR Industry News

Apple Reportedly Shelves Cheaper & Lighter Vision Pro for Smart Glasses to Rival Meta

October 2, 2025 From roadtovr

Apple seems to be releasing its next Vision Pro with M5 chip soon, but according to a new report from Bloomberg’s Mark Gurman, the company may have shelved plans for a follow-up headset that’s cheaper and lighter in favor of releasing smart glasses set to compete with Meta.

The News

According to previous rumors, Apple was developing a Vision Pro follow-up more squarely aimed at consumers—often referred to as ‘Vision Air’ (codenamed ‘N100’). Analyst Ming-Chi Kuo reported in September that Vision Air was expected to be “over 40% lighter and more than 50% cheaper” than the current Vision Pro, putting the device at less than 400g and less than $1,750.

Notably, a hardware refresh of Vision Pro featuring Apple’s latest M5 chip is likely releasing soon, according to recent FCC filings, although its 600g weight and $3,500 price tag are likely to remain the same.

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

Now, Bloomberg’s Mark Gurman maintains Apple is putting Vision Air on hold, citing internal sources. Instead, Apple is reportedly shifting resources to accelerate development of smart glasses, which aim to take on Ray-Ban Meta and the new Meta Ray-Ban Display glasses.

Gurman reports that Apple is pursuing at least two types of smart glasses: an audio-only pair codenamed ‘N50’, which are meant to pair with iPhone, and ostensibly compete with Meta’s fleet of $300+ smart glasses built in partnership with EssilorLuxottica. Apple is reportedly set to preview N50 as soon as 2026, with a release by 2027.

A second pair is said to feature a display, similar to Meta Ray-Ban Display, which launched late last month in the US for $800. Apple’s display-equipped smart glasses were previously expected to release in 2028, but the company is reportedly fast-tracking the device’s development.

Both versions are said to emphasize voice interaction and AI integration, and offer multiple styles and a new custom chip to power the devices.

My Take

The shifted development timeline feels a little out of character for Apple, which typically enters segments after a technology is generally proven. Apple didn’t invent the smartphone, smart watch, laptop, or desktop, although it owns a significant slice of each in 2025 thanks to its unique brand of ‘ecosystem stickiness’ and inherent cool factor.

The entrance of Meta Ray-Ban Display, however, marks an important inflection point in the race to own the next big computing paradigm. Smart glasses with displays aren’t the end destination, but they are an important stepping stone along the way to all-day augmented reality. And a strong foothold in AI is integral.

“Let’s wait and see what Apple does” has been a pretty common thought process when it comes to emergent tech—something people have been saying for years in VR. The big hope was Apple would eventually swoop in, redefine VR for the masses, and make the best version of it with their first device.

Vision Pro (M2) | Image captured by Road to VR

But Vision Pro isn’t the first-gen iPhone (2007). While a lighter, cheaper version could address pain points, it would still have a hard time not drawing direct comparisons to Meta devices 5-10 times cheaper.

But AI isn’t one of those technologies you can afford to sleep on, if only from a user data collection perspective. In contrast to its biggest competitors, Apple has notably lagged in AI development, having only released its Apple Intelligence platform in late 2024 to counter Google Gemini, Microsoft Copilot, and OpenAI’s ChatGPT. Apple needs to play catchup.

While Apple is expected to release a rebuilt Siri this year to power its hardware ecosystem, smart glasses are the tip of the AI spear. Even without displays, wearing an always-on device represents a treasure trove of data and user behavior that companies will use to improve services, figure out what works and what doesn’t, and ultimately build the next big platform that companies have been salivating over: all-day AR glasses.

That’s the real battle here. Not only does Apple need smart glasses to compete in the next computing paradigm, but they also need them to bridge a very real component price gap. Economies of scale will eventually bring fiddly components down in price, like the extremely expensive and difficult to manufacture silicon carbide waveguides seen in Meta’s Orion AR prototype revealed at last year’s Connect, which cost the company $10,000 each to build. Companies also need to create parts capable of fitting into a glasses form factor, with smart glasses representing an important first testing ground.

Filed Under: Apple Vision Pro News & Reviews, News

Next Apple Vision Pro Inches Closer to Launch, FCC Documents Suggest

October 1, 2025 From roadtovr

Apple may be preparing to release its long-rumored M5 hardware refresh of Vision Pro, according to new certification tests filed with the US Federal Communication Commission (FCC).

The News

As first spotted by MacRumors, Apple’s next Vision Pro seems to be right around the corner, as the FCC has published a trove of transmission tests, SAR test reports, and WLAN test reports for a new “Head Mounted Device” from Apple.

The FCC documents in question don’t include any specs as such; they do, however, include a single image that seems to confirm the device is Apple’s next Vision Pro, and not, say, a pair of smart glasses.

Image courtesy FCC

This follows a leak in August, which seemingly confirmed that Vision Pro isn’t getting a massive overhaul, instead pointing to a hardware refresh that could feature Apple’s upcoming M5 chipset, according to code shared by Apple and discovered by MacRumors. 

The report also suggested that the new Vision Pro hardware refresh “isn’t expected to feature any design changes or hardware updates aside from the new chip,” although it could feature a new, more comfortable head strap.

My Take

The inclusion of M5 alone doesn’t feel like a massive overhaul, although it is a fair leap in chipset generations. Released in February 2024 for $3,500, the original Apple Vision Pro was saddled with the then two-year-old M2—still the most powerful chip in a consumer standalone headset to date, but not on par with the rest of Apple’s ‘Pro’ lineup at the time.

Notably, despite having access to almost all iPad apps in addition to built-for-XR apps of its own, Vision Pro (M2) doesn’t run some of the most requested productivity apps natively, like Final Cut Pro or Logic Pro. There’s no guarantee the new hardware refresh will either, but it could do a few things.

Apple Vision Pro with ANNAPRO A2 Strap | Photo by Road to VR

Provided we’re getting what’s reported (no more, no less), that essentially puts Vision Pro on par with the rest of Apple’s core products. It could allow developers to build apps that perform consistently across all of the reported ‘Pro’ Mac and iPad devices coming with M5, new Vision Pro included.

As Road to VR reader Christian Schildwaechter points out in the comments of the initial report, the M5 Vision Pro refresh may be a distinctly pragmatic move by Apple: less about enabling more powerful apps for prosumers, and more of a stopgap measure.

As Schildwaechter puts it, “most users won’t benefit from an M5. Enterprise customers creating inhouse apps will be happy about the extra performance, but developers targeting consumers probably won’t bother with it.”

So, Apple could be killing two birds with one stone. Hypothetically, the company can flush its stock of Vision Pro parts and plonk in the new M5 to keep enterprise buyers engaged until the company releases its first real headset targeted squarely at consumers.

As reported by independent analyst Ming-Chi Kuo, Apple’s next big XR push could be a cheaper and lighter version expected to release in 2027, called ‘Vision Air’. Kuo maintains Vision Air will be “over 40% lighter and more than 50% cheaper” than the current Vision Pro, making it around 350g and less than $1,750.

Questions worth some healthy speculation, and rapid-fire answers: When is the M5 Vision Pro coming?—possibly in the October/November timeframe, alongside the release of new MacBook Pro M5 models. How much will it cost?—likely nothing short of $3,500 if Apple is, you know, still Apple.

Filed Under: Apple Vision Pro News & Reviews, News

Meta Ray-Ban Display Waveguide Provider Says It’s Poised for Wide Field-of-view Glasses

September 30, 2025 From roadtovr

SCHOTT—a global leader in advanced optics and specialty glass—working with waveguide partner Lumus, is almost certainly the manufacturer of the waveguide optics in Meta’s Ray-Ban Display glasses. While the Ray-Ban Display glasses offer only a static 20° field-of-view, the company says its waveguide technology is also capable of supporting immersive wide field-of-view glasses in the future.

The News

Schott has secured a big win as perhaps the first waveguide maker to begin producing waveguides at consumer scale. While Meta hasn’t confirmed who makes the waveguides in the Ray-Ban Display glasses, Schott announced—just one day before the launch of Ray-Ban Display—that it was the “first company capable of handling geometric reflective waveguide manufacturing in [mass] production volumes.”

In anticipation of AR glasses, Schott has spent years investing in technology, manufacturing, and partnerships in an effort to set itself up as a leading provider of optics for smart glasses and AR glasses.

The company signed a strategic partnership with Lumus (the company that actually designs the geometric reflective waveguides) back in 2020. Last year the company announced the completion of a brand new factory which it said would “significantly enhance Schott’s capacity to supply high-quality optical components to international high-tech industries, including Augmented Reality (AR).”

Image courtesy Schott

Those investments now appear to be paying off. While there are a handful of companies out there with varying waveguide technologies and manufacturing processes, as the likely provider of the waveguides in the Ray-Ban Display glasses, Schott can now claim it has “proven mass market readiness regarding scalability;” something others have yet to do at this scale, as far as I’m aware.

“This breakthrough in industrial production of geometric reflective waveguides means nothing less than adding a crucial missing puzzle piece to the AR technology landscape,” said Dr. Ruediger Sprengard, Senior Vice President Augmented Reality at Schott. “For years, the promise of lightweight and powerful smart glasses available at scale has been out of reach. Today, we are changing that. By offering geometric reflective waveguides at scale, we’re helping our partners cross the threshold into truly wearable products, providing an immersive experience.”

As for the future, the company claims its geometric reflective waveguides will be able to scale beyond the small 20° field-of-view of the Ray-Ban Display glasses to immersive wide field-of-view devices.

“Compared to competing optical technologies in AR, geometric reflective waveguides stand out in light and energy efficiency, enabling device designers to create fashionable glasses for all-day use. These attributes make geometric reflective waveguides the best option for small FoVs, and the only available option for wide FoVs,” the company claims in its announcement.

Indeed, Schott’s partner Lumus has long demonstrated wider field-of-view waveguides, like the 50° ‘Lumus Maximus’ I saw as far back as 2022.

My Take

As the likely provider of waveguides for Ray-Ban Display, Schott & Lumus have secured a big win over competitors. From the outside, it looks like Lumus’ geometric reflective waveguides won out primarily due to their light efficiency. Most other waveguide technologies rely on diffractive (rather than reflective) optics, which have certain advantages but fall short on light efficiency.

Light efficiency is crucial because the microdisplays in glasses-sized devices must be both tiny and power-efficient. As displays get larger and brighter, they get bulkier, hotter, and more power-hungry. Using a waveguide with high light efficiency thus allows the displays to be smaller, cooler, and less power-hungry, which is critical considering the tiny space available.

Light and power demands also rise with field-of-view, since spreading the same light across a wider area reduces apparent brightness.
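That scaling can be sketched with a back-of-envelope model. The function below is purely illustrative (the name and the flat-image-plane assumption are mine, not from the article or any waveguide vendor): if the same total light is spread over a wider field-of-view, the lit area grows roughly with the square of the tangent of the half-angle, so apparent brightness drops by about that factor.

```python
import math

def brightness_penalty(fov_from_deg: float, fov_to_deg: float) -> float:
    """Rough factor by which display light output must grow to hold the
    same apparent brightness when widening the field-of-view.
    Assumes a flat image plane, so lit area scales with tan(fov/2)^2.
    Real waveguide optics behave less simply; this is only a sketch."""
    half_from = math.radians(fov_from_deg / 2)
    half_to = math.radians(fov_to_deg / 2)
    return (math.tan(half_to) / math.tan(half_from)) ** 2

# Going from Ray-Ban Display's 20° to a 50° field-of-view:
print(f"{brightness_penalty(20, 50):.1f}x the light output")  # roughly 7x
```

Even under this simplistic model, widening from 20° to 50° asks for several times the light output from the same tiny display, which is why waveguide light efficiency matters so much.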

Schott says its waveguide technology is ready to scale to wider fields-of-view, but that probably isn’t what’s holding back true AR glasses (like the Orion Prototype that Meta showed off in 2024).

It’s not just wide field-of-view optics that need to be in place for a device like Orion to ship. There’s still the issue of battery and processing power. Orion was only able to work as it does because much of the computation and the battery were offloaded onto a wireless puck. If Meta wants to launch full AR glasses like Orion without a puck (as it did with Ray-Ban Display), the company still needs smaller, more efficient chips to make that possible.

Additionally, display technology needs to advance in order to actually take advantage of optics capable of projecting a wide field-of-view.

Ray-Ban Display glasses are using a fairly low resolution 0.36MP (600 × 600) display. It appears sharp because the pixels are spread across just 20°. As the field-of-view increases, both brightness and resolution need to increase to maintain the same image quality. Without much room to increase the physical size of the display, that means packing smaller pixels into the same tiny area, while also making them brighter. As you can imagine, it’s a challenge to improve these inversely-related characteristics at the same time.
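The resolution side of that trade-off is simple arithmetic, sketched below with hypothetical helper names of my own. Using the article’s figures (600 pixels across 20°) and a linear pixels-per-degree approximation (real optics distribute pixels non-uniformly across the field), you can estimate what a wider field demands:

```python
def pixels_per_degree(pixels: int, fov_deg: float) -> float:
    # Linear approximation: angular resolution = pixels across / degrees covered.
    return pixels / fov_deg

def pixels_needed(ppd: float, fov_deg: float) -> int:
    # Pixels across one axis required to hold a given angular resolution.
    return round(ppd * fov_deg)

ppd = pixels_per_degree(600, 20)   # Ray-Ban Display: 30 pixels per degree
print(pixels_needed(ppd, 50))      # 1500 px across, i.e. ~2.25MP at 50°
```

By this rough estimate, holding Ray-Ban Display’s sharpness at 50° would take a 1500 × 1500 panel, over six times the pixel count in roughly the same physical area, with each smaller pixel also needing to be brighter.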

Filed Under: News, XR Industry News
