VRSUN

Hot Virtual Reality News

Cambridge & Meta Study Raises the Bar for ‘Retinal Resolution’ in XR

November 5, 2025 From roadtovr

It’s been a long-held assumption that the human eye is capable of detecting a maximum of 60 pixels per degree (PPD), which is commonly called ‘retinal’ resolution. Any more than that, and you’d be wasting pixels. Now, a recent University of Cambridge and Meta Reality Labs study published in Nature maintains the upper threshold is actually much higher than previously thought.

The News

As the University of Cambridge’s news site explains, the research team measured participants’ ability to detect specific display features across a variety of scenarios: both in color and greyscale, looking at images straight on (aka ‘foveal vision’), through their peripheral vision, and from both close up and farther away.

The team used a novel sliding-display device (seen below) to precisely measure the visual resolution limits of the human eye, and the results seem to overturn the widely accepted 60 PPD benchmark commonly considered 'retinal resolution'.

Image courtesy University of Cambridge, Meta

Essentially, PPD measures how many display pixels fall within one degree of a viewer’s visual field; it’s sometimes seen on XR headset spec sheets to better communicate exactly what the combination of field of view (FOV) and display resolution actually means to users in terms of visual sharpness.
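
For a concrete sense of the metric, here's a minimal sketch (my own, not from the study or any spec sheet) of how horizontal resolution and field of view combine into an average PPD figure. The example headset numbers are assumptions for illustration, and note that the 'peak' PPD manufacturers quote at the lens center typically comes out higher than this average.

```python
# Minimal PPD sketch. The helper just divides horizontal pixels by horizontal
# FOV; the example values below are assumptions, not official specs.

def average_ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average pixels per degree across the horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

if __name__ == "__main__":
    # Hypothetical headset: 2,064 horizontal pixels per eye over a 110-degree FOV.
    ppd = average_ppd(2064, 110.0)
    print(f"average PPD: {ppd:.1f}")        # ~18.8
    print(f"reaches 60 PPD: {ppd >= 60}")   # the classic 'retinal' benchmark
    print(f"reaches 94 PPD: {ppd >= 94}")   # the study's black-and-white limit
```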

According to the researchers, foveal vision can actually perceive much more than 60 PPD—more like up to 94 PPD for black-and-white patterns, 89 PPD for red-green, and 53 PPD for yellow-violet. Notably, the study had a few outliers in the participant group, with some individuals capable of perceiving as high as 120 PPD—double the upper bound for the previously assumed retinal resolution limit.

The study also holds implications for foveated rendering, which uses eye-tracking to reduce rendering quality in an XR headset user's peripheral vision. Since foveated rendering has traditionally been tuned for black-and-white (luminance) sensitivity, the study maintains it could further reduce bandwidth and computation by lowering resolution further for specific color channels.
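
As a thought experiment only (this scheme is mine, not something proposed in the paper), the reported foveal thresholds could be turned into per-channel resolution budgets for a renderer: any channel whose threshold sits below the display's PPD can be rendered at proportionally lower resolution without a perceptible loss.

```python
# Illustrative per-channel resolution budget based on the study's reported
# foveal detection limits. The scaling policy itself is an assumption.

FOVEAL_LIMITS_PPD = {
    "luminance": 94,       # black-and-white patterns
    "red_green": 89,       # red-green patterns
    "yellow_violet": 53,   # yellow-violet patterns
}

def channel_resolution_scale(channel: str, display_ppd: float) -> float:
    """Fraction of full resolution worth rendering for a given channel.

    Detail beyond the eye's limit for a channel is wasted, so that channel
    can be rendered at (limit / display_ppd) of full resolution.
    """
    limit = FOVEAL_LIMITS_PPD[channel]
    return min(1.0, limit / display_ppd)

if __name__ == "__main__":
    display_ppd = 94.0  # hypothetical display that matches the luminance limit
    for channel in FOVEAL_LIMITS_PPD:
        scale = channel_resolution_scale(channel, display_ppd)
        print(f"{channel}: render at {scale:.0%} of full resolution")
    # luminance: 100%, red_green: ~95%, yellow_violet: ~56%
```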

So, for XR hardware engineers, the team’s findings point to a new target for true retinal resolution. For a more in-depth look, you can read the full paper in Nature.

My Take

While you’ll be hard pressed to find accurate info on each headset’s PPD—some manufacturers believe in touting pixels per inch (PPI), while others focus on raw resolution numbers—not many come close to reaching 60 PPD, let alone the revised retinal resolution suggested above.

According to data obtained from XR spec comparison site VRCompare, consumer headsets like Quest 3, Pico 4, and Bigscreen Beyond 2 tend to have a peak PPD of around 22-25, which describes the most pixel-dense area at dead center.

Meta ‘Butterscotch’ varifocal prototype (left), ‘Flamera’ passthrough prototype (right) | Image courtesy Meta

Prosumer and enterprise headsets fare slightly better, but only just. Estimating from available data, Apple Vision Pro and Samsung Galaxy XR boast a peak PPD of between 32 and 36.

Headsets like Shiftall MeganeX Superlight “8K” and Pimax Dream Air have around 35-40 peak PPD. On the top end of the range is Varjo, which claims its XR-4 ($8,000) enterprise headset can achieve 51 peak PPD through an aspheric lens.

Then there are prototypes like Meta's 'Butterscotch' varifocal headset, shown off in 2023, which is said to sport 56 PPD (not confirmed as average or peak).

Still, there’s a lot more to factor in to reaching ‘perfect’ visuals beyond PPD, peak or otherwise. Optical artifacts, refresh rate, subpixel layout, binocular overlap, and eye box size can all sour even the best displays. What is sure though: there is still plenty of room to grow in the spec sheet department before any manufacturer can confidently call their displays retinal.

Filed Under: AR Development, News, VR Development, XR Industry News

‘MultiBrush’ Studio Secures $4.5M Grant to Promote Positive VR Experiences for Elders

November 4, 2025 From roadtovr

Rendever, the company behind Tilt Brush-based multiplayer Quest app MultiBrush (2022), has secured nearly $4.5 million in grant funding from the U.S. National Institutes of Health (NIH), which the company says it will use to bring its elder-focused VR experiences to the home care market.

The studio says in an announcement the latest funding includes $3.8 million for the Thrive At Home Program and an additional grant to build a caregiver support network in VR.

“These funds will pave the way for Rendever to bring their technology to the large majority of individuals and caregivers who are aging in place and lacking in structural social support,” the studio says.

Rendever is currently partnered with the University of California, Santa Barbara, research organization RAND, and home care service Right at Home.

The company says these organizations will help it conduct studies to evaluate the effectiveness of VR technology in building relationships across living environments. The aim is to reduce social isolation, improve mental health, and enhance overall well-being in elders. Additionally, Rendever maintains that studies gauging the impact of caregiving tools, including its recent Dementia & Empathy training program, will continue as a result.

“Our Phase II trial has shown the power of VR to effectively build and enhance family relationships across distances – even across country lines. The future of aging depends on technology that effectively reshapes how we experience these core parts of the human experience as we get older,” said Kyle Rand, Rendever CEO. “We know there’s nothing more holistically impactful than our social health. Over the next three years, we’ll work across the industry to build the next generation of community infrastructure that delivers real happiness and forges new relationships, all while driving meaningful health outcomes.”

While Rendever currently offers VR-assisted therapy for both senior living and healthcare facilities, the company is assembling a beta pilot in certain US regions to test its forthcoming in-home offering.

Additionally, the company announced it's adding Sarah Thomas, an expert on aging and a venture partner in the AgeTech industry, to its Board of Directors.

Filed Under: News, VR Investment, XR Industry News

Sharp is Crowdfunding a Slim & Light PC VR Headset in Japan That Feels Positively Retro

November 3, 2025 From roadtovr

Sharp announced it’s launching a crowdfunding campaign for a slim and light PC VR headset in Japan, called Xrostella VR1.

The News

Sharp first showed off a PC VR headset prototype at CES 2023, which was supposedly meant to ship sometime in 2024. It's been nearly three years since we last heard about the headset; however, during a recent Metaverse Expo in Japan, Sharp unveiled a newer version of the device, as demoed by Gizmodo Japan.

Now, Sharp says the device, dubbed 'Xrostella VR1', is slated to go on sale in Japan via crowdfunding platform Green Funding starting sometime in November.

Sharp Xrostella VR1 Prototype | Image courtesy Gizmodo Japan

Xrostella VR1 connects to either a Windows 11 PC or a limited number of smartphones via a wired connection. The company has confirmed compatibility with Sharp’s AQUOS sense10, with more models soon to be revealed.

Weighing in at just 198g and sporting what Sharp calls in a Japanese-language press statement a “glasses-like design,” the headset includes dual 2,160 × 2,160 LCDs (one per eye) running at up to 90Hz.

It also makes use of what Sharp calls “thin, light-efficient pancake lens[es],” providing a 90-degree field of view (FOV), and cameras for both inside-out 6DOF tracking and color passthrough.

Sharp Xrostella VR1 Prototype | Image courtesy Gizmodo Japan

The included controllers appear to be a standard ‘Touch’-style affair like the ones that shipped with Quest 2 in 2020, replete with tracking rings. That stands in stark contrast to the company’s recent controller prototype, which combines standard button input with a unique haptic glove.

Additionally, Xrostella VR1 features a mechanism for adjusting interpupillary distance (IPD) and diopter from 0D to -9.0D, allowing nearsighted users to wear the headset without glasses.

Pricing has yet to be confirmed; however, Gizmodo Japan speculates it could be “more expensive than the Meta Quest 3,” which is priced at ¥81,400 (~$530 USD).

My Take

If you saw the specs and did a double take, you’re not alone. While having independent diopter adjustments is cool, it’s a shame Sharp is going so weak in the display department, as it essentially delivers a resolution only slightly higher than Quest 3’s.

And while the form factor is interesting on paper, I have my doubts that ~198g will rest lightly on the bridge of your nose without some sort of strap you can crank down, or another way to better distribute weight for longer sessions—making its ‘glasses’ form factor more akin to a headset with rigid, non-adjustable straps. It all smacks of an aging headset design, recalling devices like HTC Vive Flow (2021), which feels remarkably heavy on the face, even at 189g.

Granted, marketing images don’t show the buckled strap system seen below, so there’s no telling what it will ship with. But the fact that the company was demoing with the strap tells me everything I need to know about just how front-heavy it will be.

Sharp Xrostella VR1 Prototype | Image courtesy Gizmodo Japan

Still, it may not be as ‘DOA’ as you might think, despite the thin and light PC VR segment growing to include a bevy of devices: Bigscreen Beyond 2 ($1,020), Pimax’s Dream Air SE ($900 – $1,200) coming in December, and fellow Japanese brand Shiftall, which is also releasing its latest MeganeX PC VR headset in December for $1,900. If Xrostella VR1 lands closer to Quest 3 in price, it would be significantly cheaper than all of those, which would make it really interesting to watch.

That said, Sharp’s VR headset is likely going to be a Japan-only device, which means the company will probably be leaning hard on the fact that it’s being produced and serviced domestically—regardless of price.

While mostly known for televisions and home appliances in the West, Sharp actually holds a significant slice of the smartphone market in Japan. Despite foreign brands like Samsung and Google making recent headway in the country, Sharp remains a trusted name that Japanese consumers may simply feel more comfortable dealing with.

Filed Under: News, PC VR News & Reviews

Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses

October 29, 2025 From roadtovr

Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1, the company’s sensor-packed research glasses, back in 2020, using it internally to train various AR-focused perception systems before releasing it in 2024 to third-party researchers across 300 labs in 27 countries.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any kind, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

Ahead of what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, and Meta’s custom low-power co-processor for real-time perception, and compares the capabilities of Gen 1 and Gen 2.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.

My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does with its various generations of Ray-Ban Meta or Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect—even if they’re substantively similar on the face of things. They’re squarely considered ‘research glasses’ by the company.

Cool, but why? Why does a company that already makes smart glasses with and without displays, plus cool prototype AR glasses, need to put out what’s essentially the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework that research teams will build on, but it’s also doing it at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering for projects based on merit. That gives it a chance to guide research without really having to interface with what will likely be substantially more than 300 teams, all of whom will use the glasses to solve problems in how humans can more fluidly interact with an AI system that can see, hear, and know a heck of a lot more about your surroundings than you might at any given moment.

AI is also growing faster than supply chains can keep up, which I think more than necessitates an artisanal pair of smart glasses so teams can get to grips with what will drive the future of AR glasses—the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of way back in 2013 when it co-released the HTC First, which at the time was colloquially called the ‘Facebook phone’. The device was a flop, partly because the hardware was lackluster, but mostly (and I don’t think I’m alone in saying so) because people didn’t want a Facebook phone in their pockets at any price when the ecosystem had so many other (clearly better) choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

Filed Under: AR Development, News, XR Industry News

Researchers Propose Novel E-Ink XR Display with Resolution Far Beyond Current Headsets

October 27, 2025 From roadtovr

A group of Sweden-based researchers has proposed a novel e-ink display solution that could pave the way for super compact, retina-level VR headsets and AR glasses in the future.

The News

Traditional emissive displays are shrinking, but they face physical limits; smaller pixels tend to emit less uniformly and provide less intense light, which is especially noticeable in near-eye applications like virtual and augmented reality headsets.

In a recent research paper published in Nature, a team of researchers presents what it calls a “retinal e-ink display,” which hopes to offer a new solution quite unlike the displays seen in modern VR headsets, which are increasingly adopting micro-OLEDs to reduce size and weight.

The paper was authored by researchers affiliated with Uppsala University, Umeå University, University of Gothenburg, and Chalmers University of Technology in Gothenburg: Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Österlund, Giovanni Volpe, and Kunli Xiong.

While conventional e-paper has struggled to reach the resolution necessary for realistic, high-fidelity images, the team proposes a new form of e-paper featuring electrically tunable “metapixels” only about 560 nanometres wide.

This promises a pixel density of over 25,000 pixels per inch (PPI)—an order of magnitude denser than displays currently used in headsets like Samsung Galaxy XR or Apple Vision Pro. Those headsets have a PPI of around 4,000.
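
For a sense of scale, here's a quick back-of-the-envelope conversion (my own arithmetic, not from the paper) between pixel pitch and pixels per inch: an effective pitch of roughly one micrometre, i.e. the 560 nm metapixel plus spacing, lands in the reported >25,000 PPI range, while ~4,000 PPI implies a pitch of several micrometres.

```python
# Pixel pitch <-> PPI conversion. The pitch values used below are
# illustrative assumptions, not figures taken from the paper.

MICROMETRES_PER_INCH = 25_400.0

def ppi_from_pitch(pitch_um: float) -> float:
    """Pixels per inch for a given pixel pitch in micrometres."""
    return MICROMETRES_PER_INCH / pitch_um

def pitch_from_ppi(ppi: float) -> float:
    """Pixel pitch in micrometres implied by a pixels-per-inch figure."""
    return MICROMETRES_PER_INCH / ppi

if __name__ == "__main__":
    print(f"~1.0 um pitch -> {ppi_from_pitch(1.0):,.0f} PPI")      # ~25,400 PPI
    print(f"4,000 PPI     -> {pitch_from_ppi(4000):.2f} um pitch")  # ~6.35 um
```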

Image courtesy Nature

As the paper describes it, each metapixel is made from tungsten trioxide (WO₃) nanodisks that undergo a reversible insulator-to-metal transition when electrically reduced. This process dynamically changes the material’s refractive index and optical absorption, allowing nanoscale control of brightness and color contrast.

In effect, when lit by ambient light, the display can create bright, saturated colors from pixels far thinner than a human hair, as well as deep blacks, with reported optical contrast ratios around 50%—a reflective equivalent of high dynamic range (HDR).

And the team says it could be useful in both AR and VR displays. The figure below shows a conceptual optical stack for both applications, with Figure A representing a VR display, and Figure B showing an AR display.

Image courtesy Nature

Still, there are some noted drawbacks. Beyond sheer resolution, the display delivers full-color video at “more than 25 Hz,” which is significantly lower than what VR users need for comfortable viewing. In addition to a relatively low refresh rate, researchers note the retina e-paper requires further optimization in color gamut, operational stability and lifetime.

“Lowering the operating voltage and exploring alternative electrolytes represent promising engineering routes to extend device durability and reduce energy consumption,” the paper explains. “Moreover, its ultra-high resolution also necessitates the development of ultra-high-resolution TFT arrays for independent pixel control, which will enable fully addressable, large-area displays and is therefore a critical direction for future research and technological development.”

And while the e-paper display itself is remarkably low-powered, packing in the graphical compute to put those metapixels to work will also be a challenge. It’s a good problem to have, but a problem nonetheless.

My Take

At least as the paper describes it, the underlying tech could produce XR displays with a combination of size and pixel density we’ve never seen before. And reaching the limits of human visual perception is one of those holy grail moments I’ve been waiting for.

Getting that refresh rate up well beyond 25 Hz is going to be extremely important though. As the paper describes it, 25 Hz is good for video playback, but driving an immersive VR environment requires at least 60 Hz refresh to be minimally comfortable. 72 Hz is better, and 90 Hz is the standard nowadays.
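
The arithmetic behind that point is simple: the per-frame budget for rendering and scan-out shrinks quickly as the refresh rate climbs. A quick sketch using the rates mentioned above:

```python
# Per-frame time budget at the refresh rates discussed above.

def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds available to render and display a single frame."""
    return 1000.0 / refresh_hz

if __name__ == "__main__":
    for hz in (25, 60, 72, 90):
        print(f"{hz:>3} Hz -> {frame_budget_ms(hz):5.1f} ms per frame")
    # 25 Hz -> 40.0 ms, 60 Hz -> 16.7 ms, 72 Hz -> 13.9 ms, 90 Hz -> 11.1 ms
```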

I’m also curious to see the e-paper display stacked up against lower resolution micro-OLED contemporaries, if only to see how that proposed ambient lighting can achieve HDR. I have a hard time wrapping my head around it. Essentially, the display’s metapixels absorb and scatter ambient light, much like Vantablack does—probably something that needs to be truly seen in person to be believed.

Healthy skepticism aside, I find it truly amazing we’ve even arrived at the conversation in the first place: we’re at the point where XR displays could recreate reality, at least as far as your eyes are concerned.

Filed Under: AR Development, News, VR Development, XR Industry News

Former Oculus Execs’ AI Smart Glasses Startup ‘Sesame’ Raises $250M Series B Funding

October 24, 2025 From roadtovr

Sesame, an AI and smart glasses startup founded by former Oculus execs, raised $250 million in Series B funding, which the company hopes will accelerate its voice-based AI.

The News

As first reported by TechCrunch, lead investors in Sesame’s Series B include Spark Capital and Sequoia Capital, bringing the company’s overall funding to $307.6 million, according to Crunchbase data.

Exiting stealth earlier this year, Sesame was founded by Oculus co-founder and former CEO Brendan Iribe, former Oculus hardware architect Ryan Brown, and Ankit Kumar, former CTO of AR startup Ubiquity6. Additionally, Oculus co-founder Nate Mitchell announced in June he was joining Sesame as Chief Product Officer, which he noted was to “help bring computers to life.”

Image courtesy Sesame

Sesame is currently working on an AI assistant along with a pair of lightweight smart glasses. Its AI assistant aims to be “the perfect AI conversational partner,” Sequoia Capital says in a recent post.

“Sesame’s vision is to build an ambient interface that is always available and has contextual awareness of the world around you,” Sequoia says. “To achieve that, Sesame is creating their own lightweight, fashion-forward AI-enabled glasses designed to be worn all day. They’re intentionally crafted—fit for everyday life.”

Sesame is currently taking signups for beta access to its AI assistants Miles and Maya in an iOS app, and also has a public preview showcasing a ‘call’ function that allows you to speak with the chatbots.

My Take

Love it or hate it, AI is going to be baked into everything in the future, as contextually aware systems hope to bridge the gap between user input and the expectation of timely and intelligent output. That’s increasingly important when the hardware doesn’t include a display, requiring the user to interface almost entirely by voice.

Some things to watch out for: if the company does commercialize a pair of smart glasses to champion its AI assistant, it will be competing for some pretty exclusive real estate that companies like Meta, Google, Samsung, and Apple (still unconfirmed) are currently gunning for. That puts Sesame at somewhat of a disadvantage if it hopes to go it alone, but not if it’s hoping for a timely exit into the coming wave of smart glasses by being acquired by any of the above.

There’s also some pretty worrying precedent in the rear view mirror too: e.g. Humane’s AI Pin or AI Friend necklace, both of which were publicly lambasted for essentially releasing hardware that could just as easily have been apps on your smartphone.

Granted, Sesame hasn’t shown off its smart glasses hardware yet, so there’s no telling what the company hopes to bring to the table outside of having an easy-to-wear pair of off-ear headphones for all-day AI stuff—that, to me, would be the worst-case scenario, as Meta refines its own smart glasses in partnership with EssilorLuxottica, Google releases Android XR frames with Gentle Monster and Warby Parker, Samsung releases its own Android XR glasses, and Apple does… something. We don’t know yet.

Whatever the case, I’m looking forward to it, if only based on the company’s combined experience in XR, which I’d argue any startup would envy as the race to build the next big computing platform truly takes off.

Filed Under: AR Development, AR Investment, News, XR Industry News

Amazon is Developing Smart Glasses to Allow Delivery Drivers to Work Hands-free

October 23, 2025 From roadtovr

Amazon announced it’s developing smart glasses for its delivery drivers, which include a display for real-time navigation and delivery instructions.

The News

Amazon announced the news in a blog post, which partially confirms a recent report from The Information, which alleged that Amazon is developing smart glasses both for its delivery drivers and consumers.

The report, released in September, maintained that Amazon’s smart glasses for delivery drivers will be bulkier and less sleek than the consumer model. Codenamed ‘Jayhawk’, the delivery-focused smart glasses are expected to roll out as soon as Q2 2026, with an initial production run of 100,000 units.

Image courtesy Amazon

Amazon says the smart glasses were designed and optimized with input from hundreds of delivery drivers, and include the ability to identify hazards, scan packages, capture proof of delivery, and navigate by serving up turn-by-turn walking directions.

The company hasn’t confirmed whether the glasses’ green monotone heads-up display is monoscopic or stereoscopic, however images suggest it indeed features a single waveguide in the right lens.

Moreover, the glasses aren’t meant to be used while driving, as Amazon says they “automatically activate” when the driver parks their vehicle. Only afterwards does the driver receive instructions, ostensibly to reduce the risk of driver distraction.

In addition to the glasses, the system also features what Amazon calls “a small controller worn in the delivery vest that contains operational controls, a swappable battery ensuring all-day use, and a dedicated emergency button to reach emergency services along their routes if needed.”

Additionally, Amazon says the glasses support prescription lenses along with transitional lenses that automatically adjust to light.

As for the reported consumer version, it’s possible Amazon may be looking to evolve its current line of ‘Echo Frames’ glasses. First introduced in 2019, Echo Frames support AI voice control, music playback, calls, and Alexa smart home control, although they notably lack any sort of camera or display.

My Take

I think Amazon has a good opportunity to dogfood (aka, use its own technology) here on a pretty large scale—probably much larger than Meta or Google could initially with their first generation of smart glasses with displays.

That said, gains made in enterprise smart glasses can be difficult to translate to consumer products, which will necessarily include more functions and apps, and likely require more articulated input—all of the things that can make or break any consumer product.

Third-gen Echo Frames | Image courtesy Amazon

Amazon’s core strength though is generally less focused on high-end innovation, and more about creating cheap, reliable hardware that feeds into recurring revenue streams: Kindle, Fire TV, Alexa products, etc. Essentially, if Amazon can’t immediately figure out a way to make consumer smart glasses feed into its existing ecosystems, I wouldn’t expect to see the company put its full weight behind the device, at least not initially.

After the 2014 failure of Fire Phone, Amazon may still be gun-shy about diving head-first into a segment it has near-zero experience in. And I really don’t count Echo Frames, because they’re primarily just Bluetooth headphones with Alexa support baked in. Still, real smart glasses with cameras and displays represent a treasure trove of data that the company may not be so keen to pass up.

Using object recognition to peep into your home or otherwise follow you around could allow Amazon to better target personalized suggestions, figure out brand preferences, and even track users as they shop at physical stores. Whatever the case, I bet the company will give it a go, if only to occupy the top slot when you search “smart glasses” on Amazon.

Filed Under: AR Development, News, XR Industry News

Shiftall Announces Next Thin & Light ‘MeganeX’ PC VR Headset, Shipping in December for $1,900

October 16, 2025 From roadtovr

Shiftall unveiled its next PC VR headset, the MeganeX “8K” Mark II, which is slated to ship in December for $1,900.

The News

Japan-based Shiftall announced MeganeX “8K” Mark II, the follow-up to its thin and light PC VR headset originally launched late last year, the MeganeX superlight “8K”.

The new version is essentially a hardware refresh with only a few notable changes, which mostly aim to improve comfort, durability, and system internals.

Shiftall MeganeX “8K” Mark II | Image courtesy Shiftall

The headset contains the same 3,552 × 3,840 per-eye micro-OLEDs, supporting up to 90 Hz refresh, and the same SteamVR tracking standard, which requires the user to buy SteamVR 1.0/2.0 base stations separately.

Here’s a breakdown of all of the changes announced by Shiftall:

  • New chip: The CPU and operating system (OS) have been upgraded, and the firmware has been newly developed, reducing startup time to less than one-fifth of the previous model’s. Connection stability with PCs and SteamVR has been improved, and the firmware update process has been made more reliable.
  • New pancake lenses: Shiftall says they’re newly designed by the Panasonic Group.
  • Redesigned USB-C cable connection: previously located on the top of the headset, the USB-C port has been moved to the front and structurally reinforced for improved durability. A specially developed intermediate USB cable enhances connection stability and prevents issues caused by wear or accidental disconnection.
  • Refined nose gap: Sharp plastic edges no longer come into contact with ‘Western’ nose shapes. The material and shape around the nose area have been improved for greater comfort.
  • New strap material: A new strap material has been adopted, with improved durability for the hook-and-loop fastener.

Estimated to start shipping in late December, MeganeX Mark II is now available for pre-order.

The headset (SteamVR base stations not included) is priced at $1,900 in the US (excluding import duty), €1,900 in Europe (VAT included), £1,600 in the UK (VAT included), and ₩2,499,000 in South Korea (VAT included).

Specs

Feature | MeganeX Superlight “8K” | MeganeX “8K” Mark II
Display | 3,552 × 3,840 (micro-OLED, 10-bit HDR) | 3,552 × 3,840 (micro-OLED, 10-bit HDR)
Refresh rates | 90 Hz (support for 75 Hz / 72 Hz) | 90 Hz (support for 75 Hz / 72 Hz)
Lens type | Pancake lenses (Panasonic Group) | Pancake lenses (newly designed by Panasonic)
Weight (main body) | < 185 g | 179 g
IPD & focus adjustment | Electric IPD 58–72 mm; diopter adjust 0D to –7D | Electric IPD 58–72 mm; diopter adjust 0D to –7D
Connectivity / tracking ecosystem | DisplayPort + USB 2.0; SteamVR tracking (base stations required) | DisplayPort + USB 2.0; SteamVR tracking (base stations required)

My Take

You may have noticed I’ve put “8K” in quotes throughout this announcement. That’s to indicate that the headset doesn’t actually provide 8K per-eye displays.

While companies like Shiftall and Pimax typically err on the side of the biggest number, I see this as more of a marketing device than a true reflection of what the end user actually sees. Because it’s using dual 3,552 × 3,840 micro-OLEDs, the user doesn’t actually perceive an 8K image. By that logic, Quest 3 could be labeled “4K”, owing to its dual 2,064 × 2,208 displays, and Oculus Rift CV1 could be labeled “2K” thanks to its dual 1,080 × 1,200 displays. Impressive sounding, but a bit misleading.

That said, Shiftall thinks resolution is a better catch-all spec for VR headsets, which I disagree with, since its target audience will probably understand the nuances of displays and optics anyway.

“We have decided against publishing official FOV and PPD numbers,” Shiftall says, referring to the original MeganeX superlight “8K”. “If an industry-standard measurement method were established, such as the method used to calculate fuel consumption for automobiles, we would disclose our figures, but this is not the case in the current VR industry.”

Still, I suspect potential enterprise and prosumers looking to shell out $1,900 for a single headset—no controllers or base stations included—are already familiar with pixels per degree (PPD) and binocular overlap, which are more useful, albeit less flashy metrics. On that front, MeganeX “8K” Mark II is impressive. Its pancake lenses provide a reported ~100-degree horizontal FOV, which seems to deliver a near 100 percent binocular overlap.

Using the formula to get PPD (Horizontal Pixel Count ÷ Horizontal Field of View), it also tops the competition, coming out to around 35.5 PPD: larger than Pimax Dream Air ($2,000) at 35 PPD, and Bigscreen Beyond 2 ($1,020) at 32 PPD.
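
As a quick sanity check (my own arithmetic, using the formula quoted above and the reported ~100-degree figure, so treat the output as approximate), here's the MeganeX number worked out, along with how many horizontal pixels a 100-degree headset would need to hit the 60 PPD and 94 PPD marks discussed earlier:

```python
# Worked check of the PPD formula: horizontal pixels / horizontal FOV.

def ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    return horizontal_pixels / horizontal_fov_deg

if __name__ == "__main__":
    # MeganeX "8K" Mark II: 3,552 horizontal pixels over a reported ~100-degree FOV.
    print(f"MeganeX '8K' Mark II: {ppd(3552, 100.0):.1f} PPD")  # ~35.5

    # Horizontal pixels needed at 100 degrees to reach common 'retinal' targets.
    for target_ppd in (60, 94):
        print(f"{target_ppd} PPD over 100 degrees needs {target_ppd * 100:,} horizontal pixels")
```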

Whatever the case, I think it’s time to retire these sorts of resolution claims championed outside of the spec sheet, if only to lend more credibility to the company in question. And the same goes for the questionable Photoshop jobs too.

Filed Under: News, PC VR News & Reviews

Samsung to Launch Project Moohan XR Headset at Galaxy Event on October 21st

October 15, 2025 From roadtovr

Samsung announced it’s holding a Galaxy Event on October 21st, which will feature Project Moohan, the company’s long-awaited Apple Vision Pro competitor.

The News

The livestream event is slated to take place on October 21st at 10PM ET (local time here), which is said to focus on “the future of AI” and Project Moohan.

“Come meet the first official device on Android XR—Project Moohan,” the video’s description reads.

There’s no official indication yet on what the headset will be priced, or even officially named at this point. A previous report from South Korea’s Newsworks suggests it could cost somewhere between ₩2.5 and ₩4 million South Korean won, or between $1,800 and $2,900 USD.

The company’s event site does however allow users to register for a $100 credit, valid when purchasing qualifying Galaxy products.

We’re hoping to learn more about the headset’s specs and promised VR motion controllers, which Samsung has yet to reveal.

Since our previous hands-on from last year, we’ve learned Project Moohan includes a Qualcomm Snapdragon XR2+ Gen 2, dual micro‑OLED panels, pancake lenses, automatic interpupillary distance (IPD) adjustment, support for eye and hand-tracking, an optional magnetically-attached light shield, and a removable external battery pack.

My Take

Personally, the teaser doesn’t really serve up the sort of “wow” factor I was hoping for, as it highlights some fairly basic stuff seen in XR over the past decade. Yes, it really has been that long.

While I don’t expect Moohan to stop at a Google Earth VR-style map and immersive video—neat as those things are—it’s interesting to me the company thought those two things were worthy additions to a launch day teaser for its first XR headset since the release of Samsung Odyssey+ in 2018.

Samsung Odyssey+ | Image courtesy Samsung

As the first official headset supporting Google’s Android XR operating system though, I expect the event will also focus on Moohan’s ability to not only use the standard library of Android apps and native XR stuff, but also XR productivity—provided Samsung really wants to go toe-to-toe with Vision Pro.

By all accounts, Moohan is a capable XR headset, but I wonder how much gas Samsung will throw at it now that Apple is reportedly shifting priorities to focus on Meta-style smart glasses instead of developing a cheaper and lighter Vision Pro. While Apple is still apparently moving ahead with Vision Pro’s M5 hardware refresh, which is rumored to release soon, that’s going to mostly appeal to enterprise users, which leaves Samsung to navigate a potentially awkward middle ground between Meta and Apple.

Moohan’s market performance may also dictate how other manufacturers adopt Android XR. And there’s worrying precedent. Google did the same thing with Lenovo Mirage Solo in 2018, which was supposed to be the first headset to support its Android-based Daydream platform before Google pulled the plug due to poor engagement. Here’s to hoping history doesn’t repeat itself.

Filed Under: News, VR Development, XR Industry News

Lynx Teases Next Mixed Reality Headset for Enterprise

October 13, 2025 From roadtovr

Lynx teased its next mixed reality headset, which is set to target enterprise and professional users for tasks like training and remote assistance.

The News

At MicroLED Connect last month, Lynx CEO Stan Larroque announced he aimed to reveal the company’s next mixed reality standalone sometime in mid-November.

However, Somnium CEO Artur Sychov, a major investor in the company, beat Lynx to the punch by posting a cropped image of the France-based company’s next device.

I will just say this – Lynx next headset news is going to be wild… 💣

Sorry @stanlarroque, I can’t hold myself not to tease at least something… 😬😅

October & November 2025 will be 🔥 pic.twitter.com/XidrdTqqlp

— Artur Sychov ᯅ (@ASychov) October 10, 2025

In response, Larroque posted the full image, seen above. Here’s a version with the white balance turned up for better visibility, courtesy MRTV’s Sebastian Ang:

Modified image courtesy Sebastian Ang

There’s still a lot to learn, including specs and the device’s official name. From the image, we can tell at least two things: the headset has a minimum of four camera sensors, now positioned on the corners of the device à la Quest 2, and an ostensibly more comfortably headstrap that cups the back of the user’s head.

What’s more, Lynx announced late last year the company intended to integrate Google’s forthcoming Android XR operating system into its next headset, which will also include Samsung Project Moohan and forthcoming XR glasses from XREAL. Lynx hasn’t released any update on progress, so we’re still waiting to hear more.

Lynx R-1 | Image courtesy Lynx

Notably, Lynx R-1, which was initially positioned to target both consumers and professional users through its 2021 Kickstarter campaign that brought in $800,000 in crowdfunding, concluded shipping earlier this year.

According to Larroque’s talk at MicroLED Connect last month, however, it appears the company is focusing hard on the enterprise sector with its next hardware release, including tasks like training and remote assistance.

My Take

Lynx R-1’s unique “4-fold catadioptric freeform prism” optics allow for a compact focal length, putting the displays flush with the lenses and providing a 90-degree field of view (FOV). While pancake lenses are generally thinner and lighter, R-1’s optics have comparably better light throughput, which is important for mixed reality tasks.

Image courtesy Lynx

As a startup that’s weathered an admittedly “excruciating” fundraising environment, though, Lynx will need to make the right hardware choices in its follow-up.

My hunch is the prospective ‘Lynx R-2’ headset will probably keep the same optical stack to save on development and manufacturing costs, and mainly push upgrades to the processor and display, which are likely more important to the sort of enterprise customers Lynx is targeting anyway.

As it is, Lynx R-1 is powered by the Qualcomm Snapdragon XR2 chipset, which was initially released in 2019—the same chip used in Quest 2—so an upgrade there is well overdue. Its 1,600 × 1,600 per-eye LCDs also feel similarly dated.

While an FOV larger than 90 degrees is great, I’d argue that for enterprise hardware that isn’t targeting simulators, clarity and pixel density are probably more important. More info on Lynx’s next-gen headset is due sometime in November, so I’d expect to learn more then.

Filed Under: News, VR Development, XR Industry News
