
VRSUN

Hot Virtual Reality News

HOTTEST VR NEWS OF THE DAY


XR Industry News

Apple Design Lead Heads to Meta, Hopefully to Fix Longstanding Quest UX Issues

December 5, 2025 From roadtovr

Apple’s Vice President of Human Interface design, Alan Dye, is leaving the company to lead a new studio within Meta’s Reality Labs division. The move appears to be aimed at raising the bar on the user experience of Meta’s glasses and headsets.

The News

According to his LinkedIn profile, Alan Dye spent nearly 20 years as Apple’s Vice President of Human Interface Design. He was a driving force behind the company’s UI and UX direction, including Apple’s most recent ‘Liquid Glass’ interface overhaul and the VisionOS interface that’s the foundation of Vision Pro.

Now Dye is heading to Meta to lead a “new creative studio within Reality Labs,” according to an announcement by Meta CEO Mark Zuckerberg.

“The new studio [led by Dye] will bring together design, fashion, and technology to define the next generation of our products and experiences. Our idea is to treat intelligence as a new design material and imagine what becomes possible when it is abundant, capable, and human-centered,” Zuckerberg said. “We plan to elevate design within Meta, and pull together a talented group with a combination of craft, creative vision, systems thinking, and deep experience building iconic products that bridge hardware and software.”

The new studio within Reality Labs will also include Billy Sorrentino, another high level Apple designer; Joshua To, who has led interface design at Reality Labs; Meta’s industrial design team, led by Pete Bristol; and art teams led by Jason Rubin, a longtime Meta executive who has been with the company since its 2014 acquisition of Oculus.

“We’re entering a new era where AI glasses and other devices will change how we connect with technology and each other. The potential is enormous, but what matters most is making these experiences feel natural and truly centered around people. With this new studio, we’re focused on making every interaction thoughtful, intuitive, and built to serve people,” said Zuckerberg.

My Take

I’ve been ranting about the fundamental issues of the Quest user experience and interface (UX & UI) for literally years at this point. Meta has largely hit it out of the park with its hardware design, but the software side of things has lagged far behind what we would expect from one of the world’s leading software companies. A post on X from less than a month ago sums up my thoughts:

It’s crazy to see Meta take one step forward with its Quest UI and two steps back, over and over again for years.

They keep piling on new features with seemingly no top-down vision for how the interface should work or feel. The Quest interface is as scattered, confusing, and unpolished as ever.

The new Navigator is an improvement for simply accessing app icons, but it feels like it’s using a completely different paradigm than the rest of the window / panel management interface. Not to mention that the system interface speaks a vastly different language than the Horizon interface.

I have completely lost faith that Meta will ever get a handle on this after watching the interface meander in random directions year after year, punctuated by “refreshes” that look promising but end up being forgotten about 6 months later.

It seems Meta is trying to course-correct before things get further out of hand. If pulling in one of the world’s most experienced individuals at creating cohesive UX & UI at scale is what it takes, then I’m glad to see it happening.

Apple has set a high bar for how easy a headset should be to use. I use both Vision Pro and Quest on a regular basis, and moving between them is a night-and-day difference in usability and polish. And as I’ve said before, the high cost of Vision Pro has little to do with why its interface works so much better; the high level design decisions—which would work similarly well on any headset—are a much more significant factor.

Back when Meta was still called Facebook, the company had a famous motto: “Move fast and break things.” Although the company no longer champions this motto, it seems like it has had a hard time leaving it behind. The scattered, unpolished, and constantly shifting nature of the Quest interface could hardly embody the motto more clearly.

“Move fast and break things” might have worked great in the world of web development, but when it comes to creating a completely new interface paradigm for the brand new medium of VR, it hasn’t worked so well.

Of course, Dye’s onboarding and the new studio within Reality Labs aren’t only about Quest. In fact, they might not even be mostly about Quest. If I’ve learned anything about Zuckerberg over the years, it’s that he’s a very long-term thinker and does what he can to move his company where it needs to go to be in the right place 5 or 10 years down the road.

And in 5 to 10 years, Zuckerberg hopes Meta will be dominant, not just with immersive headsets, but AI smart glasses (and likely unreleased devices) too. This new team will likely not be focused on fixing the current state of the Quest interface, but instead trying to define a cohesive UX & UI for the company’s entire ecosystem of devices.

With Alan Dye heading to Meta, there’s a good chance that he will bring with him decades of Apple design processes that have worked well for the company over many years. But I have a feeling it will be a significant challenge for him to change “move fast and break things” to “move slow and polish things” within Meta.

Filed Under: News, XR Industry News

Alibaba Launches Smart Glasses to Rival Meta Ray-Ban Display

December 2, 2025 From roadtovr

Alibaba released a pair of display-clad smart glasses, ostensibly looking to go toe-to-toe with Meta Ray-Ban Display, which launched in the US for $800 back in September.

The News

China’s Alibaba, one of the world’s largest retailers and e-commerce companies, just released its first smart glasses, called Quark AI Glasses, which run the company’s own Qwen AI model.

Image courtesy Reuters

Seemingly China-only for now, Quark AI Glasses are available in two fundamental versions across Chinese online and brick-and-mortar retailers:

  • Quark AI Glasses S1: starting at ¥3,799 (~$540 USD), includes dual monochrome green displays
  • Quark AI Glasses G1: starting at ¥1,899 (~$270 USD), no displays, sharing core technology of ‘S1’ model

Quark AI Glasses S1 is equipped with a Qualcomm Snapdragon AR1 chipset and a low-power co-processor which drive dual monochrome green micro-OLED displays, boasting a brightness of up to 4,000 nits, according to South China Morning Post.

It also features a five-microphone array with bone conduction, 3K video recording which can be automatically upscaled to 4K, as well as low-light enhancement tech said to bring mobile phone-level imaging to smart glasses. Additionally, Quark AI Glasses S1 include hot-swappable batteries, which plug into the glasses’ stem piece.

You can see the English dubbed version of the Chinese language announcement below:

My Take

At least when it comes to on-paper specs, Quark AI Glasses S1 aren’t exactly a 1:1 rival with Meta Ray-Ban Display, even though both technically include display(s), onboard AI, and the ability to take photos and video.

While Meta Ray-Ban Display only features a single full-color display, Quark S1’s dual displays offer only monochrome green output, which limits the sort of information that can be shown.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

Quark S1 also doesn’t come with an input device, like Meta Ray-Ban Display’s Neural Band, limiting it to voice and touch input only. That means Quark S1 users won’t be scrolling social media, pinching and zooming content, or doing other nifty UI manipulations.

Still, that might be just enough—at least one of the world’s largest e-commerce, cloud infrastructure, and FinTech companies thinks so. Also not to be overlooked is Quark S1’s unique benefit of being tightly integrated into the Qwen AI ecosystem, as well as the Chinese payment infrastructure for fast and easy QR code-based payments with Alipay; that last one is something most Chinese smart glasses are trying to hook into, including Xiaomi’s own Ray-Ban Meta competitors.

Although the company’s Qwen AI model is available globally, I find it pretty unlikely that Alibaba will ever bring its first-gen models of Quark AI Glasses S1/G1 outside of its usual sphere of influence, or meaningfully intersect with Meta’s supported regions.

Filed Under: AR Development, News, XR Industry News

FluxPose VR Tracker Raises $2M on Kickstarter, Promising Compact 6DOF Body Tracking

December 1, 2025 From roadtovr

FluxPose is a 6DOF tracking solution for full-body tracking that seems to be picking up speed on Kickstarter, having now garnered over $2 million in crowdfunding since its initial launch on November 29th.

The News

FluxPose is a full-body tracking system that’s said to deliver occlusion-free positional tracking without the need for externally mounted base stations or sensors. It does this by way of a wearable beacon that generates magnetic fields, the team explains on the FluxPose Kickstarter campaign.

“It’s completely occlusion-free, incredibly compact, drift-free, and the trackers last up to 24 hours on a single charge, offering high-end performance in the smallest, lightest form factor possible,” the Logroño, Spain-based team says.

Image courtesy FluxPose

And because the beacon is worn on your body, and automatically synchronizes the tracking space with VR headsets without any additional software, it essentially means the tracking volume moves with you as you move (or more likely, dance) in VR.

Weighing in at 85 grams, the trackers are also impressively compact; note the Dorito for scale in the image below.

Image courtesy FluxPose

At the time of this writing, the cheapest support tier is the ‘Lite Kit’ for €339 (~$394 USD), which comes with three tracking points (straps sold separately). At the higher end is the ‘Pro Kit’ for €689 (~$800 USD), which includes eight tracking points. Notably, those prices do not include taxes or import tariffs.

VR headset mounts provided through the Kickstarter are said to include Quest 2/3/3S/Pro, Pico 4/4 Ultra, Samsung Galaxy XR, HTC Vive Pro/Pro 2/Focus/XR Elite, Bigscreen Beyond 1/2, Valve Index, and Steam Frame. Backers will have the chance to select the exact headset model on a survey after the Kickstarter ends, and again a few months before delivery.

You can find out more over on the FluxPose Kickstarter, which we’ll be following for the campaign’s remaining 58 days, ending on January 28th, 2026. The earliest delivery is expected in August 2026 for early bird supporters, and October 2026 for latecomers to the Kickstarter.

My Take

Magnetically-tracked peripherals aren’t anything new in VR; I’ve seen a number of solutions come and go, with the emphasis mostly on go: Razer Hydra, Sixense Stem, Atraxa, Magic Leap 1 controllers—these implementations seem to be good enough in optimal conditions, but not rock solid across the board.

In short, magnetic trackers position themselves in 3D space by measuring the intensity of the magnetic field in various directions, which (as mentioned above) is generated by a beacon. When the trackers’ measurement point is rotated, the distribution of the magnetic field changes across its various axes, allowing for it to be positionally tracked.
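To make that concrete, here’s a deliberately simplified toy model (my own illustration, not FluxPose’s actual algorithm): treating the beacon as a magnetic dipole, field strength falls off roughly with the cube of distance, so a tracker that knows the beacon’s source strength can invert a field reading into a range estimate.

```python
# Toy illustration of magnetic ranging (NOT FluxPose's real method):
# an on-axis dipole field falls off roughly as B = k / r^3, so a
# measured field magnitude can be inverted into a distance estimate.

def field_magnitude(source_strength: float, distance_m: float) -> float:
    """Simplified on-axis dipole falloff: B = k / r^3."""
    return source_strength / distance_m ** 3

def estimate_distance(source_strength: float, measured_b: float) -> float:
    """Invert the cube-law falloff to recover distance."""
    return (source_strength / measured_b) ** (1 / 3)

k = 1e-6                      # hypothetical beacon source constant
b = field_magnitude(k, 0.5)   # simulate a reading at 0.5 m
print(round(estimate_distance(k, b), 3))  # recovers 0.5
```

Real systems measure the field along three axes and fuse readings across multiple coils (plus IMU data) to recover full position and orientation, but the cube-law inversion above is the core idea.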

And while those magnetically-tracked peripherals listed above don’t suffer from optical occlusion, they can be affected by external magnetic fields, ferromagnetic materials in the tracking volume, and conductive materials near the emitter or sensor. These things typically reduce tracking quality, making them less reliably accurate than optical (Quest 3) or laser-positioned systems (SteamVR base stations).

Granted, I haven’t tried FluxPose yet, although I don’t think those drawbacks are nearly as important in full-body tracking as they might be in actual motion controllers, which require much higher accuracy. A few millimeters’ discrepancy in your foot’s position really doesn’t matter as much as it might if you were reaching out and trying to grab something with a magnetically-tracked controller.

Provided Road to VR doesn’t get to go hands-on in the coming months, I’ll be keeping my eyes peeled for videos and articles as we move closer to the campaign’s close next month.

Filed Under: News, VR Development, XR Industry News

Pico Reportedly Releasing Vision Pro Competitor in 2026 with Self-developed Chip

November 26, 2025 From roadtovr

Zhenyuan Yang, Vice President of Technology at Pico parent company ByteDance, reportedly revealed plans for Pico’s next XR headset, which is said to sport a self-developed display chip and 4,000 PPI microOLED display.

The News

According to Chinese news outlet Science and Technology Innovation Board Daily (via Nweon), Yang was speaking at ByteDance’s annual scholarship award ceremony when he mentioned specific plans to release a new Pico XR headset in 2026.

Development of the self-developed chip began in 2022, Yang reportedly revealed on stage, noting that the chip is now in mass production. It is said to overcome real-time processing bottlenecks in high-resolution, high-frame-rate mixed reality video, reducing system latency to about 12 ms while maintaining high-precision image quality.

It’s also said to improve performance in SLAM, motion compensation, and inverse-distortion workloads, which demand high compute efficiency on low-power devices, Science and Technology Innovation Board Daily reports.

Image courtesy PICO

Supposedly slated to launch in 2026, the headset will pair this chip with a custom microOLED display which is said to approach 4,000 PPI—slightly higher than that of Apple Vision Pro’s 3,386 PPI.

According to the report, Pico’s microOLED display reaches an average 40 PPD (over 45 at center), and addresses brightness limitations by incorporating microlens array (MLA) technology and optical compensation for uniform color and luminance. Additionally, Pico is developing its own data-capture systems to train advanced eye-tracking, gesture-tracking, and spatial-understanding models.

Yang emphasized that since 2023, ByteDance has shifted Pico’s strategy away from aggressive content and marketing spending toward long-term technological investment, increasing XR R&D rather than retreating from the market.

“In 2023, we decided to reduce our investment in content and marketing, and instead focus more firmly on our technology strategy,” Yang said (machine translated from Chinese). “This was because the hardware experience of our products was not yet mature enough to support large-scale market applications. This adjustment led to some misunderstandings at the time, with many people saying that ByteDance was no longer pursuing this direction. In fact, quite the opposite.”

This follows an initial report from The Information this summer, which alleged Pico was developing a pair of slim and light MR “goggles,” reportedly codenamed ‘Swan’, which are said to weigh just 100 grams.

My Take

More competition is great, although US-based audiences hoping for a new Vision Pro competitor from Pico may be left waiting.

The company’s headsets are typically only available in China, East and Southeast Asia, and Europe—but not in North America, and not for lack of trying either. An additional stumbling block: Pico headsets have typically been priced above Meta’s equivalents, which has limited their appeal in Meta-supported regions.

Still, ByteDance, the parent company behind TikTok and Chinese equivalent platform Douyin, has actually overtaken Meta in revenue, putting the parent company in a better position than ever to bolster its XR platform as a premium offering globally.

Filed Under: News, VR Development, XR Industry News

Former Magic Leap Engineers Launch No-code AR Creation Platform, Aiming to Be ‘Canva of AR’

November 7, 2025 From roadtovr

Trace, a startup founded by former Magic Leap engineers, today announced the launch of a new augmented reality creation platform the company hopes will become the “Canva of AR”.

The News

Trace says it’s targeting everyone from global brands to independent creators wanting to build location-based immersive AR content, according to a recent press statement.

Notably, the platform doesn’t require coding or advanced design expertise, allowing users to design, drop, and share interactive AR experiences across mobile devices, headsets, and AR glasses.

To boot, Trace says it’s launching the platform at a pivotal moment; Adobe has officially discontinued its Aero AR platform, and Meta’s Spark AR platform was retired in January 2025. To seize the moment, Trace is offering three free months of its premium plan to Aero and Spark users who migrate to its platform.

“Even as XR devices become more capable, the creator ecosystem is still really limited,” said Martin Smith, Trace’s CTO and co-founder. “Empowering creators to build and share their vision is such an important part of the picture, whether they’re an educator, an artist, or a Fortune 500 brand. Trace runs anywhere, scales instantly, and supports the fidelity AR deserves.”

Founded in 2021, Trace has already worked with a host of early enterprise adopters, including ESPN, T-Mobile, Qualcomm, Telefónica, Lenovo, and Deutsche Telekom, who have used Trace for marketing, visualization, employee training, and trade show installations at Mobile World Congress and the Hip Hop 50 Summit.

Trace’s creation platform is available to download for free on iPhone and iPad through the App Store, with an optional premium subscription available starting at $20 per month. Creations can currently be viewed through the Trace Viewer app available for free on the App Store and Google Play, and users can import their existing 3D assets in the Web Studio, available at studio.trace3d.app.

My Take

There’s a reason Meta and Adobe haven’t put much effort into their respective AR creation platforms lately: all-day AR glasses are still relatively far away, and the usual cadre of XR headset and glasses makers are only now stepping into smart glasses, ahead of what could be a multi-year leadup to true all-day AR glasses.

Still, enterprise-level AR activations on mobile and mixed reality headsets, like Apple Vision Pro and Quest 3, can turn more than a few heads, making a quick and easy no-code solution ideal for companies and independent creators looking for a selective reach.

Quest 3 (left) and Apple Vision Pro (right) | Based on images courtesy Meta, Apple

I would consider Trace’s strategy of offering former Adobe Aero and Meta Spark users three free months a pretty shrewd move to grab some market share out of the gate too, which is increasingly important since the platform is the company’s sole occupation—and not a side project like it was for Adobe and Meta.

The more challenging test will be to see how Trace grows during that interminable leadup to widespread AR glasses though, and how it weathers the competition sure to come from platform holders offering similarly easy-to-use AR creation suites of their own.

While the platform’s wide target and ease of use are big pluses, I can see it more squarely fitting in the enterprise space than something regular consumers might latch onto—which is probably the ideal fit for a company founded by Magic Leap alumni, who have undoubtedly learned a sharp lesson firsthand: Magic Leap’s early flirtation with prosumers at the 2018 launch of Magic Leap One eventually forced the company to pivot to enterprise two years later.

Filed Under: AR Development, News, XR Industry News

Cambridge & Meta Study Raises the Bar for ‘Retinal Resolution’ in XR

November 5, 2025 From roadtovr

It’s been a long-held assumption that the human eye is capable of detecting a maximum of 60 pixels per degree (PPD), which is commonly called ‘retinal’ resolution. Any more than that, and you’d be wasting pixels. Now, a recent University of Cambridge and Meta Reality Labs study published in Nature maintains the upper threshold is actually much higher than previously thought.

The News

As the University of Cambridge’s news site explains, the research team measured participants’ ability to detect specific display features across a variety of scenarios: both in color and greyscale, looking at images straight on (aka ‘foveal vision’), through their peripheral vision, and from both close up and farther away.

The team used a novel sliding-display device (seen below) to precisely measure the visual resolution limits of the human eye, which seem to overturn the widely accepted benchmark of 60 PPD commonly considered as ‘retinal resolution’.

Image courtesy University of Cambridge, Meta

Essentially, PPD measures how many display pixels fall within one degree of a viewer’s visual field; it’s sometimes seen on XR headset spec sheets to better communicate exactly what the combination of field of view (FOV) and display resolution actually means to users in terms of visual sharpness.
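As a rough illustration of the math (my own back-of-the-envelope sketch, not a formula from the study): average PPD is simply per-eye horizontal resolution divided by the horizontal FOV those pixels span, while peak PPD additionally depends on how the lens distributes pixels across the view.

```python
# Back-of-the-envelope average pixels-per-degree (PPD) from headset
# specs. Real peak PPD depends on lens distortion, so treat this as
# a rough estimate rather than a spec-sheet figure.

def average_ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average PPD: per-eye horizontal pixels divided by the FOV they span."""
    return horizontal_pixels / horizontal_fov_deg

# Illustrative numbers in the ballpark of a Quest 3
# (2064 px per eye across a roughly 110-degree horizontal FOV):
print(round(average_ppd(2064, 110), 1))  # ~18.8
```

This also shows why peak PPD figures (like the 22–25 for consumer headsets cited below) run higher than the naive average: lenses concentrate more pixels per degree at the center of the view.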

According to the researchers, foveal vision can actually perceive much more than 60 PPD—more like up to 94 PPD for black-and-white patterns, 89 PPD for red-green, and 53 PPD for yellow-violet. Notably, the study had a few outliers in the participant group, with some individuals capable of perceiving as high as 120 PPD—double the upper bound for the previously assumed retinal resolution limit.

The study also holds implications for foveated rendering, which uses eye-tracking to reduce rendering quality in an XR headset user’s peripheral vision. The study maintains that foveated rendering, traditionally optimized around black-and-white contrast sensitivity, could further reduce bandwidth and computation by lowering resolution for specific color channels.

So, for XR hardware engineers, the team’s findings point to a new target for true retinal resolution. For a more in-depth look, you can read the full paper in Nature.

My Take

While you’ll be hard-pressed to find accurate info on each headset’s PPD—some manufacturers prefer touting pixels per inch (PPI), while others focus on raw resolution numbers—not many come close to reaching 60 PPD, let alone the revised retinal resolution suggested above.

According to data obtained from XR spec comparison site VRCompare, consumer headsets like Quest 3, Pico 4, and Bigscreen Beyond 2 tend to have a peak PPD of around 22-25, which describes the most pixel-dense area at dead center.

Meta ‘Butterscotch’ varifocal prototype (left), ‘Flamera’ passthrough prototype (right) | Image courtesy Meta

Prosumer and enterprise headsets fare slightly better, but only just. Estimating from available data, Apple Vision Pro and Samsung Galaxy XR boast a peak PPD of between 32-36.

Headsets like Shiftall MeganeX Superlight “8K” and Pimax Dream Air have around 35-40 peak PPD. On the top end of the range is Varjo, which claims its XR-4 ($8,000) enterprise headset can achieve 51 peak PPD through an aspheric lens.

Then there are prototypes like Meta’s ‘Butterscotch’ varifocal headset, which the company showed off in 2023 and which is said to sport 56 PPD (not confirmed whether average or peak).

Still, there’s a lot more that factors into ‘perfect’ visuals beyond PPD, peak or otherwise. Optical artifacts, refresh rate, subpixel layout, binocular overlap, and eye box size can all sour even the best displays. What is certain, though: there is still plenty of room to grow in the spec sheet department before any manufacturer can confidently call their displays retinal.

Filed Under: AR Development, News, VR Development, XR Industry News

‘MultiBrush’ Studio Secures $4.5M Grant to Promote Positive VR Experiences for Elders

November 4, 2025 From roadtovr

Rendever, the company behind Tilt Brush-based multiplayer Quest app MultiBrush (2022), has secured nearly $4.5 million in grant funding from the U.S. National Institutes of Health (NIH), which the company says it will use to bring its elder-focused VR experiences to the home care market.

The studio says in an announcement the latest funding includes $3.8 million for the Thrive At Home Program and an additional grant to build a caregiver support network in VR.

“These funds will pave the way for Rendever to bring their technology to the large majority of individuals and caregivers who are aging in place and lacking in structural social support,” the studio says.

Rendever is currently partnered with the University of California, Santa Barbara, research organization RAND, and home care service Right at Home.

The company says these organizations will help it conduct studies to evaluate the effectiveness of VR technology in building relationships across living environments. The aim is to reduce social isolation, improve mental health, and enhance overall well-being in elders. Additionally, Rendever maintains studies gauging the impact of caregiving tools, including its recent Dementia & Empathy training program, will continue as a result.

“Our Phase II trial has shown the power of VR to effectively build and enhance family relationships across distances – even across country lines. The future of aging depends on technology that effectively reshapes how we experience these core parts of the human experience as we get older,” said Kyle Rand, Rendever CEO. “We know there’s nothing more holistically impactful than our social health. Over the next three years, we’ll work across the industry to build the next generation of community infrastructure that delivers real happiness and forges new relationships, all while driving meaningful health outcomes.”

While Rendever currently offers VR-assisted therapy for both senior living and healthcare facilities, the company is assembling a beta pilot in certain geographic regions of the US to test its forthcoming in-home offering.

Additionally, the company announced it’s adding Sarah Thomas, an expert on aging and venture partner in the AgeTech industry, to its Board of Directors.

Filed Under: News, VR Investment, XR Industry News

Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses

October 29, 2025 From roadtovr

Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1 back in 2020, the company’s sensor-packed research glasses which it used internally to train various AR-focused perception systems, in addition to releasing it in 2024 to third-party researchers across 300 labs in 27 countries.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any type, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

Ahead of what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, Meta’s custom low-power co-processor for real-time perception, and compares Gen 1 and Gen 2’s abilities.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.

My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does with its various generations of Ray-Ban Meta or Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect—even if they’re substantively similar on the face of things. They’re squarely considered ‘research glasses’ by the company.

Cool, but why? Why does a company that already makes smart glasses with and without displays, plus cool prototype AR glasses, need to put out what’s substantively the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework that research teams will build on, but it’s also doing it at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering projects based on merit. In effect, it gets to guide research without really having to interface with what will likely be substantially more than 300 teams, all of whom will use the glasses to solve problems in how humans can more fluidly interact with an AI system that can see, hear, and know a heck of a lot more about your surroundings than you might at any given moment.

AI is also growing faster than supply chains can keep up, which I think more than necessitates an artisanal pair of smart glasses so teams can get to grips with what will drive the future of AR glasses—the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of way back in 2013 when it co-released the HTC First, which at the time was colloquially called the ‘Facebook phone’.
The device was a flop, partly because the hardware was lackluster, but mostly (and I think I’m not alone in saying so) because people didn’t want a Facebook phone in their pockets at any price when the ecosystem had so many other, clearly better, choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

Filed Under: AR Development, News, XR Industry News

Researchers Propose Novel E-Ink XR Display with Resolution Far Beyond Current Headsets

October 27, 2025 From roadtovr

A group of Sweden-based researchers proposed a novel e-ink display solution that could make way for super compact, retina-level VR headsets and AR glasses in the future.

The News

Traditional emissive displays are shrinking, but they face physical limits; smaller pixels tend to emit less uniformly and provide less intense light, which is especially noticeable in near-eye applications like virtual and augmented reality headsets.

In a recent research paper published in Nature, a team of researchers presents what it calls a “retinal e-ink display”, which aims to offer a solution quite unlike the displays seen in modern VR headsets today, which are increasingly adopting micro-OLEDs to reduce size and weight.

The paper was authored by researchers affiliated with Uppsala University, Umeå University, University of Gothenburg, and Chalmers University of Technology in Gothenburg: Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Österlund, Giovanni Volpe, and Kunli Xiong.

While conventional e-paper has struggled to reach the resolution necessary for realistic, high-fidelity images, the team proposes a new form of e-paper featuring electrically tunable “metapixels” only about 560 nanometres wide.

This promises a pixel density of over 25,000 pixels per inch (PPI)—an order of magnitude denser than displays currently used in headsets like Samsung Galaxy XR or Apple Vision Pro. Those headsets have a PPI of around 4,000.
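To get a feel for those numbers, here's a quick back-of-the-envelope conversion between pixel pitch and pixel density. The 560 nm metapixel width and the PPI figures come from the paper and the coverage above; note that a full-color pixel may group several metapixels, which is one way the quoted ~25,000 PPI can be lower than what a single metapixel's width alone would imply.

```python
# Back-of-the-envelope pitch <-> density arithmetic for the figures above.
NM_PER_INCH = 25_400_000  # 25.4 mm per inch, in nanometres

def ppi_from_pitch_nm(pitch_nm: float) -> float:
    """Pixels per inch if pixels of the given pitch are packed edge-to-edge."""
    return NM_PER_INCH / pitch_nm

def pitch_nm_from_ppi(ppi: float) -> float:
    """Pixel pitch in nanometres implied by a given pixel density."""
    return NM_PER_INCH / ppi

# A single 560 nm metapixel packed edge-to-edge:
print(round(ppi_from_pitch_nm(560)))      # ~45,357 per inch
# The quoted 25,000 PPI corresponds to a pitch of roughly:
print(round(pitch_nm_from_ppi(25_000)))   # ~1,016 nm, about two metapixels
# For comparison, ~4,000 PPI (Vision Pro-class micro-OLED):
print(round(pitch_nm_from_ppi(4_000)))    # ~6,350 nm
```

Even under the conservative 25,000 PPI figure, the pixel pitch is roughly six times finer than today's densest headset displays.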

Image courtesy Nature

As the paper describes it, each metapixel is made from tungsten trioxide (WO₃) nanodisks that undergo a reversible insulator-to-metal transition when electrically reduced. This process dynamically changes the material’s refractive index and optical absorption, allowing nanoscale control of brightness and color contrast.

In effect, when lit by ambient light, the display, itself far thinner than a human hair, can create bright, saturated colors as well as deep blacks, with reported optical contrast ratios around 50%—a reflective equivalent of high-dynamic range (HDR).

And the team says it could be useful in both AR and VR displays. The figure below shows a conceptual optical stack for both applications, with Figure A representing a VR display, and Figure B showing an AR display.

Image courtesy Nature

Still, there are some noted drawbacks. Resolution aside, the display delivers full-color video at “more than 25 Hz,” which is significantly lower than what VR users need for comfortable viewing. In addition to the relatively low refresh rate, the researchers note the retinal e-paper requires further optimization in color gamut, operational stability, and lifetime.

“Lowering the operating voltage and exploring alternative electrolytes represent promising engineering routes to extend device durability and reduce energy consumption,” the paper explains. “Moreover, its ultra-high resolution also necessitates the development of ultra-high-resolution TFT arrays for independent pixel control, which will enable fully addressable, large-area displays and is therefore a critical direction for future research and technological development.”

And while the e-paper display itself is remarkably low-powered, packing in the graphical compute to put those metapixels to work will also be a challenge. It’s a good problem to have, but a problem nonetheless.

My Take

At least as the paper describes it, the underlying tech could produce XR displays with a combination of compactness and pixel density we’ve never seen before. And reaching the limits of human visual perception is one of those holy grail moments I’ve been waiting for.

Getting that refresh rate up well beyond 25 Hz is going to be extremely important though. As the paper describes it, 25 Hz is good for video playback, but driving an immersive VR environment requires at least 60 Hz refresh to be minimally comfortable. 72 Hz is better, and 90 Hz is the standard nowadays.

I’m also curious to see the e-paper display stacked up against its lower-resolution micro-OLED contemporaries, if only to see how that proposed ambient lighting can achieve HDR. I have a hard time wrapping my head around it. Essentially, the display’s metapixels absorb and scatter ambient light, much like Vantablack does—probably something that needs to be seen in person to be believed.

Healthy skepticism aside, I find it truly amazing we’ve even arrived at the conversation in the first place: we’re at the point where XR displays could recreate reality, at least as far as your eyes are concerned.

Filed Under: AR Development, News, VR Development, XR Industry News

Former Oculus Execs’ AI Smart Glasses Startup ‘Sesame’ Raises $250M Series B Funding

October 24, 2025 From roadtovr

Sesame, an AI and smart glasses startup founded by former Oculus execs, raised $250 million in Series B funding, which the company hopes will accelerate its voice-based AI.

The News

As first reported by TechCrunch, lead investors in Sesame’s Series B include Spark Capital and Sequoia Capital, bringing the company’s overall funding to $307.6 million, according to Crunchbase data.

Exiting stealth earlier this year, Sesame was founded by Oculus co-founder and former CEO Brendan Iribe, former Oculus hardware architect Ryan Brown, and Ankit Kumar, former CTO of AR startup Ubiquity6. Additionally, Oculus co-founder Nate Mitchell announced in June he was joining Sesame as Chief Product Officer, which he noted was to “help bring computers to life.”

Image courtesy Sesame

Sesame is currently working on an AI assistant along with a pair of lightweight smart glasses. Its AI assistant aims to be “the perfect AI conversational partner,” Sequoia Capital says in a recent post.

“Sesame’s vision is to build an ambient interface that is always available and has contextual awareness of the world around you,” Sequoia says. “To achieve that, Sesame is creating their own lightweight, fashion-forward AI-enabled glasses designed to be worn all day. They’re intentionally crafted—fit for everyday life.”

Sesame is currently taking signups for beta access to its AI assistants Miles and Maya in an iOS app, and also has a public preview showcasing a ‘call’ function that allows you to speak with the chatbots.

My Take

Love it or hate it, AI is going to be baked into everything in the future, as contextually aware systems hope to bridge the gap between user input and the expectation of timely and intelligent output. That’s increasingly important when the hardware doesn’t include a display, requiring the user to interface almost entirely by voice.

Some things to watch out for: if the company does commercialize a pair of smart glasses to champion its AI assistant, it will be competing for some pretty exclusive real estate that companies like Meta, Google, Samsung, and Apple (still unconfirmed) are currently gunning for. That puts Sesame at a disadvantage if it hopes to go it alone—though not if it’s hoping for a timely exit by being acquired by any of the above as the coming wave of smart glasses arrives.

There’s also some pretty worrying precedent in the rear-view mirror: e.g. Humane’s AI Pin or the AI Friend necklace, both of which were publicly lambasted for essentially releasing hardware that could just as easily have been apps on your smartphone.

Granted, Sesame hasn’t shown off its smart glasses hardware yet, so there’s no telling what the company hopes to bring to the table beyond an easy-to-wear pair of off-ear headphones for all-day AI use. That, to me, would be the worst-case scenario, as Meta refines its own smart glasses in partnership with EssilorLuxottica, Google releases Android XR frames with Gentle Monster and Warby Parker, Samsung releases its own Android XR glasses, and Apple does… something. We don’t know yet.

Whatever the case, I’m looking forward to it, if only based on the company’s combined experience in XR, which I’d argue any startup would envy as the race to build the next big computing platform truly takes off.

Filed Under: AR Development, AR Investment, News, XR Industry News
