
VRSUN

Hot Virtual Reality News

HOTTEST VR NEWS OF THE DAY


Meta Waveguide Provider Claims “world’s first” 70° FoV Waveguide

January 9, 2026 From roadtovr

Lumus, the company that developed the waveguide optic used in Meta’s Ray-Ban Display smart glasses, says it has achieved a 70° field-of-view in a new design revealed this week at CES 2026. That conveniently matches the 70° field-of-view Meta achieved in its ‘Orion’ prototype, which required the use of novel materials.

The News

Back in 2024, Meta revealed its first AR glasses prototype, codenamed Orion. One of the prototype’s big innovations was its ability to squeeze a 70° field-of-view into such a small form-factor. This was made possible with the use of unique waveguide optics made with silicon carbide, a novel material that enabled the wider field-of-view thanks to its greater refractive index.

Orion prototype AR glasses | Image courtesy Meta

In 2025, Meta talked about the challenges of manufacturing silicon carbide waveguides affordably and at scale. While the company said progress was being made, it conceded that the work was ongoing.

“We’ve successfully shown that silicon carbide can flex across electronics and photonics. It’s a material that could have future applications in quantum computing. And we’re seeing signs that it’s possible to significantly reduce the cost. There’s a lot of work left to be done, but the potential upside here is huge,” the company said at the time.

But now Lumus, the company that developed the waveguides in Meta’s Ray-Ban Display glasses, says it has achieved a 70° field-of-view in its glass waveguides. The company claims it’s the “world’s first geometric waveguide to surpass a 70° FOV.”

Image courtesy Lumus

The company announced that it is showing the new ZOE waveguide this week at CES 2026. Renders provided by the company show its latest prototype with the ZOE optics (though it’s worth noting that Lumus’ prototypes typically don’t include on-board battery, compute, or tracking hardware, which would add bulk to any real product based on ZOE).

My Take

My gut tells me it probably isn’t a coincidence that Lumus has been aiming for a 70° field-of-view, which just happens to match what Meta achieved with its Orion prototype. Most likely, the company was tasked (implicitly or maybe even directly) with doing exactly that—proving that its waveguides could reach the 70° benchmark without using silicon carbide.

Beyond simply achieving a 70° field-of-view as a proof-of-concept, Lumus says the ZOE optic is made with the same process as its other glass waveguides. That’s a big deal, because the company has already proven that such waveguides can be manufactured at scale, thanks to the use of its waveguides in Ray-Ban Display, Meta’s first smart glasses with a display.

That means Lumus’ ZOE waveguide is most definitely on the shortlist for what Meta could use in its first pair of wide field-of-view AR glasses, which the company said it hopes to bring to market before 2030.

Granted, field-of-view isn’t everything. When it comes to optics, everything is a tradeoff: increased field-of-view can impact brightness, PPD (pixels per degree), and various visual artifacts. Without being able to see the new ZOE optic for myself, it’s hard to say whether Lumus has something truly new here, or whether it has simply boosted field-of-view at the cost of other attributes.
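To put the PPD tradeoff in concrete terms: angular resolution is roughly the display’s horizontal pixel count divided by the horizontal field-of-view, so stretching a fixed pixel budget over a wider view dilutes sharpness. A quick sketch (the 2,000-pixel figure is a made-up example, and the linear model ignores lens distortion):

```python
def approx_ppd(pixels_wide: int, fov_degrees: float) -> float:
    """Rough pixels-per-degree, assuming pixels are spread evenly across the FoV."""
    return pixels_wide / fov_degrees

# The same hypothetical 2,000-pixel-wide display, stretched over wider views:
for fov in (30, 50, 70):
    print(f"{fov:>2}° FoV -> {approx_ppd(2000, fov):.1f} PPD")
```

All else equal, going from a 30° to a 70° field-of-view cuts angular resolution by more than half, which is why wider-FoV optics usually demand denser, brighter displays to keep perceived sharpness.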

I expect I’ll have a chance to see the ZOE optic later this year at AWE 2026 where I usually meet with Lumus to see their latest developments. In the meantime, I’ve also reached out to the company to learn more about how it reached the 70° field-of-view and what tradeoffs it did or didn’t have to make to get there.

Filed Under: News, XR Industry News

New Reference Design From Key Manufacturer Shows What to Expect From MR Headsets in 2026

January 8, 2026 From roadtovr

A Chinese company that mass-produces many of the industry’s best-known headsets has shared a new compact MR headset reference design, setting expectations for 2026.

Goertek is a little-known but massively important player in the XR industry: it provides reference designs that function as blueprints for consumer companies’ headsets, and it handles mass production for some of the best-known headsets on the market.

At CES 2026, Goertek revealed its latest MR headset reference design, which any company can put its own spin on and take to market. Rather than a prototype, which might use novel materials or techniques that aren’t yet mass-producible, a reference design represents a fully functional set of ready-to-manufacture components with tangible costs and delivery dates.

There isn’t a lot of info available on the reference design yet, except what has been officially stated by Goertek:

An Ultra-Lightweight MR Reference Design showcases system-level optimizations, reducing the weight of a 4K MR headset to approximately 100 grams. It delivers retinal-level clarity (38 PPD) within a 100-degree field of view, with Video See-Through (VST) and 6DoF [tracking].
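Those numbers are at least self-consistent: at constant angular density, PPD multiplied by FoV gives the horizontal pixel count, and 38 PPD across 100° lands close to the ~3,840 horizontal pixels of a “4K” panel. A back-of-the-envelope check (ignoring lens distortion and whether “4K” is counted per eye or in total):

```python
ppd = 38    # claimed pixels per degree
fov = 100   # claimed horizontal field-of-view, in degrees

horizontal_pixels = ppd * fov
print(horizontal_pixels)  # 3800, within ~1% of a 4K panel's 3840 columns
```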

We’ve reached out to Goertek for details, but in the meantime many questions remain.

Considering the incredible 100g weight of the headset, it seems almost certain that this reference design does not include on-board compute or battery. For comparison, Quest 3, even with a soft strap, weighs in at 515g.

Image courtesy CNFOL

That means the headset would need to rely on a tethered compute/battery pack, or some other host device, to function. This would follow the trend of headsets like Vision Pro and Galaxy XR which both offload the battery weight to a tethered battery.

Adding to the confusion, Goertek’s billing of the headset as an “MR reference design” would generally be understood to imply a standalone device, but in the one photo of the device in use we’ve been able to find so far (courtesy CNFOL), it appears to be part of the company’s “PCVR Software Suite” display station and looks to be tethered directly to the PC in front of the user.

Image courtesy CNFOL

In any case, the reference design shows us what kind of resolution and field-of-view can be expected from headsets in 2026 with this compact form-factor, even if the design doesn’t have its own compute/battery.

Image courtesy CNFOL

Likely the reference design is meant to show off the form-factor while leaving it up to customer companies to decide whether to bring it to market as a standalone or tethered headset.

Filed Under: News, XR Industry News

Meta Pauses International Release of Meta Ray-Ban Display Glasses

January 7, 2026 From roadtovr

Meta Ray-Ban Display glasses seem to be selling too well, as the company announced it’s delaying the international rollout of its first display-clad smart glasses.

The News

Initially released in the US back in September, the $800 smart glasses, which include a single full-color display embedded in the right lens, were expected to reach a number of other regions in early 2026.

Now, the company says in a blog post it’s decided to “pause” the planned expansion to the UK, France, Italy and Canada, citing “unprecedented demand and limited inventory.”

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

The company characterizes stock as “extremely limited,” noting that it’s seen an “overwhelming amount of interest, and as a result, product waitlists now extend well into 2026.”

Meta says it will continue to focus on fulfilling orders in the US while they “re-evaluate [the] approach to international availability.”

My Take

I was looking forward to getting my hands on a pair of Meta Ray-Ban Display glasses here in Italy, one of the regions currently on “pause”—and my Corpo-to-English translator says I probably shouldn’t hold my breath.

While Meta Ray-Ban Display can’t do everything promised just yet—and doesn’t actually have an app store—the device can do a fair number of things I was hoping to test out to see how it fits into my daily life.

After all, it can do everything the audio-only Ray-Ban Meta glasses can do, in addition to serving up a viewfinder for taking photos and video, the ability to see and respond to messages via WhatsApp, Facebook Messenger, and Instagram, and turn-by-turn walking directions in supported cities.

Turn-by-turn Directions in Meta Ray-Ban Display | Image courtesy Meta

Months after launch, Meta says it’s also now pushed an update that includes a teleprompter, the previously teased EMG handwriting, as well as more cities for pedestrian navigation.

Still, the pause makes a lot of sense from a manufacturing perspective. Meta needs to go slow and deliberate with Meta Ray-Ban Display, if only because the device has likely been heavily subsidized to avoid being eye-wateringly expensive out of the gate; the company is no doubt eating a fairly high bill of materials, given waveguide wastage rates alone. No app store also means no app revenue, making the first-gen decidedly more of a large beta test than anything.

So, right now it seems like Meta is deliberately going slow to make sure use cases, distribution, and supply chain are all in place before really cashing in on the second gen—maybe following Quest’s playbook: in 2019, the company released the original Quest, only to toss out Quest 2 a year later, making for the company’s best-selling XR device to date—and also leaving everyone who bought the first-gen to upgrade only a year later.

Filed Under: AR Development, ar industry, News, XR Industry News

Magic Leap Signs Deal with Taiwan’s Pegatron, Strengthening AR Manufacturing Position

January 5, 2026 From roadtovr

Magic Leap announced a manufacturing partnership with Pegatron, a major global electronics manufacturer, to scale production of AR glasses components, including Magic Leap’s waveguide technology.

The News

Under the agreement outlined in a press statement, Pegatron will apply its manufacturing capabilities to help turn Magic Leap’s optical designs into mass-produced components.

Taiwan-based Pegatron specializes in developing and producing computing, communications, and consumer electronics for major brands, in addition to being the parent company of PC component company ASRock.

Details are still under wraps; however, Magic Leap Product and Partner Development exec Jade Meskill says the partnership will create “a clear path to bring AR components to market at scale.”

“This collaboration reflects the growing maturity of the AR ecosystem,” said Jason Cheng, Vice Chairman at Pegatron. “By combining Magic Leap’s component-level expertise with Pegatron’s manufacturing infrastructure, we can support more efficient pathways from development to production.”

This follows the announcement in October that Magic Leap was entering into a multi-year AR hardware partnership with Google.

My Take

Despite early market missteps that saw millions (if not billions) go toward the development of its ML 1 and ML 2 headsets, Magic Leap seems to be making good on its pivot from AR headset creator to major AR component player, leveraging its designs, know-how, and catalogue of patents to stay in the fight.

And despite the years of grinding, it’s a fight that still hasn’t really heated up yet, as companies like Meta, Apple, and Google are still deep in preparation to create their own AR glasses (note: not smart glasses) for release sometime before 2030.

Still, if the coming AR revolution is anything like the smartphone revolution of the early 2000s, there will potentially be a lot of players beyond those three tech giants to spin up competition when AR components eventually get cheaper with economies of scale.

And while we’re not there yet, Magic Leap seems to have found a solid raison d’être in the meantime, and a much better shot at one day becoming profitable as a result.

Filed Under: ar industry, AR Investment, News, XR Industry News

Google is Rolling out Photorealistic ‘Likeness’ Avatars on Android XR to Compete with Apple’s ‘Personas’

December 11, 2025 From roadtovr

Google is starting to roll out new photorealistic avatars which it calls “Likeness”. Similar to Apple’s Personas, Likeness avatars are generated by scanning a user’s face, then animated with input from the sensors on a headset. The avatars can be used to represent the user in video call apps, but Google doesn’t yet offer a way to hold spatial meetings between Likeness avatars.

The News

Google is launching its own photorealistic avatars called Likeness avatars, for use on compatible Android XR headsets. The idea is similar to Apple’s Persona avatars: scan the user’s face to create a realistic representation, then use the headset’s on-board cameras to animate the scan as realistically as possible.

Likenesses take a slightly different (and probably more user-friendly) approach for the initial face scan; rather than scanning by holding a headset out in front of your face, Google instead released a Likeness (beta) Android app to let people scan themselves with their phone instead. Holding your phone in front of your face for a scan is definitely a bit easier than awkwardly holding a whole headset with both hands.

According to Google, the Likeness (beta) app is only compatible with Google Pixel 8 or newer, Samsung Galaxy S23 or newer, or Samsung Z Fold5 or newer. Without a compatible device, you can’t create a Likeness avatar, meaning Android XR users with an iPhone (or unsupported Android phone) won’t be able to scan themselves. One benefit of Apple’s approach to scanning with the headset itself is that anyone can use a Persona avatar on Vision Pro regardless of what kind of phone they have.

Image courtesy Google

Like Apple’s approach, Likeness avatars can be used generically as a ‘virtual webcam’. That makes them widely compatible with most video call apps that expect a front-facing camera, like Google Meet, Zoom, Messenger, etc.

And just like Apple’s, the first ‘beta’ iteration of Likeness avatars is 2D only. They are presented as a 2D representation with no way to transmit them in a spatial format, or to hold a ‘spatial meeting’ like Vision Pro can with spatial FaceTime calls. However, Google says it’s working on spatial meetings for the future.

My Take

Photorealistic avatars on XR headsets are a great value-add because of the ability to use video call apps naturally. Apple’s Personas are currently the state-of-the-art as far as consumer-available photorealistic avatars, and the company has shown that it’s possible to cross over the uncanny valley with this approach to avatars.

During a recent meeting with Google, I joined a demo video call on Google Meet with one of the participants using a Likeness avatar. From a photorealism standpoint, the results look impressive, and facial movements look convincing too. However, because I didn’t personally know the individual using the Likeness, I was unfamiliar with their real facial mannerisms, which makes it impossible for me to judge the accuracy of the facial motion. Still, facial motion only needs to be plausibly realistic to be passable in many circumstances, and that’s been achieved from what I can see.

Image courtesy Google

While it’s a bummer that there’s no ‘spatial meeting’ yet for Android XR (allowing users to chat face-to-face with fully spatial Likeness avatars), Google made the right choice in prioritizing virtual webcam usage at the start. It’s less impressive than spatial meetings, but more widely useful and compatible with existing services and apps.

There’s probably no chance we’ll see spatial calls between Likeness avatars and Persona avatars any time soon, but virtual webcam compatibility makes it trivial for both kinds of avatars to chat across headsets.

One thing worth noting is that Likeness avatars probably won’t be compatible with all Android XR devices. Forthcoming ‘Android XR’ smartglasses (which don’t run anything close to the full-blown version of Android XR) don’t have the power or sensors necessary to render or animate a Likeness avatar. Similarly, devices like XREAL Aura (which does run full-blown Android XR) might have the power but don’t have the sensors (eye and mouth tracking cameras) to animate a Likeness avatar.

It’s possible that Google could make Likeness avatars compatible with these devices by doing simulated eye movements and audio-based lip-sync. Although those technologies are already widely in use for more cartoonish avatars, they’re likely to fall deep into the uncanny valley when applied to photorealistic face scans. So I doubt Google will take that approach.

With the introduction of Likeness avatars, Google also has the same challenge I pointed out recently regarding Apple’s Persona avatars: as headsets get smaller, how will they bring this level of avatar fidelity to smaller headsets that have even less room for the cameras that are essential for these kinds of avatars?

Filed Under: Android XR News & Reviews, News, XR Industry News

Meta Delays Puck-Tethered XR Headset to 2027, Next Quest “Large Upgrade” to Current Gen

December 10, 2025 From roadtovr

Meta may be pushing back the release of an upcoming XR headset that tethers to a pocketable compute puck. Meanwhile, the company says its next-gen Quest will be a “large upgrade” over the current generation.

The News

Meta had reportedly planned to release the device, codenamed ‘Phoenix’, in the second half of 2026. The headset is said to have a goggle-like form factor, offloading compute and battery to a puck-like unit tethered to the headset.

Now, according to internal memos obtained by Business Insider, the release timeline of Phoenix has been pushed back to the first half of 2027.

Maher Saba, VP of Reality Labs Foundation, announced the change in an internal memo released December 4th, further noting that the decision arose from a meeting with Reality Labs leaders and CEO Mark Zuckerberg.

Successive XR prototypes | Image courtesy Meta

Saba maintained that the project should be “focused on making the business sustainable and taking extra time to deliver our experiences with higher quality.”

“Based on that, many teams in RL will need to adjust their plans and timelines,” Saba added. “Extending timelines is not an opportunity for us to add more features or take on additional work.”

A separate memo from metaverse leaders Gabriel Aul and Ryan Cairns added that the release date was pushed back in order to “give us a lot more breathing room to get the details right.”

Continuing: “There’s a lot coming in hot with tight bring-up schedules and big changes to our core UX, and we won’t compromise on landing a fully polished and reliable experience,” the memo said.

Additionally, Aul and Cairns’ memo maintained the company is currently working on its next-gen Quest, which is said to focus on immersive gaming. It’s also said to represent a “large upgrade” in capabilities from current devices, and will “significantly improve unit economics.”

Meta is reportedly also planning to release what Business Insider maintains will be a new “limited edition” XR device in 2026, codenamed ‘Malibu 2’. It’s uncertain what sort of device Malibu 2 is at this time.

My Take

It’s difficult to say what the next Quest will shape up to be. Meta tends to run competing prototypes to see what fits best in the market, and may have a different strategy than anyone expects.

Here’s my current hunch: Quest 3S represents the company’s best chance to reach the low end of the market at $300 (cheaper on sale), and it may be in that position for at least another year. I don’t expect a cheap and cheerful headset from Meta for a while, even with the claim that the next Quest will “significantly improve unit economics.” Relative to what? Quest 3S? A potential Quest Pro 2? We simply don’t know.

Meta’s next real headset (not the limited edition thing) will likely be a high-end device—think the $800 to $1,000 range—which ought to keep some hardcore Quest platform adherents on the upgrade pathway while going up against some new(ish) faces: namely Samsung Galaxy XR, Valve’s Steam Frame, and the current Apple Vision Pro M5 refresh. Okay, that’s less of a hunch and more of a consensus from what everyone’s heard.

What is marginally more certain, though, is that Meta doesn’t seem to be in the manufacturing stage of anything just yet, at least not according to the most recent supply chain leaks (or lack thereof), so I’d expect a lot more hubbub midway through next year. Whatever the case, I’ve got my eye out for all of the above.

Filed Under: Meta Quest 3 News & Reviews, News, VR Development, vr industry, XR Industry News

Apple Design Lead Heads to Meta, Hopefully to Fix Longstanding Quest UX Issues

December 5, 2025 From roadtovr

Apple’s Vice President of Human Interface design, Alan Dye, is leaving the company to lead a new studio within Meta’s Reality Labs division. The move appears to be aimed at raising the bar on the user experience of Meta’s glasses and headsets.

The News

According to his LinkedIn profile, Alan Dye spent nearly 20 years as Apple’s Vice President of Human Interface Design. He was a driving force behind the company’s UI and UX direction, including Apple’s most recent ‘Liquid Glass’ interface overhaul and the VisionOS interface that’s the foundation of Vision Pro.

Now Dye is heading to Meta to lead a “new creative studio within Reality Labs,” according to an announcement by Meta CEO Mark Zuckerberg.

“The new studio [led by Dye] will bring together design, fashion, and technology to define the next generation of our products and experiences. Our idea is to treat intelligence as a new design material and imagine what becomes possible when it is abundant, capable, and human-centered,” Zuckerberg said. “We plan to elevate design within Meta, and pull together a talented group with a combination of craft, creative vision, systems thinking, and deep experience building iconic products that bridge hardware and software.”

The new studio within Reality Labs will also include Billy Sorrentino, another high level Apple designer; Joshua To, who has led interface design at Reality Labs; Meta’s industrial design team, led by Pete Bristol; and art teams led by Jason Rubin, a longtime Meta executive who has been with the company since its 2014 acquisition of Oculus.

“We’re entering a new era where AI glasses and other devices will change how we connect with technology and each other. The potential is enormous, but what matters most is making these experiences feel natural and truly centered around people. With this new studio, we’re focused on making every interaction thoughtful, intuitive, and built to serve people,” said Zuckerberg.

My Take

I’ve been ranting about the fundamental issues of the Quest user experience and interface (UX & UI) for literally years at this point. Meta has largely hit it out of the park with its hardware design, but the software side of things has lagged far behind what we would expect from one of the world’s leading software companies. A post on X from less than a month ago sums up my thoughts:

It’s crazy to see Meta take one step forward with its Quest UI and two steps back, over and over again for years.

They keep piling on new features with seemingly no top-down vision for how the interface should work or feel. The Quest interface is as scattered, confusing, and unpolished as ever.

The new Navigator is an improvement for simply accessing app icons, but it feels like it’s using a completely different paradigm than the rest of the window / panel management interface. Not to mention that the system interface speaks a vastly different language than the Horizon interface.

I have completely lost faith that Meta will ever get a handle on this after watching the interface meander in random directions year after year, punctuated by “refreshes” that look promising but end up being forgotten about 6 months later.

It seems Meta is trying to course-correct before things get further out of hand. If pulling in one of the world’s most experienced individuals at creating cohesive UX & UI at scale is what it takes, then I’m glad to see it happening.

Apple has set a high bar for how easy a headset should be to use. I use both Vision Pro and Quest on a regular basis, and moving between them is a night-and-day difference in usability and polish. And as I’ve said before, the high cost of Vision Pro has little to do with why its interface works so much better; the high level design decisions—which would work similarly well on any headset—are a much more significant factor.

Back when Meta was still called Facebook, the company had a famous motto: “Move fast and break things.” Although the company no longer champions this motto, it seems like it has had a hard time leaving it behind. The scattered, unpolished, and constantly shifting nature of the Quest interface could hardly embody the motto more clearly.

“Move fast and break things” might have worked great in the world of web development, but when it comes to creating a completely new interface paradigm for the brand new medium of VR, it hasn’t worked so well.

Of course, Dye’s onboarding and the new studio within Reality Labs isn’t only about Quest. In fact, it might not even be mostly about Quest. If I’ve learned anything about Zuckerberg over the years, it’s that he’s a very long-term thinker and does what he can to move his company where it needs to go to be in the right place 5 or 10 years down the road.

And in 5 to 10 years, Zuckerberg hopes Meta will be dominant, not just with immersive headsets, but AI smart glasses (and likely unreleased devices) too. This new team will likely not be focused on fixing the current state of the Quest interface, but instead trying to define a cohesive UX & UI for the company’s entire ecosystem of devices.

With Alan Dye heading to Meta, there’s a good chance that he will bring with him decades of Apple design processes that have worked well for the company over many years. But I have a feeling it will be a significant challenge for him to change “move fast and break things” to “move slow and polish things” within Meta.

Filed Under: News, XR Industry News

Alibaba Launches Smart Glasses to Rival Meta Ray-Ban Display

December 2, 2025 From roadtovr

Alibaba released a pair of display-clad smart glasses, ostensibly looking to go toe-to-toe with Meta Ray-Ban Display, which launched in the US for $800 back in September.

The News

China’s Alibaba, one of the world’s largest retailers and e-commerce companies, just released its first smart glasses, called Quark AI Glasses, which run the company’s own Qwen AI model.

Image courtesy Reuters

Seemingly China-only for now, Quark AI Glasses are being offered in two fundamental versions across Chinese online and brick-and-mortar retailers:

  • Quark AI Glasses S1: starting at ¥3,799 (~$540 USD), includes dual monochrome green displays
  • Quark AI Glasses G1: starting at ¥1,899 (~$270 USD), no displays, sharing the core technology of the S1 model

Quark AI Glasses S1 is equipped with a Qualcomm Snapdragon AR1 chipset and a low-power co-processor which drive dual monochrome green micro-OLED displays, boasting a brightness of up to 4,000 nits, according to South China Morning Post.

It also features a five-microphone array with bone conduction, 3K video recording which can be automatically upscaled to 4K, as well as low-light enhancement tech said to bring mobile phone-level imaging to smart glasses. Additionally, Quark AI Glasses S1 include hot-swappable batteries, which plug into the glasses’ stem piece.

You can see the English dubbed version of the Chinese language announcement below:

My Take

At least when it comes to on-paper specs, Quark AI Glasses S1 aren’t exactly a 1:1 rival with Meta Ray-Ban Display, even though both technically include display(s), onboard AI, and the ability to take photos and video.

While Meta Ray-Ban Display only feature a single full-color display, Quark S1’s dual displays only offer monochrome green output, which limits the sort of information that can be seen.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

Quark S1 also doesn’t come with an input device like Meta Ray-Ban Display’s Neural Band, limiting it to voice and touch input alone. That means Quark S1 users won’t be scrolling social media, pinching and zooming content, or performing other nifty UI manipulations.

Still, that might be just enough—at least one of the world’s largest e-commerce, cloud infrastructure, and FinTech companies thinks so. Also not to be overlooked is Quark S1’s unique benefit of being tightly integrated into the Qwen AI ecosystem, as well as China’s payment infrastructure for fast and easy QR code-based payments via Alipay; that last one is something most Chinese smart glasses are trying to hook into, including Xiaomi’s own Ray-Ban Meta competitors.

Although the company’s Qwen AI model is available globally, I find it pretty unlikely that Alibaba will ever bring its first-gen models of Quark AI Glasses S1/G1 outside of its usual sphere of influence, or meaningfully intersect with Meta’s supported regions.

Filed Under: AR Development, News, XR Industry News

FluxPose VR Tracker Raises $2M on Kickstarter, Promising Compact 6DOF Body Tracking

December 1, 2025 From roadtovr

FluxPose is a 6DOF tracking solution for full-body tracking that seems to be picking up speed on Kickstarter, having now garnered over $2 million in crowdfunding since its initial launch on November 29th.

The News

FluxPose is a full-body tracking system that’s said to deliver occlusion-free positional tracking without the need for externally mounted base stations or sensors. It does this by way of a wearable beacon, which generates magnetic fields, the team explains on the FluxPose Kickstarter campaign.

“It’s completely occlusion-free, incredibly compact, drift-free, and the trackers last up to 24 hours on a single charge, offering high-end performance in the smallest, lightest form factor possible,” the Logrono, Spain-based team says.

Image courtesy FluxPose

And because the beacon is worn on your body, and automatically synchronizes the tracking space with VR headsets without any additional software, it essentially means the tracking volume moves with you as you move (or more likely, dance) in VR.

Weighing in at 85 grams, the trackers are also impressively compact: a Dorito for scale.

Image courtesy FluxPose

At the time of this writing, the cheapest support tier is the ‘Lite Kit’ for €339 (~$394 USD), which comes with three tracking points (straps sold separately). At the higher end is the ‘Pro Kit’ for €689 (~$800 USD), which includes eight tracking points. Notably, those prices do not include taxes or import tariffs.

VR headset mounts provided through the Kickstarter are said to include Quest 2/3/3S/Pro, Pico 4/4 Ultra, Samsung Galaxy XR, HTC Vive Pro/Pro 2/Focus/XR Elite, Bigscreen Beyond 1/2, Valve Index, and Steam Frame. Backers will have the chance to select the exact headset model on a survey after the Kickstarter ends, and again a few months before delivery.

You can find out more over on the FluxPose Kickstarter, which we’ll be following for the campaign’s remaining 58 days, ending on January 28th, 2026. The earliest delivery is expected in August 2026 for early bird supporters, and October 2026 for late comers to the Kickstarter.

My Take

Magnetically-tracked peripherals aren’t anything new in VR; I’ve seen a number of solutions come and go, with the emphasis mostly on go: Razer Hydra, Sixense Stem, Atraxa, Magic Leap 1 controllers—these implementations seem to be good enough in optimal conditions, but not rock solid across the board.

In short, magnetic trackers position themselves in 3D space by measuring the intensity of the magnetic field in various directions, which (as mentioned above) is generated by a beacon. When the trackers’ measurement point is rotated, the distribution of the magnetic field changes across its various axes, allowing for it to be positionally tracked.

And while those magnetically-tracked peripherals listed above don’t suffer from optical occlusion, they can be affected by external magnetic fields, ferromagnetic materials in the tracking volume, and conductive materials near the emitter or sensor. These things typically reduce tracking quality, making them less reliably accurate than optical (Quest 3) or laser-positioned systems (SteamVR base stations).

Granted, I haven’t tried FluxPose yet, although I don’t think those drawbacks are nearly as important in fully-body tracking than they might be in actual motion controllers, which require much higher accuracy. A few millimeter’s discrepancy in your foot’s position really doesn’t matter as much as it might if you were reaching out and trying to grab something with a magnetically-tracked controller.

If Road to VR doesn't get to go hands-on in the coming months, I'll be keeping my eyes peeled for videos and articles as we move closer to the campaign's close next month.

Filed Under: News, VR Development, XR Industry News

Pico Reportedly Releasing Vision Pro Competitor in 2026 with Self-developed Chip

November 26, 2025 From roadtovr

Zhenyuan Yang, Vice President of Technology at Pico parent company ByteDance, reportedly revealed plans for Pico’s next XR headset, which is said to sport a self-developed display chip and 4,000 PPI microOLED display.

The News

According to Chinese news outlet Science and Technology Innovation Board Daily (via Nweon), Yang was speaking at ByteDance’s annual scholarship award ceremony when he mentioned specific plans to release a new Pico XR headset in 2026.

Development of the self-developed chip began in 2022, Yang reportedly revealed on stage, noting the chip is now in mass production. The chip is said to overcome real-time processing bottlenecks in high-resolution, high-frame-rate mixed reality video, reducing system latency to about 12 ms while maintaining high image quality.

It’s also said to improve performance in SLAM, motion compensation, and inverse-distortion workloads, which demand high compute efficiency on low-power devices, Science and Technology Innovation Board Daily reports.

Image courtesy PICO

Supposedly slated to launch in 2026, the headset will pair this chip with a custom microOLED display which is said to approach 4,000 PPI—slightly higher than that of Apple Vision Pro’s 3,386 PPI.

According to the report, Pico's microOLED display reaches an average of 40 PPD (over 45 at the center), and addresses brightness limitations by incorporating microlens array (MLA) technology and optical compensation for uniform color and luminance. Pico is also developing its own data-capture systems to train advanced eye-tracking, gesture-tracking, and spatial-understanding models.

Yang emphasized that since 2023, ByteDance has shifted Pico’s strategy away from aggressive content and marketing spending toward long-term technological investment, increasing XR R&D rather than retreating from the market.

“In 2023, we decided to reduce our investment in content and marketing, and instead focus more firmly on our technology strategy,” Yang said (machine translated from Chinese). “This was because the hardware experience of our products was not yet mature enough to support large-scale market applications. This adjustment led to some misunderstandings at the time, with many people saying that ByteDance was no longer pursuing this direction. In fact, quite the opposite.”

This follows an initial report from The Information this summer, which alleged Pico was developing a pair of slim and light MR “goggles,” reportedly codenamed ‘Swan’, which are said to weigh just 100 grams.

My Take

More competition is great, although US-based audiences hoping for a new Vision Pro competitor from Pico may be left waiting.

The company’s headsets are typically only available in China, East and Southeast Asia, and Europe—but not in North America, and not for lack of trying either. An additional stumbling block: Pico headsets have typically been priced above Meta’s equivalents, which has limited their appeal in regions where Meta’s headsets are sold.

Still, ByteDance, the parent company behind TikTok and Chinese equivalent platform Douyin, has actually overtaken Meta in revenue, putting the parent company in a better position than ever to bolster its XR platform as a premium offering globally.

Filed Under: News, VR Development, XR Industry News
