
VRSUN

Hot Virtual Reality News

HOTTEST VR NEWS OF THE DAY


Pimax Dream Air Begins Shipping in “small batches” With Temporary Headstrap

January 7, 2026 From roadtovr

After multiple delays, Pimax has finally begun shipping its next PC VR headset, albeit in “small batches,” which arrive with a fabric headstrap—something of a temporary solution until the company can ship out its official headstrap.

The News

Dream Air is Pimax’s first thin and light PC VR headset, which is set to arrive with Sony’s high-end micro-OLED panels, packing in a 13.6MP (3,840 × 3,552) per-eye resolution.

Now, Pimax told Road to VR that it actually began shipping Dream Air in “small batches” before the end of the year for external beta testing.

While official shipments are set to kick off sometime this month, a few users have already received Dream Air with what Jaap Grolleman, Pimax’s Head of Communications, describes as a stopgap measure to get the first units out the door.

“We’re still working on the final backstrap, but we don’t want to make that a showstopper to start shipping and start collecting feedback on the headset,” Grolleman said in a recent video.

Pimax Dream Air 2D Strap | Image courtesy Pimax

Those early batches of Pimax Dream Air are shipping with what the company calls its “2D headstrap”, as it’s made out of fabric, with Grolleman noting that it’s “perfectly fine to use, even in long sessions as it hugs your head from behind and slightly above.”

A “3D headstrap”—more of an Apple Vision Pro-inspired knit affair—is said to arrive later for those who initially received the 2D strap with their order.

Pimax hasn’t provided info on when the 3D strap will arrive, or when the company will cut off shipments including the 2D strap.

Pimax Dream Air 3D Strap | Image courtesy Pimax

Notably, Pimax says it’s also developing a “hard backstrap” with off-ear audio, which will be available sometime after Dream Air begins its wider rollout.

As for Dream Air SE—the cheaper variant which uses 6.5MP (2,560 × 2,560) per-eye displays—Pimax says small batches will begin shipping out in February 2026.

Pimax initially announced Dream Air last December, as it hoped to enter the emergent thin and light PC VR headset segment, which includes entries such as Bigscreen Beyond and Shiftall MeganeX superlight 8K. The headset, however, suffered a number of delays following its planned May 2025 launch.

My Take

If you’ve been following Pimax, you already know this is how they operate: official announcements and initial shipping dates feel more like walking into a brainstorming session, as the company often changes designs, specs, and release windows multiple times before official release. Along the way, the company usually tends to announce other devices, making the reporting process more like taking apart a watch to see what time it is.

On the face of it, you might think that’s fairly amateurish behavior, but Pimax has proven to do what few companies can: publicly iterate with the expectation that it will eventually deliver.

It’s been that way ever since the company funded its original 2017 Pimax “8K” headset via Kickstarter—back when Pimax announced it was releasing the first consumer-oriented wide-FOV PC VR headset alongside a bevy of modular accessories. Some of those never came, and some arrived two years later.

Okay, maybe that was amateurish, but the company is still here, and still serving up competitive hardware, which says something.

Filed Under: News, PC VR News & Reviews

Magic Leap Signs Deal with Taiwan’s Pegatron, Strengthening AR Manufacturing Position

January 5, 2026 From roadtovr

Magic Leap announced a manufacturing partnership with Pegatron, a major global electronics manufacturer, to scale production of AR glasses components, including Magic Leap’s waveguide technology.

The News

Under the agreement outlined in a press statement, Pegatron will apply its manufacturing capabilities to help turn Magic Leap’s optical designs into mass-produced components.

Taiwan-based Pegatron specializes in developing and producing computing, communications, and consumer electronics for major brands, in addition to being the parent company of PC component maker ASRock.

Details are still under wraps; however, Magic Leap Product and Partner Development exec Jade Meskill says the partnership will create “a clear path to bring AR components to market at scale.”

“This collaboration reflects the growing maturity of the AR ecosystem,” said Jason Cheng, Vice Chairman at Pegatron. “By combining Magic Leap’s component-level expertise with Pegatron’s manufacturing infrastructure, we can support more efficient pathways from development to production.”

This follows the announcement in October that Magic Leap was entering into a multi-year AR hardware partnership with Google.

My Take

Despite early market missteps that saw millions (if not billions) go to the development of its ML 1 and ML 2 headsets, Magic Leap seems to be making good on its pivot from AR headset creator to major AR component player, as the company is leveraging its designs, know-how and catalogue of patents to stay in the fight.

And despite the years of grinding, it’s a fight that still hasn’t really heated up just yet, as companies like Meta, Apple, and Google are still deep in preparation to create their own AR glasses (note: not smart glasses) for release sometime before 2030.

Still, if the coming AR revolution is anything like the smartphone revolution of the early 2000s, there will potentially be a lot of players beyond those three tech giants to spin up competition when AR components eventually get cheaper with economies of scale.

And while we’re not there yet, Magic Leap seems to have found a solid raison d’être in the meantime, and a much better shot at one day becoming profitable as a result.

Filed Under: ar industry, AR Investment, News, XR Industry News

‘VRChat’ Breaks Concurrent User Record on New Year’s Eve

January 5, 2026 From roadtovr

VRChat’s head of community says the popular social VR platform set a new concurrent user record over the New Year’s Eve celebrations.

The News

As first reported by UploadVR, VRChat’s head of community ‘Tupper’ detailed concurrent user numbers as they rolled in across the various Western Hemisphere time zones during the platform’s annual 24-hour NYE celebration, making for a peak of 148,886 concurrent users during the Central Time Zone ball drop.

Here’s the full breakdown, courtesy Tupper, which includes all supported platforms:

Across the board for US TZs:

  • ET: 147,226
  • CT: 148,886
  • MT: 141,184
  • PT: 127,708

Notably, Tupper says Japan also had “a strong showing,” although they declined to detail the exact numbers, noting, however, that “it did surprise me.”

Additionally, Tupper says that recent “normal weekend” numbers peak at around 120–125k concurrent users.

My Take

VRChat doesn’t regularly publish user figures, or user breakdowns across platforms, which is a real shame since it could be one of the best ways of telling just how well VR is doing overall during these post-holiday periods—right as a flock of new users is coming in to try the massive, free and extremely well-known social VR platform.

And yes, while I tend to call it a social VR platform, VRChat is actually much more than that nowadays, as it also undoubtedly pulls in a significant share of users on flatscreen platforms, including PC, Android, and iOS.

Image courtesy SteamDB

As it is, engagement doesn’t appear to be slowing down on PC, according to data obtained from SteamDB. Above, you can see the massive bump starting in 2018, leading up to a recent peak of ~75,000 concurrent users connected through the Steam version of the app. Notably, those local peaks always coincide with the holiday season.

That said, all platforms eventually plateau, although it’s difficult to say when that might be for VRChat. It’s still attracting a lot of maker talent, thanks to its flexible user-generated content platform, and is still the go-to place for a variety of Internet subcultures.

Filed Under: Meta Quest 3 News & Reviews, News, PC VR News & Reviews

Google is Rolling out Photorealistic ‘Likeness’ Avatars on Android XR to Compete with Apple’s ‘Personas’

December 11, 2025 From roadtovr

Google is starting to roll out new photorealistic avatars which it calls “Likeness”. Similar to Apple’s Personas, Likeness avatars are generated by scanning a user’s face, then animating the result with input from the sensors on a headset. The avatars can be used to represent the user in video call apps, but Google doesn’t yet have a way to host spatial meetings between Likeness avatars.

The News

Google is launching its own photorealistic avatars called Likeness avatars, for use on compatible Android XR headsets. The idea is similar to Apple’s Persona avatars: scan the user’s face to create a realistic representation, then use the headset’s on-board cameras to animate the scan as realistically as possible.

Likenesses take a slightly different (and probably more user-friendly) approach for the initial face scan; rather than scanning by holding a headset out in front of your face, Google instead released a Likeness (beta) Android app to let people scan themselves with their phone instead. Holding your phone in front of your face for a scan is definitely a bit easier than awkwardly holding a whole headset with both hands.

According to Google, the Likeness (beta) app is only compatible with Google Pixel 8 or newer, Samsung Galaxy S23 or newer, or Samsung Z Fold5 or newer. Without a compatible device, you can’t create a Likeness avatar, meaning Android XR users with an iPhone (or unsupported Android phone) won’t be able to scan themselves. One benefit of Apple’s approach to scanning with the headset itself is that anyone can use a Persona avatar on Vision Pro regardless of what kind of phone they have.

Image courtesy Google

Like Apple’s approach, Likeness avatars can be used generically as a ‘virtual webcam’. That makes them widely compatible with most video call apps that expect a front-facing camera, like Google Meet, Zoom, Messenger, etc.

And just like Apple’s, the first ‘beta’ iteration of Likeness avatars is 2D-only. They are presented as a 2D representation with no way to transmit them in a spatial format, or to have a ‘spatial meeting’, like Vision Pro can do with spatial FaceTime calls. However, Google says it’s working on spatial meetings for the future.

My Take

Photorealistic avatars on XR headsets are a great value-add because of the ability to use video call apps naturally. Apple’s Personas are currently the state-of-the-art as far as consumer-available photorealistic avatars, and the company has shown that it’s possible to cross over the uncanny valley with this approach to avatars.

During a recent meeting with Google, I joined a demo video call on Google Meet with one of the participants using a Likeness avatar. From a photorealism standpoint, the results look impressive, and facial movements look convincing too. However, because I didn’t personally know the individual using the Likeness, I was unfamiliar with their actual mannerisms, which makes it impossible for me to judge the accuracy of the facial motion. Still, facial motion only needs to be plausibly realistic to be passable in many circumstances, and from what I can see, that’s been achieved.

Image courtesy Google

While it’s a bummer that there’s no ‘spatial meeting’ yet for Android XR (allowing users to chat face-to-face with fully spatial Likeness avatars), Google made the right choice in prioritizing virtual webcam usage at the start. It’s less impressive than spatial meetings, but more widely useful and compatible with existing services and apps.

There’s probably no chance we’ll see spatial calls between Likeness avatars and Persona avatars any time soon, but virtual webcam compatibility makes it trivial for both kinds of avatars to chat across headsets.

One thing worth noting is that Likeness avatars probably won’t be compatible with all Android XR devices. Forthcoming ‘Android XR’ smartglasses (which don’t run anything close to the full-blown version of Android XR) don’t have the power or sensors necessary to render or animate a Likeness avatar. Similarly, devices like XREAL Aura (which does run full-blown Android XR), might have the power but don’t have the sensors (eye and mouth tracking cameras) to animate a Likeness avatar.

It’s possible that Google could make Likeness avatars compatible with these devices by doing simulated eye movements and audio-based lip-sync. Although those technologies are already widely in use for more cartoonish avatars, they’re likely to fall deep into the uncanny valley when applied to photorealistic face scans. So I doubt Google will take that approach.

With the introduction of Likeness avatars, Google also has the same challenge I pointed out recently regarding Apple’s Persona avatars: as headsets get smaller, how will they bring this level of avatar fidelity to smaller headsets that have even less room for the cameras that are essential for these kinds of avatars?

Filed Under: Android XR News & Reviews, News, XR Industry News

Meta Delays Puck-Tethered XR Headset to 2027, Next Quest “Large Upgrade” to Current Gen

December 10, 2025 From roadtovr

Meta may be pushing back the release of an upcoming XR headset that tethers to a pocketable compute puck. Meanwhile, the company says its next-gen Quest will be a “large upgrade” over the current generation.

The News

Meta had reportedly planned to release the device, codenamed ‘Phoenix’, in the second half of 2026. The headset is said to feature a goggle-like form factor, offloading compute and battery to a pocketable puck tethered to the headset.

Now, according to internal memos obtained by Business Insider, the release timeline of Phoenix has been pushed back to the first half of 2027.

Maher Saba, VP of Reality Labs Foundation, announced the change in an internal memo released December 4th, further noting that the decision arose from a meeting with Reality Labs leaders and CEO Mark Zuckerberg.

Successive XR prototypes | Image courtesy Meta

Saba maintained that the project should be “focused on making the business sustainable and taking extra time to deliver our experiences with higher quality.”

“Based on that, many teams in RL will need to adjust their plans and timelines,” Saba added. “Extending timelines is not an opportunity for us to add more features or take on additional work.”

A separate memo from metaverse leaders Gabriel Aul and Ryan Cairns added that the release date was pushed back in order to “give us a lot more breathing room to get the details right.”

Continuing: “There’s a lot coming in hot with tight bring-up schedules and big changes to our core UX, and we won’t compromise on landing a fully polished and reliable experience,” the memo said.

Additionally, Aul and Cairns’ memo maintained the company is currently working on its next-gen Quest, which is said to focus on immersive gaming. It’s also said to represent a “large upgrade” in capabilities from current devices, and will “significantly improve unit economics.”

Meta is reportedly also planning to release what Business Insider maintains will be a new “limited edition” XR device in 2026, codenamed ‘Malibu 2’. It’s uncertain what sort of device Malibu 2 is at this time.

My Take

It’s difficult to say what the next Quest will shape up to be. Meta tends to run competing prototypes to see what fits best in the market, and may have a different strategy than anyone expects.

Here’s my current hunch: Quest 3S represents the company’s best chance to reach the low end of the market at $300 (cheaper on sale), and it may be in that position for at least another year. I don’t expect a cheap and cheerful headset from Meta for a while, even with the claim that the next Quest will “significantly improve unit economics.” Relative to what? Quest 3S? A potential Quest Pro 2? We simply don’t know.

Meta’s next real headset (not the limited edition thing) will likely be a high-end headset—think around the $800 or $1,000 range—which ought to keep some hardcore Quest platform adherents on the upgrade pathway while possibly offering competition to some new(ish) faces: namely Samsung Galaxy XR, Valve’s Steam Frame, and the current Apple Vision Pro M5 refresh. Okay, that’s less of a hunch, and more of a consensus from what everyone’s heard.

What is marginally more certain, though, is that Meta doesn’t yet seem to be in the manufacturing stage for anything, at least not according to the most recent supply chain leaks (or lack thereof), so I’d expect a lot more hubbub midway through next year. Whatever the case, I’ve got my eye out for all of the above.

Filed Under: Meta Quest 3 News & Reviews, News, VR Development, vr industry, XR Industry News

Apple Design Lead Heads to Meta, Hopefully to Fix Longstanding Quest UX Issues

December 5, 2025 From roadtovr

Apple’s Vice President of Human Interface design, Alan Dye, is leaving the company to lead a new studio within Meta’s Reality Labs division. The move appears to be aimed at raising the bar on the user experience of Meta’s glasses and headsets.

The News

According to his LinkedIn profile, Alan Dye spent nearly 20 years as Apple’s Vice President of Human Interface Design. He was a driving force behind the company’s UI and UX direction, including Apple’s most recent ‘Liquid Glass’ interface overhaul and the VisionOS interface that’s the foundation of Vision Pro.

Now Dye is heading to Meta to lead a “new creative studio within Reality Labs,” according to an announcement by Meta CEO Mark Zuckerberg.

“The new studio [led by Dye] will bring together design, fashion, and technology to define the next generation of our products and experiences. Our idea is to treat intelligence as a new design material and imagine what becomes possible when it is abundant, capable, and human-centered,” Zuckerberg said. “We plan to elevate design within Meta, and pull together a talented group with a combination of craft, creative vision, systems thinking, and deep experience building iconic products that bridge hardware and software.”

The new studio within Reality Labs will also include Billy Sorrentino, another high-level Apple designer; Joshua To, who has led interface design at Reality Labs; Meta’s industrial design team, led by Pete Bristol; and art teams led by Jason Rubin, a longtime Meta executive who has been with the company since its 2014 acquisition of Oculus.

“We’re entering a new era where AI glasses and other devices will change how we connect with technology and each other. The potential is enormous, but what matters most is making these experiences feel natural and truly centered around people. With this new studio, we’re focused on making every interaction thoughtful, intuitive, and built to serve people,” said Zuckerberg.

My Take

I’ve been ranting about the fundamental issues of the Quest user experience and interface (UX & UI) for literally years at this point. Meta has largely hit it out of the park with its hardware design, but the software side of things has lagged far behind what we would expect from one of the world’s leading software companies. A post on X from less than a month ago sums up my thoughts:

It’s crazy to see Meta take one step forward with its Quest UI and two steps back, over and over again for years.

They keep piling on new features with seemingly no top-down vision for how the interface should work or feel. The Quest interface is as scattered, confusing, and unpolished as ever.

The new Navigator is an improvement for simply accessing app icons, but it feels like it’s using a completely different paradigm than the rest of the window / panel management interface. Not to mention that the system interface speaks a vastly different language than the Horizon interface.

I have completely lost faith that Meta will ever get a handle on this after watching the interface meander in random directions year after year, punctuated by “refreshes” that look promising but end up being forgotten about 6 months later.

It seems Meta is trying to course-correct before things get further out of hand. If pulling in one of the world’s most experienced individuals at creating cohesive UX & UI at scale is what it takes, then I’m glad to see it happening.

Apple has set a high bar for how easy a headset should be to use. I use both Vision Pro and Quest on a regular basis, and moving between them is a night-and-day difference in usability and polish. And as I’ve said before, the high cost of Vision Pro has little to do with why its interface works so much better; the high level design decisions—which would work similarly well on any headset—are a much more significant factor.

Back when Meta was still called Facebook, the company had a famous motto: “Move fast and break things.” Although the company no longer champions this motto, it seems like it has had a hard time leaving it behind. The scattered, unpolished, and constantly shifting nature of the Quest interface could hardly embody the motto more clearly.

“Move fast and break things” might have worked great in the world of web development, but when it comes to creating a completely new interface paradigm for the brand new medium of VR, it hasn’t worked so well.

Of course, Dye’s onboarding and the new studio within Reality Labs isn’t only about Quest. In fact, it might not even be mostly about Quest. If I’ve learned anything about Zuckerberg over the years, it’s that he’s a very long-term thinker and does what he can to move his company where it needs to go to be in the right place 5 or 10 years down the road.

And in 5 to 10 years, Zuckerberg hopes Meta will be dominant, not just with immersive headsets, but AI smart glasses (and likely unreleased devices) too. This new team will likely not be focused on fixing the current state of the Quest interface, but instead trying to define a cohesive UX & UI for the company’s entire ecosystem of devices.

With Alan Dye heading to Meta, there’s a good chance that he will bring with him decades of Apple design processes that have worked well for the company over many years. But I have a feeling it will be a significant challenge for him to change “move fast and break things” to “move slow and polish things” within Meta.

Filed Under: News, XR Industry News

Alibaba Launches Smart Glasses to Rival Meta Ray-Ban Display

December 2, 2025 From roadtovr

Alibaba released a pair of display-clad smart glasses, ostensibly looking to go toe-to-toe with Meta Ray-Ban Display, which launched in the US for $800 back in September.

The News

China’s Alibaba, one of the world’s largest retailers and e-commerce companies, just released its first smart glasses, called Quark AI Glasses, which run the company’s own Qwen AI model.

Image courtesy Reuters

Seemingly China-only for now, Quark AI Glasses are available in two fundamental versions across Chinese online and brick-and-mortar retailers:

  • Quark AI Glasses S1: starting at ¥3,799 (~$540 USD), includes dual monochrome green displays
  • Quark AI Glasses G1: starting at ¥1,899 (~$270 USD), no displays, sharing the S1’s core technology

Quark AI Glasses S1 is equipped with a Qualcomm Snapdragon AR1 chipset and a low-power co-processor which drive dual monochrome green micro-OLED displays, boasting a brightness of up to 4,000 nits, according to South China Morning Post.

It also features a five-microphone array with bone conduction, 3K video recording which can be automatically upscaled to 4K, as well as low-light enhancement tech said to bring mobile phone-level imaging to smart glasses. Additionally, Quark AI Glasses S1 include hot-swappable batteries, which plug into the glasses’ stem piece.

You can see the English-dubbed version of the Chinese-language announcement below:

My Take

At least when it comes to on-paper specs, Quark AI Glasses S1 aren’t exactly a 1:1 rival with Meta Ray-Ban Display, even though both technically include display(s), onboard AI, and the ability to take photos and video.

While Meta Ray-Ban Display only features a single full-color display, Quark S1’s dual displays only offer monochrome green output, which limits the sort of information that can be seen.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

Quark S1 also doesn’t come with an input device, like Meta Ray-Ban Display’s Neural Band, limiting it to voice and touch input. That means Quark S1 users won’t be scrolling social media, pinching and zooming content, or performing other nifty UI manipulation.

Still, that might be just enough—at least one of the world’s largest e-commerce, cloud infrastructure, and FinTech companies thinks so. Not to be overlooked is Quark S1’s unique benefit of being tightly integrated into the Qwen AI ecosystem, as well as into China’s payment infrastructure for fast and easy QR code-based payments with Alipay; that last one is something most Chinese smart glasses are trying to hook into, including Xiaomi’s own Ray-Ban Meta competitors.

Although the company’s Qwen AI model is available globally, I find it pretty unlikely that Alibaba will ever bring its first-gen models of Quark AI Glasses S1/G1 outside of its usual sphere of influence, or meaningfully intersect with Meta’s supported regions.

Filed Under: AR Development, News, XR Industry News

FluxPose VR Tracker Raises $2M on Kickstarter, Promising Compact 6DOF Body Tracking

December 1, 2025 From roadtovr

FluxPose is a 6DOF tracking solution for full-body tracking that seems to be picking up speed on Kickstarter, having now garnered over $2 million in crowdfunding since its initial launch on November 29th.

The News

FluxPose is a full-body tracking system that’s said to deliver occlusion-free positional tracking without the need for externally mounted base stations or sensors. It does this by way of a wearable beacon which generates magnetic fields, the team explains on the FluxPose Kickstarter campaign.

“It’s completely occlusion-free, incredibly compact, drift-free, and the trackers last up to 24 hours on a single charge, offering high-end performance in the smallest, lightest form factor possible,” the Logroño, Spain-based team says.

Image courtesy FluxPose

And because the beacon is worn on your body, and automatically synchronizes the tracking space with VR headsets without any additional software, it essentially means the tracking volume moves with you as you move (or more likely, dance) in VR.

Weighing in at 85 grams, the trackers are also impressively compact: a Dorito for scale.

Image courtesy FluxPose

At the time of this writing, the cheapest support tier is the ‘Lite Kit’ for €339 (~$394 USD), which comes with three tracking points (straps sold separately). At the higher end is the ‘Pro Kit’ for €689 (~$800 USD), which includes eight tracking points. Notably, those prices do not include taxes or import tariffs.

VR headset mounts provided through the Kickstarter are said to include Quest 2/3/3S/Pro, Pico 4/4 Ultra, Samsung Galaxy XR, HTC Vive Pro/Pro 2/Focus/XR Elite, Bigscreen Beyond 1/2, Valve Index, and Steam Frame. Backers will have the chance to select the exact headset model on a survey after the Kickstarter ends, and again a few months before delivery.

You can find out more over on the FluxPose Kickstarter, which we’ll be following for the campaign’s remaining 58 days, ending on January 28th, 2026. The earliest delivery is expected in August 2026 for early bird supporters, and October 2026 for late comers to the Kickstarter.

My Take

Magnetically-tracked peripherals aren’t anything new in VR; I’ve seen a number of solutions come and go, with the emphasis mostly on go: Razer Hydra, Sixense Stem, Atraxa, Magic Leap 1 controllers—these implementations seem to be good enough in optimal conditions, but not rock solid across the board.

In short, magnetic trackers position themselves in 3D space by measuring the intensity of the magnetic field in various directions, which (as mentioned above) is generated by a beacon. When the trackers’ measurement point is rotated, the distribution of the magnetic field changes across its various axes, allowing for it to be positionally tracked.
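The falloff relationship described above can be sketched in a toy example. To be clear, this is purely illustrative and not FluxPose’s actual algorithm: the beacon strength and distance are made-up numbers, and the sketch recovers only range along one axis, whereas real systems solve for full 6DOF pose from multi-axis field measurements.

```python
# Toy illustration of range-from-field-strength: an idealized on-axis
# dipole field magnitude falls off with the cube of distance, so a
# tracker that measures field strength can invert that relationship
# to estimate how far it is from the beacon.

def field_magnitude(beacon_moment: float, distance: float) -> float:
    """Idealized on-axis dipole field magnitude (arbitrary units)."""
    return beacon_moment / distance ** 3

def estimate_distance(beacon_moment: float, measured_b: float) -> float:
    """Invert the 1/r^3 falloff to recover distance from the beacon."""
    return (beacon_moment / measured_b) ** (1.0 / 3.0)

beacon_moment = 8.0  # hypothetical calibrated beacon strength
b = field_magnitude(beacon_moment, 0.5)  # simulate a reading at 0.5 m
print(round(estimate_distance(beacon_moment, b), 3))  # 0.5
```

Note how steep the 1/r³ curve is: field strength drops eightfold every time distance doubles, which is one reason these systems work best at close range and are sensitive to interference.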

And while those magnetically-tracked peripherals listed above don’t suffer from optical occlusion, they can be affected by external magnetic fields, ferromagnetic materials in the tracking volume, and conductive materials near the emitter or sensor. These things typically reduce tracking quality, making them less reliably accurate than optical (Quest 3) or laser-positioned systems (SteamVR base stations).

Granted, I haven’t tried FluxPose yet, although I don’t think those drawbacks are nearly as important in full-body tracking as they might be in actual motion controllers, which require much higher accuracy. A few millimeters’ discrepancy in your foot’s position really doesn’t matter as much as it might if you were reaching out and trying to grab something with a magnetically-tracked controller.

Provided Road to VR doesn’t get to go hands-on in the coming months, I’ll be keeping my eyes peeled for videos and articles as we move closer to the campaign’s close next month.

Filed Under: News, VR Development, XR Industry News

Pico Reportedly Releasing Vision Pro Competitor in 2026 with Self-developed Chip

November 26, 2025 From roadtovr

Zhenyuan Yang, Vice President of Technology at Pico parent company ByteDance, reportedly revealed plans for Pico’s next XR headset, which is said to sport a self-developed display chip and 4,000 PPI microOLED display.

The News

According to Chinese news outlet Science and Technology Innovation Board Daily (via Nweon), Yang was speaking at ByteDance’s annual scholarship award ceremony when he mentioned specific plans to release a new Pico XR headset in 2026.

Development of the self-developed chip began in 2022, Yang reportedly revealed on stage, noting the chip is now in mass production. It’s said to overcome real-time processing bottlenecks in high-resolution, high-frame-rate mixed reality video, and is capable of reducing system latency to about 12 ms while maintaining high-precision image quality.

It’s also said to improve performance in SLAM, motion compensation, and inverse-distortion workloads, which demand high compute efficiency on low-power devices, Science and Technology Innovation Board Daily reports.

Image courtesy PICO

Supposedly slated to launch in 2026, the headset will pair this chip with a custom microOLED display said to approach 4,000 PPI—slightly higher than Apple Vision Pro’s 3,386 PPI.

According to the report, Pico’s microOLED display reaches an average 40 PPD (over 45 at center), and addresses brightness limitations by incorporating microlens (MLA) technology and optical compensation for uniform color and luminance. Additionally, Pico is also developing its own data-capture systems to train advanced eye-tracking, gesture-tracking, and spatial-understanding models.
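For context on the PPI vs. PPD distinction above: PPI measures pixel density on the physical panel, while PPD measures how many pixels the optics spread across each degree of your view. A rough average PPD is just horizontal pixels divided by horizontal field of view. The pixel count and FOV below are hypothetical, chosen only to show how such a panel could land near the reported ~40 PPD average:

```python
# Rough pixels-per-degree (PPD) estimate: panel PPI is fixed by the
# display, but perceived PPD depends on how wide a field of view the
# optics stretch those pixels across. The figures here are hypothetical.

def average_ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average pixels per degree across the horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

print(round(average_ppd(3840, 96), 1))  # 40.0
```

This also explains why center PPD can exceed the average (over 45, per the report): lens distortion profiles typically concentrate more pixels per degree at the center of the view than at the edges.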

Yang emphasized that since 2023, ByteDance has shifted Pico’s strategy away from aggressive content and marketing spending toward long-term technological investment, increasing XR R&D rather than retreating from the market.

“In 2023, we decided to reduce our investment in content and marketing, and instead focus more firmly on our technology strategy,” Yang said (machine translated from Chinese). “This was because the hardware experience of our products was not yet mature enough to support large-scale market applications. This adjustment led to some misunderstandings at the time, with many people saying that ByteDance was no longer pursuing this direction. In fact, quite the opposite.”

This follows an initial report from The Information this summer, which alleged Pico was developing a pair of slim and light MR “goggles,” reportedly codenamed ‘Swan’, which are said to weigh just 100 grams.

My Take

More competition is great, although US-based audiences hoping for a new Vision Pro competitor from Pico may be left waiting.

The company’s headsets are typically only available in China, East and Southeast Asia, and Europe—but not in North America, and not for lack of trying either. An additional stumbling block: Pico headsets have typically been priced above Meta’s equivalents, which has limited their appeal in Meta-supported regions.

Still, ByteDance, the parent company behind TikTok and Chinese equivalent platform Douyin, has actually overtaken Meta in revenue, putting the parent company in a better position than ever to bolster its XR platform as a premium offering globally.

Filed Under: News, VR Development, XR Industry News

New Apple Immersive Content Coming Soon to Vision Pro From Real Madrid and Red Bull

November 25, 2025 From roadtovr

Apple announced the next slate of immersive content is on its way to Vision Pro, this time bringing an immersive documentary from Real Madrid and some extreme sports from Red Bull.

First reported by GQ Spain and later confirmed by Apple, next year Apple and Spanish football club Real Madrid are teaming up on a new immersive documentary, coming exclusively to Vision Pro.

The documentary, which hasn’t been named yet, was filmed with over 30 Blackmagic immersive cameras during a 2025-26 Champions League match pitting Real Madrid against Italian football club Juventus.

Apple says the immersive documentary “brings viewers inside the world’s most decorated club, capturing moments from practice to the pitch with a level of access that fans have never experienced before.”

Also coming to Vision Pro is the first installment of World of Red Bull in December, which Apple announced a few months ago.

Image courtesy Red Bull

World of Red Bull is a new series of immersive experiences that will start with ‘Backcountry Skiing’, featuring the world’s top freeskiers taking on the wilderness of Revelstoke, British Columbia. That’s scheduled to land on Vision Pro on December 4th.

The next episode, called ‘Big-Wave Surfing’, is slated to let viewers follow elite surfers off the remote coast of Teahupoʻo, Tahiti, which is scheduled to arrive sometime next year.

Filed Under: Apple Vision Pro News & Reviews, News
