
VRSUN

Hot Virtual Reality News

HOTTEST VR NEWS OF THE DAY


Half the Size & Half the Price is What Vision Pro Needs to Take Off

May 6, 2025 From roadtovr

Apple has set the bar for UX on a standalone headset. As soon as the company can get the same experience into a smaller and cheaper package, it’s going to become significantly more appealing to a wider range of people.

Apple has billed Vision Pro as “tomorrow’s technology, today.” And frankly, that feels pretty accurate if we’re talking about the headset’s core user experience, which is far beyond other products on the market. Vision Pro is simple and intuitive to use. It might not do as much as a headset like Quest, but what it does do, it does extremely well. But it’s still undeniably big, bulky, and expensive… so much so that my recommendation is that it’s not worth buying for most people.

And that’s probably why there seems to be a broadly held notion that Vision Pro is a bad product… a rare flop for Apple. But as someone who has used the headset since launch, I can plainly see all the ways it is superior to what else is out there.

Saying Vision Pro is a bad product is a bit like saying a Ferrari is a bad car for not being as widespread as a Honda Accord.

I don’t know if the first generation of Vision Pro met Apple’s sales expectations or fell short of them. But what I do know is that the headset offers an incredibly compelling experience that’s significantly held back by its price and size.

If Apple can take the exact same specs, capabilities, and experience, and fit them into something that’s half the size and costs half as much, I’m certain the headset will see a massive boost in demand.

A more compact Vision Pro concept | Photo generated by Road to VR

Cutting it down to half the size would mean bringing it down to around 310 grams; that certainly wouldn’t be easy, but it’s also not entirely unrealistic, especially if Apple sticks to an off-board battery. After all, Bigscreen Beyond is around 180 grams. It might not be a standalone headset, but it shows how compact the housing, optics, and displays can be.

And half the cost would mean a price tag of roughly $1,750. Still not cheap compared to most headsets out there, but significantly more attainable, especially if Apple can market it as also being the best TV most people will have in their home.
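As a rough sanity check on those two targets, here’s a back-of-envelope sketch. The current weight (~620g) and launch price ($3,499) are assumptions for illustration, not figures from this article; actual weight varies with the Light Seal and head band.

```python
# Back-of-envelope check of the "half the size, half the price" targets.
# Assumed current figures (not from the article): ~620g, $3,499 at launch.
current_weight_g = 620
current_price_usd = 3_499

target_weight_g = current_weight_g / 2   # ~310g, matching the article's estimate
target_price_usd = current_price_usd / 2 # ~$1,750

print(f"target weight: {target_weight_g:.0f} g")  # target weight: 310 g
print(f"target price: ${target_price_usd:,.0f}")  # target price: $1,750
```

For scale, the 180g Bigscreen Beyond mentioned above sits well under that 310g target, though it offloads compute and power to a PC rather than running standalone.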

This might seem obvious. Making any tech product smaller and cheaper is a good thing.

But my point here is that Vision Pro is disproportionately held back by its size and cost. It has way more to be gained by halving its size and cost than Quest, for instance, because Quest’s core UX is still very clunky.

Fitting the Quest experience into something half the size and half the cost would be nice, but the core UX would still be holding it back in a big way.

On the other hand, Vision Pro feels like its core UX is just waiting to be unleashed… halving the size and cost wouldn’t just be nice, it would be transformative.

Of course this is much easier said than done. After all, you might counter that the very reason why Vision Pro’s core UX is so great is because it costs so much. It must be the expensive hardware that makes the difference between Quest and Vision Pro.

While this is perhaps true in some specific cases, in many more cases it’s the software experience that makes Vision Pro excel in usability. For instance, we explained previously that Quest 3 actually has higher effective resolution than Vision Pro, but it’s the thoughtful software design of Vision Pro that leads most people to the conclusion that Vision Pro looks much better visually.

And when I say that Vision Pro will take off when it reaches half the size and half the price, I’m not even factoring in several key improvements that will hopefully come with future versions of the headset (like sharper passthrough with less motion blur and some enhancements to the software).

Apple has set a high bar for how its headset should feel and how easy it is to use. The question now is not if, but when, the company can deliver the same experience in a smaller and less expensive package.

Filed Under: Apple Vision Pro News & Reviews, News, XR Industry News

Spacetop Launches Windows App to Turn Laptops into Large AR Workspaces

May 2, 2025 From roadtovr

Late last year, Sightful announced it was cancelling its unique laptop with built-in AR glasses, instead pivoting to build a version of its AR workspace software for Windows. Now the company has released Spacetop for Windows, which lets you transform your environment into a private virtual display for productivity on the go.

Like its previous hardware, Spacetop works with XREAL AR glasses; however, the new subscription-based app targets a much broader set of AI PCs, including the latest hardware from Dell, HP, Lenovo, Asus, Acer, and Microsoft.

Previously, the company was working on its own ‘headless’ laptop of sorts, which ran an Android-based operating system called SpaceOS. In October 2024, however, Sightful announced it was cancelling its Spacetop G1 AR workspace device, which was slated to cost $1,900, and refunding customers.

At the time, Sightful said the pivot came down to just how much neural processing units (NPUs) could improve processing power and battery efficiency when running AR applications.

Image courtesy Sightful

Now, Sightful has released its own Spacetop Bundle at $899, which includes XREAL Air 2 Ultra AR glasses (regularly priced at $699) and a 12-month Spacetop subscription (renews annually at $200).

Additionally, Sightful is selling optional optical lenses at an added cost, including prescription single-vision lens inserts for $50, and prescription progressive-vision lens inserts for $150.

Recommended laptops include the Dell XPS Core Ultra 7 (32GB), HP Elitebook, Lenovo Yoga Slim, ASUS Zenbook, Acer Swift Go 14, and Microsoft Surface Pro for Business (Ultra 7); however, Sightful notes this list isn’t exhaustive, as the cadre of devices which integrate Intel Core Ultra 7/9 processors with Meteor Lake architecture (or newer) is continuously growing.

Key features include:

  • Seamless access to popular apps: Spacetop works with consumer and business apps
    that power productivity every day for Windows users
  • Push, slide, and rotate your workspace with intuitive keystrokes
  • Travel mode that keeps your workspace with you on the go, whether in a plane, train, coffee shop, Ubering, or on your sofa
  • Bright, crystal-clear display that adjusts to lighting for use indoors and out
  • Natural OS experience, designed to feel familiar yet unlock the potential of spatial computing vs. a simple screen extension
  • All-day comfort with lightweight glasses (83g)
  • Massive 100” display for a multi-monitor / multi-window expansive workspace
  • Ergonomic benefits help avoid neck strain, hunching, and squinting at a small display

Backed by over $61M in funding, Sightful was founded in 2020 by veterans from PrimeSense, Magic Leap, and Broadcom. It is headquartered in Tel Aviv with offices in Palo Alto, New York, and Taiwan. You can learn more about Spacetop for Windows here.

Filed Under: AR Development, ar industry, News, XR Industry News

Quest Devs Can Now Publish Apps That Use the Headset’s Cameras to Scan the World

May 1, 2025 From roadtovr

While Meta’s Quest has always relied heavily on cameras for tracking the location of the headset, controllers, and the world around the user, developers haven’t had the same privileged access to the headset’s cameras. Earlier this year, Meta gave developers the ability to experiment with direct access to the headset’s cameras in private projects; starting this week, developers can publicly release apps that make use of the new feature.

This week’s update of the Passthrough Camera API for Quest means that developers can now publish apps to the Horizon store that directly access the front-facing cameras of Quest 3 and 3S. This opens the door to third-party applications which can scan the world around the user to understand more about it. For instance, developers could add computer-vision capabilities to track objects or people in the scene, or to build a map of the environment for analysis and interaction.

For a long time this was impossible due to limitations Meta placed on what developers could and couldn’t do with the headset’s hardware. Despite computer-vision capabilities being widely available to developers on smartphones, Meta was hesitant to allow the same on its headsets, apparently due to privacy concerns (and surely amplified by the many privacy controversies the company has faced in the past).

Previously, third-party apps could learn some information about the world around the user—like the shape of the room and objects within it—but this information was provided by the system in a way that prevented apps from directly seeing what the cameras could see. This made it possible for developers to build mixed reality applications that were, to some extent, aware of the space around the user. But it made some use-cases difficult or even impossible; for example, tracking a specific object held by the user.

Last year Meta announced it would finally unlock direct access to the headset’s cameras. In March, it began offering an experimental version of the capability to developers, allowing them to build apps that accessed the headset’s cameras. But they weren’t allowed to publish those apps to the public, until now.

The company has also specified the technical capabilities and performance of the cameras that developers can access on Quest 3 and 3S:

  • Image capture latency: 40-60ms
  • GPU overhead: ~1-2% per streamed camera
  • Memory overhead: ~45MB
  • Data rate: 30Hz
  • Max resolution: 1280×960
  • Internal data format: YUV420
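Those specs make it easy to estimate how much raw image data an app would ingest per streamed camera. A quick sketch, using only the figures from the list above (the 1.5 bytes-per-pixel figure follows from the YUV420 format itself):

```python
# Estimate per-camera data throughput from Meta's published specs:
# 1280x960 YUV420 frames at 30Hz. YUV420 stores 12 bits (1.5 bytes)
# per pixel: full-resolution luma plus 2x2-subsampled chroma planes.
width, height, fps = 1280, 960, 30
bytes_per_pixel = 1.5  # YUV420

frame_bytes = int(width * height * bytes_per_pixel)  # 1,843,200 bytes per frame
throughput = frame_bytes * fps                       # bytes per second

print(f"frame size: {frame_bytes / 1e6:.2f} MB")   # frame size: 1.84 MB
print(f"throughput: {throughput / 1e6:.1f} MB/s")  # throughput: 55.3 MB/s
```

Note also that with 40–60ms capture latency against a 33ms frame period at 30Hz, any computer-vision pipeline built on this API has to tolerate roughly one to two frames of lag behind the real world.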

Meta says that a developer’s use of camera data on Quest is covered under its Developer Data Use Policy, including a section on “Prohibited Uses of User Data,” which prohibits certain uses of data, including to “perform, facilitate, or provide tools for surveillance,” and “uniquely identifying a device or user, except as permitted [in the policy].”

Filed Under: Meta Quest 3 News & Reviews, News, XR Industry News

Snapchat CEO to Keynote AWE 2025 as Company Aims to Strengthen Its Position in XR

April 22, 2025 From roadtovr

The CEO of Snap Inc, the company behind Snapchat and the Spectacles AR glasses, will take the stage at AWE 2025 in June to highlight the company’s latest developments in AR. The prominent placement on the event’s schedule comes as Snap aims to strengthen its foothold in the XR industry.

Snap may be one of the only companies currently offering fully standalone AR glasses, but the company is still seen as an outsider among the broader XR community.

That’s partly because Snap is approaching its AR ambitions from a different angle than other major players in the space.

Standalone headsets like Quest join the likes of PC VR & PSVR 2 as primarily gaming-focused devices. Then there’s Apple’s Vision Pro which focuses on entertainment and productivity.

Meanwhile, Snap’s Spectacles are born out of the company’s social-centric approach to AR, which emphasizes both location-based and co-located experiences (meaning experiences which are tied to real-world locations and those which involve multiple users in the same physical space).

Evan Spiegel | Image courtesy Snap Inc

This June, Snap CEO and co-founder Evan Spiegel will take to the main stage at AWE 2025—one of the largest and longest-running XR-focused conferences in the world—in an effort to share the company’s vision for AR and to strengthen bridges into the existing XR industry.

The event is being held in Long Beach, California from June 10th to 12th, and it’s expected to host more than 6,000 attendees, 300 exhibitors, 400 speakers, and a 150,000 sqft expo floor. Early-bird tickets are still available, and Road to VR readers can get an exclusive 20% discount.

Spiegel’s keynote will be flanked by presentations from Qualcomm and XREAL, peers which are well established in the conference and the industry at large.

Ironically, Snap’s commitment to building an AR platform from the ground up is one reason why it has remained something of an outsider in the XR space.

The company isn’t just building its own AR glasses, it’s also building Snap OS, a bespoke operating system for Spectacles. And it has its own authoring tool—Lens Studio—which developers need to learn to build for the headset, rather than using off-the-shelf tools like Unity. The unique approach and device capabilities mean that porting existing XR content isn’t straightforward.

Yet its commitment to building its platform from the ground up shows the company’s authentic belief in the XR space.

Speaking recently to Road to VR, Snap’s VP of Hardware, Scott Myers, said that the company is building Spectacles to be more than just an extension of Snapchat. The company believes glasses like Spectacles will one day replace smartphones altogether. This belief is guiding the standalone nature of Spectacles, which is designed to work without a phone or tethered compute unit.

“We want people to look up [through their glasses], not down [at their smartphone],” Myers said.

Beyond its emphasis on social and location-based AR experiences, Myers said the company is uniquely focused on making its platform the best in the world for developers, by building great tools and iterating aggressively on feedback.

Myers said he personally uses Spectacles “nearly every day” to test new features and experiences. “We’re learning together with developers to make developing [as easy as possible],” he said.

Snap will need to play its cards right to position itself for success in the coming years, as tech giants Meta, Apple, and Google are all vying to be the first to build a pair of mainstream AR glasses.


Road to VR is proud to be the Premier Media Partner of AWE USA 2025, allowing us to offer readers an exclusive 20% discount on tickets to the event.

Filed Under: News, XR Industry News

‘Wonder’ is a Collection of Mesmerizing Mixed Reality Experiences Coming Soon from ‘Gadgeteer’ Studio

April 21, 2025 From roadtovr

Metanaut, the studio behind Rube Goldberg-inspired physics sandbox Gadgeteer (2019), announced they’re releasing an anthology of virtual and mixed reality experiences designed to mesmerize.

Called Wonder, the experience is slated to land on Quest 3/S within the “next couple of months,” aiming to deliver what Metanaut calls a “perfect escape” from your busy life.

Initially announced back in late 2021, Wonder is set to feature three experiences when it launches this Spring, with more coming post-launch:

  • Ancient Ruins: your familiar space morphs into a mysterious cave that gets swallowed up by a black hole.
  • Jellyfish Bloom: a mesmerizing deep-sea spectacle filled with bioluminescent jellyfish that light up your walls and furniture.
  • Parallel Worlds: where reality-bending portals reveal alternate versions of your environment in ice, dots, and more.

Metanaut says Wonder is being developed with “clever and advanced rendering techniques” that push Quest 3 to deliver photorealistic visuals, thanks to the inclusion of scanned room meshes. It’s also a hand tracking-only title, letting you put down your controllers.

“This technical achievement is paired with custom-crafted, beautiful audio from award-winning music studio, Ictus Audio, whose accolades include winning the John Lennon Songwriting Award,” the studio says.

“The XR industry seems to have shifted from serving adults to kids, and from high-quality premium titles to free-to-play slop,” says Peter Kao, founder of Metanaut. “With Wonder, we wanted to create a magical experience for an underserved audience—one who is older and one who wants to experience the highest audiovisual spectacle possible on latest headsets.”

Wonder is now available for pre-order for $4, and is expected to increase in price as more content is released after launch. You can pre-order it here exclusively for Quest 3 and Quest 3S.

Filed Under: Meta Quest 3 News & Reviews, News

Bigscreen Says Tariffs Will Not Increase Price of Beyond 2 PC VR Headset

April 17, 2025 From roadtovr

Sweeping tariffs levied by US President Donald Trump have led to uncertainty over just how the XR hardware industry will react. Now, Bigscreen says its recently announced Beyond 2 PC VR headset will not see a price increase as a result.

Bigscreen released a statement on X wherein the company maintains its Beyond 2 headset, priced at $1,019, isn’t getting a price bump despite increased tariffs applied to many goods manufactured in China.

According to Bigscreen, the company sources Beyond 2 components and assemblies from a variety of regions, including China, Japan and Europe. While “significant final assembly and testing” take place at its Los Angeles-based factory, many of the most expensive components and assemblies are made outside the US, which have “dramatically increased […] costs.”

“We expect to absorb all costs of the tariffs, trade war, and supply chain disruptions. We will not be increasing prices in any form for the foreseeable future,” Bigscreen says. “Customers will not pay any further shipping fees, tariffs, import duties, taxes, or VAT,” the company adds, noting that the final checkout amount includes no hidden fees.

Bigscreen Beyond 2 | Image courtesy Bigscreen

While the company says it “expected this may happen long before we announced Bigscreen Beyond 2,” the United States’ Harmonized Tariff Schedule is still evolving.

President Trump issued an executive order last week exempting many electronics, such as smartphones, monitors, and laptops, from the combined 145% reciprocal tariff rate.

Although not specifically named, VR headsets are expected to benefit from the exemption, as UploadVR notes; however, these devices will still be subject to a 20% tariff which was put in place in March 2025.

Prior to the exemption, Shanghai-based PC VR headset maker Pimax was the first to address tariffs, announcing it would offload some of the pressure onto its ‘Pimax Prime’ software subscription, keeping the final ‘all-in’ price of its flagship Crystal Super at parity with its ~$1,690 pre-tariff pricing, albeit with the inclusion of a $95 US-only surcharge.

Filed Under: News, PC VR News & Reviews

[Industry Direct] From Founder to XR Newbie: Why I Bet on Immersed (and how you can too)

April 16, 2025 From roadtovr

Industry Direct by Kit Navock

Industry Direct is our program for sponsors who want to speak directly to the Road to VR newsletter audience. Industry Direct posts are written by sponsors with no involvement from the Road to VR editorial team. Industry Direct sponsors help make Road to VR possible.

Hey friends, I’m Kit Navock, the new CMO at Immersed. Not long ago, I sold my company. And like a lot of founders post-acquisition, I was taking stock—trying to figure out what kind of work (and what kind of people) I actually want to build with next. Long story short: I joined Immersed as its new CMO because I now see the company’s vision. And now, those who can also see the vision can join our journey by becoming an investor before we continue to accelerate! But first, let me tell you how I ended up here.

Immersed had been on my radar for a while. I was an early investor years ago and got to know the team slowly, the organic way—movie nights, shared meals, and bouncing around ideas at their Austin office. At some point, I even hired someone from their friend group at my acquired company. What started as curiosity turned into deep respect. These weren’t just smart people — they were good people — gritty, ambitious, humble, and mission-driven.

Fast-forward to a couple of months ago, I texted Renji (Immersed’s founder) about some basketball news. His reply?

“You should come work with us.”

It wasn’t an obvious choice. I only took his call to amuse him. Even if I hadn’t known him for years, he’s a pretty compelling guy. The thought stuck with me. I thought about it for days. I called my inner circle—the friends and mentors I trust most. I told them I was living the good life: consulting 20 hours a week, fresh off selling a company, finally slowing down a bit. But this opportunity just felt… different.

And they all said the same thing: You should go for it.

I took a lower salary than I’ve had in years, with equity that might not be liquid for a while. But I wasn’t joining for a quick win—I was joining because I believed in the people and the product, and in something that might become the next tech giant. And when I finally got hands-on with what they’re building, my mind was blown. It all finally clicked.


Visor: The XR Device That Shouldn’t Exist, But Does

I’ll be real—I’m newer to XR.

Not to tech in general, but to this AR/VR world. I wasn’t the guy duct-taping sensors to my face and hacking OpenVR builds back in 2016. I’m a mainstreamer. A noob. I had bought an Apple Vision Pro just to see what spatial computing felt like.

It generally worked. I was more productive, more focused. Then I tried Visor—Immersed’s own headset—and it was everything I didn’t know I wanted.

Visor weighs ~186g (lighter than my phone), looks like thick sunglasses, and runs standalone with dual 4K micro-OLED displays and a 3-hour battery. But more importantly, it’s built for real tools: VS Code, Figma, email, Blender, terminal, tabs, Netflix. It didn’t feel like an afterthought bolted onto my computer.

It’s not a devkit.
It’s not a toy.
It’s not heavy.
It’s not $3,499.
It’s just… the one that fits into an actual workday.

I may be new here, but even I can tell: this isn’t some prototype.
It’s the first headset that made me think, “Oh—this is ready for the rest of the world to get onboard.”

From App to Platform (and AI Co-Pilot)

If you’ve used Immersed (and you probably have), you already know: it’s the most-used AR/VR productivity app in the world. Multi-monitor AR/VR workspaces. Low-latency streaming. Real-time collaboration across platforms. Tens of millions of sessions. Thousands of power users grinding through real work in XR.

Now, with Visor, we’re going beyond just an app. It’s the front door to a full spatial computing platform—an OS built for focus, deep work, and native 3D tools (not floating 2D windows). In my head, this is sort of how Steve Jobs thought about Apple; he wanted to build an entire ecosystem.

For me, what really sealed this line of thinking was Curator AI—a built-in co-pilot that understands my workflow, reduces context switching, and quietly boosts my productivity. It’s not a bolt-on gimmick—it’s the connective tissue of the platform. I don’t think I’m alone in wishing Siri were this capable right now, but it’s not.

Moreover, the Immersed platform, combined with Visor and Curator AI, moves us beyond just LLMs and into physical space. And because it’s OpenXR-compatible, devs can build right on top—whether it’s tools, agents, or entire apps.

This isn’t about porting your desktop into 3D. It’s about building spatial productivity from the ground up—with AI baked in from day one. It’s built for the builders.

Why This Matters Now

I didn’t join Immersed just to talk about screens floating in space and talking avatars. I joined because I believe spatial computing is going to reshape how we work and live. The combination of immersive environments, native tools, and always-on AI support is a powerful leap—and it’s arriving faster than most people think. I think the people at Immersed have the vision, the tech, and the people to make this happen. That’s why I joined.

This isn’t just for hobbyists. What Immersed is building is for engineers, designers, traders, artists, founders, filmmakers—anyone who works on a screen. And as AI agents and humanoid robotics continue to evolve, the value of an immersive spatial platform that just works is only going to grow.

Visor is more than a headset. The Immersed platform is more than just a virtual reality space where we can work. Curator is more than just an LLM AI agent. Together, they form an entire ecosystem for us to all level up.

We’re super pumped to share this with all of you.

If you’ve been waiting for someone to build the right headset—and the right company behind it—now’s your chance to be part of it. Join Immersed in bringing spatial computing to the masses.

👉 Own Stock in Immersed
Before the rest of the world catches up.

Let’s build the future—together.
— Kit

Filed Under: Sponsored Newsletter

Report: Apple CEO “cares about nothing else” Than Building Breakout AR Glasses Before Meta

April 16, 2025 From roadtovr

Apple is rumored to be working on two versions of Vision Pro; however, a new report from Bloomberg’s Mark Gurman alleges the Cupertino tech giant is aiming to beat Meta to the punch with a pair of AR glasses.

Citing someone with knowledge of the matter, the report maintains Apple CEO Tim Cook has made development of AR glasses a top priority, as the company plans to release such a device before Meta.

“Tim cares about nothing else,” the source told Bloomberg. “It’s the only thing he’s really spending his time on from a product development standpoint.”

Creating the sort of all-day AR glasses Apple is aiming for is still a multi-year challenge, though. Packing high-resolution displays, a powerful chip, and a high-density (but very small) battery for all-day power into a compact frame presents a number of technical challenges. And creating such a device at a consumer price point is arguably the biggest of them all.

Meta’s Orion AR Glasses | Image courtesy Meta

While Apple is reluctant to go on record, Meta has been fairly transparent with its XR roadmap. In late 2024, Meta unveiled its Orion AR glasses, which the company hopes will lead to the productization of such a device before 2030, priced “at least in the space of phone, laptop territory.” For now, Orion costs Meta somewhere in the neighborhood of $10,000 per unit, largely owing to its custom silicon carbide waveguide optics.

And although Orion itself isn’t being productized right away, Meta is well on its way in the XR space, having not only produced multiple generations of Quest standalone headsets, but also its Ray-Ban Meta Glasses, which are laying the foundation for its AR glasses of the near future.

The smart glasses, built in partnership with EssilorLuxottica, have been very successful too—so much so that Meta is reportedly preparing a next generation of the device which will include a monoscopic heads-up display. Granted, those aren’t augmented reality glasses, but rather still smart glasses. You can learn more about the differences between the two here.

Ray-Ban Meta Glasses, Image courtesy Meta, EssilorLuxottica

For now, Gurman maintains Apple is working on new versions of Apple Watch and AirPods which will be embedded with AI-enabled cameras, however the Fruit Company is still internally debating whether to counter Meta with a pair of smart glasses of their own.

According to Gurman, Apple has been developing such a device designed to work with Siri and Apple Visual Intelligence, although the company is unsure whether it will allow the glasses to actually capture media, owing to the company’s stance on user privacy.

This follows a wider leadership shakeup at Apple, reported by Bloomberg last month, which also saw Apple’s Vision Products Group (VPG) redistributed across the company.

VPG, created in 2023 and tasked with developing Vision Pro, notably departed from the “functional” management structure introduced by Steve Jobs in the early ’90s. Redistributing the group essentially puts Vision Pro’s product development back in line with the company’s other hardware, including iPhone, iPad, etc.

Filed Under: Apple Vision Pro News & Reviews, News

Pimax Updates Prices in Response to US-China Trade War, Using Software Subscription to Absorb Costs

April 11, 2025 From roadtovr

China-based PC VR headset creator Pimax has issued a statement addressing the impact of the recent US–China trade war on its operations, particularly concerning its Crystal Super VR headset. It’s going to be slightly more expensive for US-based customers, but Pimax’s recent subscription-based payment structure seems to be offsetting much of the costs.

Announced back in April 2024, Crystal Super is the company’s next flagship PC VR headset, offering a base 57 PPD version with QLED panels that features a resolution of 3,840 x 3,840 pixels per eye and a 120-degree field-of-view (FOV). It’s still only available in pre-order, although shipping is expected to start soon.

At the time of this writing, the US has levied a 145% tariff on all goods manufactured in China. This is bad news for XR headset creators the world over, as China is by far the segment’s largest manufacturing hub. And Shanghai-based Pimax is seemingly the first of the bunch to announce price changes in response.

Pimax released a blogpost wherein it describes just what’s happening to US-based customers in relation to new tariffs. And it’s not as bad as you’d think.

The company says all US orders of Crystal Super placed before February 4th, 2025 will not include any extra tariff costs, however they may face a delay of about 20 days due to bulk shipments to US-based warehouses.

Pimax Crystal Super | Image courtesy Pimax

Orders placed between February 4th and April 10th will include a $75 ‘Regional Surcharge’ however, which Pimax says partially offsets increased shipping and logistics costs.

Moreover, starting April 10th, all new US orders will carry a $95 surcharge, with shipments expected to begin in June. Pimax says it’s also establishing a factory in Delaware to handle final assembly.

That said, the overall price of Crystal Super isn’t really changing. Pimax has now updated its pricing structure, and although it’s become less straightforward following the rollout of its subscription-based software pricing, the structure is actually helping to offset tariff-related costs.

Now, the base price of Pimax Crystal Super has been lowered to $799, with the remaining $885 payable later through Pimax Play with Prime—a total cost of $1,684 (excluding the US-only $95 surcharge).

For everyone else around the globe, it’s essentially a nominal change. Previously, Crystal Super was priced at $999, with the remaining Prime subscription costing $696—a total cost of $1,695. You’ll now see that local pricing has been updated to reflect the lower upfront cost.
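The before-and-after totals can be tallied directly from the figures above:

```python
# Pimax Crystal Super pricing, per the figures in the article.
old_upfront, old_prime = 999, 696   # previous pricing structure
new_upfront, new_prime = 799, 885   # updated pricing structure
us_surcharge = 95                   # applies to new US orders from April 10th

old_total = old_upfront + old_prime      # $1,695
new_total = new_upfront + new_prime      # $1,684
new_total_us = new_total + us_surcharge  # $1,779

print(old_total, new_total, new_total_us)  # 1695 1684 1779
```

In other words, non-US buyers actually come out $11 ahead under the new structure, while US buyers end up paying $84 more than the old all-in total.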

Notably, the company highlights that its 14-day trial period is still in place. For users outside the US, this could make Crystal Super slightly more attractive than before, as it requires less money upfront—the headset is still refundable if you send it back before the trial period ends, though you’ll need Prime to continue using it.

That said, Pimax is in a unique position to rebalance its costs by leveraging its expensive, but now extremely useful, subscription model. The same can’t be said for platform holders like Meta, which already subsidize hardware in an effort to make software more attractive.

While Meta hasn’t announced any price hikes, the company has actually raised headset prices in the past, with the COVID-19 pandemic forcing it to temporarily raise the price of Quest 2 from $300 to $400 back in 2022. So, we’ll just have to wait and see.

– – — – –

We’ll be following the effects of US-China trade war tariffs on XR hardware closely, so check back soon for more.

Filed Under: News, PC VR News & Reviews

Researchers Catalog 170+ Text Input Techniques to Improve Typing in XR

April 8, 2025 From roadtovr

Efficient text entry without an actual keyboard remains an industry-wide challenge for unlocking productivity use-cases in XR headsets. Researchers have created a comprehensive catalog of existing text entry techniques to codify different methods and analyze their pros and cons. By making the catalog freely available, the researchers hope to give others a head start on creating new and improved techniques.

Guest Article by Max Di Luca

Massimiliano Di Luca leads the VR Lab at the University of Birmingham, UK, where he is an Associate Professor in the School of Psychology and in the School of Computer Science. He previously worked at Meta where he pioneered work on hand inputs and haptics for VR. His most recent collaboration with industry was recently recognized by the ACM SIGCHI 2025 awards for pioneering the interaction framework of Android XR through exemplary industry-academia collaboration, establishing foundational input methods and interaction guidelines for XR operating systems.

As immersive experiences become increasingly sophisticated, the challenge of efficient text entry remains a crucial barrier to seamless interaction in virtual and augmented reality (VR/AR). From composing emails in virtual workspaces to logging in and socializing in the metaverse, the ability to input text efficiently is essential for the usability of all applications in extended reality (XR).

To address this challenge, my team from the VR Lab at the University of Birmingham (UK), along with researchers from the University of Copenhagen, Arizona State University, the Max Planck Institute for Intelligent Systems, Northwestern University, and Google, developed the XR TEXT Trove—a comprehensive research initiative cataloging over 170 text entry techniques tailored for XR. The TEXT Trove is a structured repository of text entry techniques, paired with a series of filters that select and highlight the pros and cons of the breadth of text input methods developed for XR in both academia and industry.

These techniques are categorised using 32 codes, including 13 interaction attributes such as Input Device, Body Part (for input), Concurrency, and Haptic Feedback Modality, as well as 14 performance metrics like Words Per Minute (WPM) and Total Error Rate (TER). All in all, the number of techniques and the breadth of the attributes provide a comprehensive overview of the state of XR text entry techniques.

Several key takeaways emerge from our research. First and foremost, text input performance is inherently limited by the number of input elements (whether fingers, controllers, or other character selectors). Only multi-finger typing can approach the touch-typing speeds achievable with a keyboard on a regular PC. As visualized in the plots below, each additional input element (or finger) adds about 5 WPM for top users.

Words per minute using multiple fingers and different input devices (each dot represents one technique analyzed in the study).
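The roughly linear trend described above can be sketched as a toy model. The 5 WPM-per-element slope is the figure reported here; the single-element baseline of 20 WPM is a hypothetical illustration, not a number from the paper:

```python
# Toy linear model of the reported trend: each additional input
# element adds roughly 5 WPM for top users.
BASELINE_WPM = 20    # assumed speed with one input element (hypothetical)
WPM_PER_ELEMENT = 5  # approximate per-element gain reported in the article

def estimated_wpm(num_elements: int) -> int:
    """Rough top-user typing speed for a given number of input elements."""
    return BASELINE_WPM + WPM_PER_ELEMENT * (num_elements - 1)

for n in (1, 2, 10):
    print(n, estimated_wpm(n))
```

Under these assumptions, ten-finger input would land around 65 WPM, which illustrates why only multi-finger techniques approach PC touch-typing speeds.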

Our research also indicates that haptic feedback, the presence of external surfaces, and fingertip-only visualization are effective ways to improve typing performance. For instance, typing on surfaces (instead of in mid-air) contributes to a more comfortable and potentially more efficient typing experience. External surfaces also minimize sustained muscle strain, making interactions more comfortable and reducing the onset of Gorilla Arm Syndrome.

Finally, and most interestingly, no alternative has yet fully replaced the keyboard format, probably because it still delivers the highest words per minute, and perhaps because alternative methods come with steep learning curves. We believe the main path to typing faster in VR than on PC lies in reducing travel distances on a multi-finger keyboard via machine learning and AI. XR needs its own ‘swipe typing’ moment, the innovation that made one-finger typing on smartphones much more efficient.

In that regard, the deep dive from the XR Text Trove represents a significant step towards a more comprehensive understanding of text input in virtual and augmented reality. By providing a structured and searchable database, we aimed to offer a resource for researchers and developers alike, paving the way for more efficient and user-friendly text entry solutions in the immersive future.

As we explain in our paper, this work has the potential to significantly benefit the XR community: “To support XR research and design in this area, we make the database and the associated tool available on the XR TEXT Trove website.” The full paper will be presented at the prestigious ACM CHI conference next month in Yokohama, Japan.

Several authors in our team are co-creators of the Locomotion Vault, which similarly catalogs VR locomotion techniques in an effort to give researchers and designers a head-start on identifying and improving various methods.

Filed Under: Guest Articles, News, XR Industry News
