
VRSUN

Hot Virtual Reality News


XR Industry News

Google Partners with Prominent Eyewear Makers for Upcoming Android XR Smartglasses

May 20, 2025 From roadtovr

Google today announced that it is working with eyewear makers Warby Parker and Gentle Monster to bring the first Android XR smartglasses to market. The move mirrors Meta’s early partnership with EssilorLuxottica, the dominant eyewear maker that’s behind Meta’s Ray-Ban smartglasses.

While no specific products have been announced, Google says Warby Parker and Gentle Monster will make the first generation of Android XR smartglasses. The glasses will prominently feature Google’s Gemini AI, and some will include on-board displays for visual output.

Image courtesy Google

Warby Parker is a well-known American eyewear brand, founded in 2010, which pioneered a lower-cost, direct-to-consumer glasses business. Gentle Monster, founded in 2011, is a well-known South Korean eyewear brand with a similar approach to Warby Parker’s.

While influential, both eyewear makers pale in comparison to EssilorLuxottica, the massive eyewear and lens conglomerate behind brands like Ray-Ban and Oakley.

EssilorLuxottica and Meta partnered several years ago around their smartglasses ambitions. Things seem to be going well for the partnership as the duo has launched several iterations of the Meta Ray-Ban smartglasses featuring classic Ray-Ban designs.

Ray-Ban Meta Glasses, Image courtesy Meta, EssilorLuxottica

Google is now taking the same tack by partnering with two well-known glasses makers to ensure that it has strong brand and fashion credibility behind its upcoming Android XR smartglasses.

The company’s first pair of smartglasses, Google Glass, debuted back in 2012. Although impressively compact for the time (especially considering the inclusion of a display), the asymmetrical design of the bulky display optics was seen as socially off-putting—just a bit too weird to pass as regular glasses.

That sent Google (and others) back to the drawing board for years, waiting until the tech could advance enough to make smartglasses that looked more socially acceptable.

It’s unclear when the first Android XR smartglasses will launch, or what they might cost, but Google also said today that developers will be able to start developing for Android XR smartglasses later this year.

Filed Under: News, XR Industry News

Project Starline Immersive Videoconferencing Now Branded Google Beam, Heading to Market with HP

May 20, 2025 From roadtovr

Today at its annual I/O developer conference, Google affirmed plans to bring its Project Starline immersive videoconferencing platform to market with HP. While this partnership was confirmed last year, the product is now officially called Google Beam, with more info promised soon.

Google’s Project Starline is a platform for immersive videoconferencing which was first introduced in 2021. But rather than using a headset, the platform is built around cameras and a light-field display. The light-field display shows natural 3D depth without the need for the viewer to wear a headset or glasses. The goal, the company says, is to create a system that feels like two people are talking to each other face-to-face in the same room, rather than feeling like they are separated by a screen and cameras.

Image courtesy Google

Google has been evolving the system over the years to improve usability and quality. Today the company showed a glimpse of the latest version of the system which it says is coming to market under the name Google Beam.

Image courtesy Google

As confirmed last year, Google is working with HP to bring Google Beam to market starting this year with an initial focus on enterprise customers seeking high-quality videoconferencing. While details are still light, Google says that “HP will have a lot more to share a few weeks from now.”

Image courtesy Google

Filed Under: News, XR Industry News

Industry Insider Expects New Valve XR Headset to Launch in 2026

May 19, 2025 From roadtovr

It appears Valve has been developing a standalone XR headset, codenamed ‘Deckard’, for some time. Now, an industry insider has apparently gotten a peek at the headset’s design, calling it “quite amazing,” further noting it’s potentially arriving sometime next year.

Stan Larroque, founder of XR hardware company Lynx, confirmed in a recent X post that he has actually seen the design of Valve’s next XR headset.

The design of Valve next HMD is quite amazing!

— Stan Larroque (@stanlarroque) May 17, 2025

Larroque further confirmed that neither he nor his company Lynx, which released the Lynx R-1 mixed reality headset, is under any type of non-disclosure agreement (NDA).

Larroque tells Road to VR that Valve Deckard won’t compete against Lynx’s upcoming hardware, as they separately “address two different markets [and] price points.”

Still, beating around the bush somewhat, Larroque tells us Valve and Lynx “might share suppliers for some components,” which definitely smells like a supply chain leak.

“I would be equally pissed if Lynx nextgen ID got leaked so I won’t share more,” Larroque says in an X post. “I’m just excited for good new XR HMDs. The HMD-making world is so small, we all share the same suppliers for some components.”

Valve Patent from 2022 | Image courtesy Brad Lynch

Furthermore, he tells Road to VR that he’s heard that mass production and eventual availability is slated for 2026, which differs slightly from a previous report wherein leaker and data miner ‘Gabe Follower’ alleged Deckard would arrive by the end of 2025, priced at $1,200.

While Valve hasn’t confirmed anything yet, the rumor mill has been drumming up its fair share of speculation ever since the Deckard naming scheme was discovered by data miners in January 2021.

There have been leaked prototype designs (seen above) from 2022, as well as leaked 3D models hidden in a SteamVR update late last year (seen below), which appeared to show off a new VR motion controller, codenamed ‘Roy’.

Valve ‘Roy’ Model Leak | Image courtesy Brad Lynch

Then, last month, tech analyst and VR pundit Brad ‘SadlyItsBradley‘ Lynch reported Valve was gearing up production for the long-awaited device, evidenced by Valve’s recent importation of equipment to manufacture VR headset facial interfaces inside the USA.

Lynch alleges the equipment in question “is being provided by Teleray Group who also manufactured the gaskets for the Valve Index and HP G2 Omnicept.”

Exactly what is coming, and when, are still relatively big question marks, although it appears Valve is moving forward with its standalone XR headset at an opportune time. Provided Larroque’s supply-chain information is accurate, and Deckard is indeed coming in 2026, a number of previous reports suggest there will be some healthy competition out there when it does.

In July 2024, The Information alleged Meta is planning to release two flagship consumer headsets sometime in 2026, codenamed ‘Pismo Low’ and ‘Pismo High’. Beyond that, a competitor to Apple Vision Pro, tentatively thought of as ‘Quest Pro 2’, is reported to arrive in 2027. Meanwhile, we’re waiting for any real shred of evidence to come from Apple of any forthcoming headset.

By then, Samsung’s Project Moohan should be in the wild; slated to launch in late 2025, it will run Google’s upcoming Android XR operating system. The device is expected to bring the full-fat Android app store to an XR device for the first time, in addition to XR content.

While we’d expect Valve to skip the flashy keynotes and simply seed developers first with hardware in its usual lowkey manner, you never know when a random purchase link might just pop up on Steam, so we’ll be keeping our eyes peeled from now until whenever.

Filed Under: PC VR News & Reviews, XR Industry News

Google Teases Android Smart Glasses Ahead of I/O Developer Conference Next Week

May 16, 2025 From roadtovr

Google may be getting ready to unveil a pair of smart glasses at its Google I/O developer conference next week, ostensibly hoping to take on Ray-Ban Meta Glasses.

In a promo for Google I/O, Android Ecosystem President Sameer Samat showed off what appears to be a pair of smart glasses.

While Samat didn’t speak directly about the device, when donning the glasses, he said Google I/O attendees will have a chance to see “a few more really cool Android demos.”

Using our CSI-style enhancement abilities (aka ‘crop a YouTube screenshot’), the distinctly Ray-Ban Wayfarer-style glasses appear to have a single camera sensor on the left temple.

Image courtesy Google

There’s also what appears to be an LED above the camera sensor, likely to inform others when video or pictures are being taken, which may indicate it’s going for feature parity with Ray-Ban Meta Glasses.

The glasses’ chunky arms are also likely packed with a battery and onboard processors which, owing to Samat’s tease, are probably running some version of Google’s upcoming Android XR operating system. Notably, just under the left arm we can see a small slit close to Samat’s ear, possibly for integrated audio. Alternatively, it may not be a slit at all, but rather a button of some sort.

Meanwhile Apple may be readying its own pair of smart glasses, with a recent Bloomberg report maintaining the company is now developing a processor specifically optimized for the task.

In any case, we’re hoping to find out more at Google I/O, slated to run May 20th – 21st, where the company will feature livestreamed keynotes, developer sessions, and more. Outside of the keynote, which may mention Android XR, the event is set to include two developer talks specifically dedicated to Android XR.

We’ll of course be tuning in, and you can watch the keynote live on YouTube starting on Tuesday, May 20th at 10 AM PT.


Filed Under: News, XR Industry News

Half the Size & Half the Price is What Vision Pro Needs to Take Off

May 6, 2025 From roadtovr

Apple has set the bar for UX on a standalone headset. As soon as the company can get the same experience into a smaller and cheaper package, it’s going to become significantly more appealing to a wider range of people.

Apple has billed Vision Pro as “tomorrow’s technology, today.” And frankly, that feels pretty accurate if we’re talking about the headset’s core user experience, which is far beyond other products on the market. Vision Pro is simple and intuitive to use. It might not do as much as a headset like Quest, but what it does do, it does extremely well. But it’s still undeniably big, bulky, and expensive… my recommendation is that it’s not worth buying for most people.

And that’s probably why there seems to be a broadly held notion that Vision Pro is a bad product… a rare flop for Apple. But as someone who has used the headset since launch, I can plainly see all the ways the headset is superior to what else is out there.

Saying Vision Pro is a bad product is a bit like saying a Ferrari is a bad car for not being as widespread as a Honda Accord.

I don’t know if the first generation of Vision Pro met Apple’s sales expectations or fell short of them. But what I do know is that the headset offers an incredibly compelling experience that’s significantly held back by its price and size.

If Apple can take the exact same specs, capabilities, and experience, and fit them into something that’s half the size and costs half as much, I’m certain the headset will see a massive boost in demand.

A more compact Vision Pro concept | Photo generated by Road to VR

Cutting it down to half the size would mean bringing it to around 310 grams; certainly not easy, but also not entirely unrealistic, especially if Apple sticks with an off-board battery. After all, Bigscreen Beyond is around 180 grams. It might not be a standalone headset, but it shows how compact the housing, optics, and displays can be.

And half the cost would mean a price tag of roughly $1,750. Still not cheap compared to most headsets out there, but significantly more attainable, especially if Apple can market it as also being the best TV most people will have in their home.
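For reference, the arithmetic behind those targets, assuming the current Vision Pro’s roughly 620 gram headset weight (external battery excluded) and its $3,499 price:

    620 g / 2 ≈ 310 g
    $3,499 / 2 ≈ $1,750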

This might seem obvious. Making any tech product smaller and cheaper is a good thing.

But my point here is that Vision Pro is disproportionately held back by its size and cost. It has far more to gain from halving its size and cost than Quest, for instance, because Quest’s core UX is still very clunky.

Fitting the Quest experience into something half the size and half the cost would be nice, but the core UX would still be holding it back in a big way.

On the other hand, Vision Pro feels like its core UX is just waiting to be unleashed… halving the size and cost wouldn’t just be nice, it would be transformative.

Of course this is much easier said than done. After all, you might counter that the very reason why Vision Pro’s core UX is so great is because it costs so much. It must be the expensive hardware that makes the difference between Quest and Vision Pro.

While this is perhaps true in some specific cases, in many more it’s the software experience that makes Vision Pro excel in usability. For instance, we explained previously that Quest 3 actually has higher effective resolution than Vision Pro, but it’s the thoughtful software design of Vision Pro that leads most people to the conclusion that Vision Pro looks much better visually.

And when I say that Vision Pro will take off when it reaches half the size and half the price, I’m not even factoring in several key improvements that will hopefully come with future versions of the headset (like sharper passthrough with less motion blur and some enhancements to the software).

Apple has set a high bar for how its headset should feel and how easy it should be to use. The question now is not if, but when, the company can deliver the same experience in a smaller and less expensive package.

Filed Under: Apple Vision Pro News & Reviews, News, XR Industry News

Spacetop Launches Windows App to Turn Laptops into Large AR Workspaces

May 2, 2025 From roadtovr

Late last year, Sightful announced it was cancelling its unique laptop with built-in AR glasses, instead pivoting to build a version of its AR workspace software for Windows. Now the company has released Spacetop for Windows, which lets you transform your environment into a private virtual display for productivity on the go.

Like its previous hardware, Spacetop works with XREAL AR glasses; however, the new subscription-based app targets a much broader set of AI PCs, including the latest hardware from Dell, HP, Lenovo, Asus, Acer, and Microsoft.

Previously, the company was working on its own ‘headless’ laptop of sorts, which ran an Android-based operating system called SpaceOS. However, Sightful announced in October 2024 that it was cancelling its Spacetop G1 AR workspace device, which was slated to cost $1,900, and refunding customers.

At the time, Sightful said the pivot came down to just how much neural processing units (NPUs) could improve processing power and battery efficiency when running AR applications.

Image courtesy Sightful

Now, Sightful has released its own Spacetop Bundle at $899, which includes XREAL Air 2 Ultra AR glasses (regularly priced at $699) and a 12-month Spacetop subscription (renews annually at $200).

Additionally, Sightful is selling optional optical lenses at an added cost, including prescription single-vision lens inserts for $50, and prescription progressive-vision lens inserts for $150.

Recommended laptops include the Dell XPS Core Ultra 7 (32GB), HP Elitebook, Lenovo Yoga Slim, ASUS Zenbook, Acer Swift Go 14, and Microsoft Surface Pro for Business (Ultra 7); however, Sightful notes this list isn’t exhaustive, as the range of devices integrating Intel Core Ultra 7/9 processors with Meteor Lake architecture (or newer) is continuously growing.

Key features include:

  • Seamless access to popular apps: Spacetop works with consumer and business apps
    that power productivity every day for Windows users
  • Push, slide, and rotate your workspace with intuitive keystrokes
  • Travel mode that keeps your workspace with you on the go, whether in a plane, train, coffee shop, Ubering, or on your sofa
  • Bright, crystal-clear display that adjusts to lighting for use indoors and out
  • Natural OS experience, designed to feel familiar yet unlock the potential of spatial computing vs. a simple screen extension
  • All-day comfort with lightweight glasses (83g)
  • Massive 100” display for a multi-monitor / multi-window expansive workspace
  • Ergonomic benefits help avoid neck strain, hunching, and squinting at a small display

Backed by over $61M in funding, Sightful was founded in 2020 by veterans from PrimeSense, Magic Leap, and Broadcom. It is headquartered in Tel Aviv with offices in Palo Alto, New York, and Taiwan. You can learn more about Spacetop for Windows here.

Filed Under: AR Development, ar industry, News, XR Industry News

Quest Devs Can Now Publish Apps That Use the Headset’s Cameras to Scan the World

May 1, 2025 From roadtovr

While Meta’s Quest has always relied heavily on cameras for tracking the location of the headset, controllers, and the world around the user, developers haven’t had the same privileged access to the headset’s cameras. Earlier this year, Meta gave developers the ability to experiment with direct access to the headset’s cameras in private projects; starting this week, developers can publicly release apps that make use of the new feature.

This week’s update of the Passthrough Camera API for Quest means that developers can now publish apps to the Horizon store that directly access the front-facing cameras of Quest 3 and 3S. This opens the door to third-party applications which can scan the world around the user to understand more about it. For instance, developers could add computer-vision capabilities to track objects or people in the scene, or to build a map of the environment for analysis and interaction.

For a long time this was impossible due to limitations Meta placed on what developers could and couldn’t do with the headset’s hardware. Despite computer-vision capabilities being widely available to developers on smartphones, Meta was hesitant to allow the same on its headsets, apparently due to privacy concerns (and surely amplified by the many privacy controversies the company has faced in the past).

Previously, third-party apps could learn some information about the world around the user—like the shape of the room and objects within it—but this information was provided by the system in a way that prevented apps from directly seeing what the cameras could see. This made it possible for developers to build mixed reality applications that were, to some extent, aware of the space around the user. But it made some use-cases difficult or even impossible; for example, tracking a specific object held by the user.

Last year Meta announced it would finally unlock direct access to the headset’s cameras. In March, it began offering an experimental version of the capability to developers, allowing them to build apps that accessed the headset’s cameras. But they weren’t allowed to publish those apps to the public, until now.

The company has also specified the technical capabilities and performance of the cameras that developers can access on Quest 3 and 3S (a minimal access sketch follows the list):

  • Image capture latency: 40-60ms
  • GPU overhead: ~1-2% per streamed camera
  • Memory overhead: ~45MB
  • Data rate: 30Hz
  • Max resolution: 1280×960
  • Internal data format: YUV420
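Since the Passthrough Camera API surfaces these cameras through Android’s standard Camera2 stack, basic access looks like ordinary Camera2 code. The sketch below is illustrative only (not Meta’s sample code) and assumes the app has already been granted the dedicated headset-camera permission and has a background Handler available:

    import android.content.Context
    import android.graphics.ImageFormat
    import android.hardware.camera2.CameraDevice
    import android.hardware.camera2.CameraManager
    import android.media.ImageReader
    import android.os.Handler

    // Minimal sketch: open a headset camera and prepare a YUV420 reader at the
    // documented 1280x960 / 30Hz ceiling. Permission checks, capture-session
    // setup, and error handling are omitted for brevity.
    fun openPassthroughCamera(context: Context, handler: Handler) {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager

        // On Quest 3/3S the passthrough cameras appear in the regular camera ID
        // list once permission is granted; selection is simplified here.
        val cameraId = manager.cameraIdList.firstOrNull() ?: return

        // YUV_420_888 matches the internal YUV420 format Meta specifies;
        // 1280x960 is the documented maximum resolution.
        val reader = ImageReader.newInstance(1280, 960, ImageFormat.YUV_420_888, 2)
        reader.setOnImageAvailableListener({ r ->
            r.acquireLatestImage()?.use { image ->
                // Hand each frame to a computer-vision pipeline here.
            }
        }, handler)

        manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
            override fun onOpened(camera: CameraDevice) {
                // Create a capture session targeting reader.surface here.
            }
            override fun onDisconnected(camera: CameraDevice) = camera.close()
            override fun onError(camera: CameraDevice, error: Int) = camera.close()
        }, handler)
    }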

Meta says that a developer’s use of camera data on Quest is covered under its Developer Data Use Policy, including a section on “Prohibited Uses of User Data,” which prohibits certain uses of data, including to “perform, facilitate, or provide tools for surveillance,” and “uniquely identifying a device or user, except as permitted [in the policy].”

Filed Under: Meta Quest 3 News & Reviews, News, XR Industry News

Snapchat CEO to Keynote AWE 2025 as Company Aims to Strengthen Its Position in XR

April 22, 2025 From roadtovr

The CEO of Snap Inc, the company behind Snapchat and the Spectacles AR glasses, will take the stage at AWE 2025 in June to highlight the company’s latest developments in AR. The prominent placement on the event’s schedule comes as Snap aims to strengthen its foothold in the XR industry.

Snap may be one of the only companies offering fully standalone AR glasses that you can get today, but the company is still seen as an outsider among the broader XR community.

That’s partly because Snap is approaching its AR ambitions from a different angle than other major players in the space.

Standalone headsets like Quest join the likes of PC VR and PSVR 2 as primarily gaming-focused devices. Then there’s Apple’s Vision Pro, which focuses on entertainment and productivity.

Meanwhile, Snap’s Spectacles are born out of the company’s social-centric approach to AR, which emphasizes both location-based and co-located experiences (meaning experiences which are tied to real-world locations and those which involve multiple users in the same physical space).

Evan Spiegel | Image courtesy Snap Inc

This June, Snap CEO and co-founder Evan Spiegel will take to the main stage at AWE 2025—one of the largest and longest-running XR-focused conferences in the world—in an effort to share the company’s vision for AR and to strengthen bridges into the existing XR industry.

The event is being held in Long Beach, California from June 10th to 12th, and it’s expected to host more than 6,000 attendees, 300 exhibitors, 400 speakers, and a 150,000 sqft expo floor. Early-bird tickets are still available, and Road to VR readers can get an exclusive 20% discount.

Spiegel’s keynote will be flanked by presentations from Qualcomm and XREAL, peers which are well established in the conference and the industry at large.

Ironically, Snap’s commitment to building an AR platform from the ground up is one reason why it has remained something of an outsider in the XR space.

The company isn’t just building its own AR glasses, it’s also building Snap OS, a bespoke operating system for Spectacles. And it has its own authoring tool—Lens Studio—which developers need to learn to build for the headset, rather than using off-the-shelf tools like Unity. The unique approach and device capabilities mean that porting existing XR content isn’t straightforward.

Yet its commitment to building its platform from the ground up shows the company’s authentic belief in the XR space.

Speaking recently to Road to VR, Snap’s VP of Hardware, Scott Myers, said that the company is building Spectacles to be more than just an extension of Snapchat. The company believes glasses like Spectacles will one day replace smartphones altogether. This belief is guiding the standalone nature of Spectacles, which is designed to work without a phone or tethered compute unit.

“We want people to look up [through their glasses], not down [at their smartphone],” Myers said.

Beyond its emphasis on social and location-based AR experiences, Myers said the company is uniquely focused on making its platform the best in the world for developers, by building great tools and iterating aggressively on feedback.

Myers said he personally uses Spectacles “nearly every day” to test new features and experiences. “We’re learning together with developers to make developing [as easy as possible],” he said.

Snap will need to play its cards right to position itself for success in the coming years, as tech giants Meta, Apple, and Google are all vying to be the first to build a pair of mainstream AR glasses.


Road to VR is proud to be the Premier Media Partner of AWE USA 2025, allowing us to offer readers an exclusive 20% discount on tickets to the event.

Filed Under: News, XR Industry News

Researchers Catalog 170+ Text Input Techniques to Improve Typing in XR

April 8, 2025 From roadtovr

Efficient text entry without an actual keyboard remains an industry-wide challenge for unlocking productivity use-cases in XR headsets. Researchers have created a comprehensive catalog of existing text entry techniques to codify different methods and analyze their pros and cons. By making the catalog freely available, the researchers hope to give others a head start on creating new and improved techniques.

Guest Article by Max Di Luca

Massimiliano Di Luca leads the VR Lab at the University of Birmingham, UK, where he is an Associate Professor in the School of Psychology and in the School of Computer Science. He previously worked at Meta, where he pioneered work on hand inputs and haptics for VR. His latest industry collaboration was recently recognized by the ACM SIGCHI 2025 awards for pioneering the interaction framework of Android XR through exemplary industry-academia collaboration, establishing foundational input methods and interaction guidelines for XR operating systems.

As immersive experiences become increasingly sophisticated, the challenge of efficient text entry remains a crucial barrier to seamless interaction in virtual and augmented reality (VR/AR). From composing emails in virtual workspaces to logging in and socializing in the metaverse, the ability to input text efficiently is essential to the usability of all applications in extended reality (XR).

To address this challenge, my team at the VR Lab at the University of Birmingham (UK), along with researchers from the University of Copenhagen, Arizona State University, the Max Planck Institute for Intelligent Systems, Northwestern University, and Google, developed the XR TEXT Trove—a comprehensive research initiative cataloging over 170 text entry techniques tailored for XR. The TEXT Trove is a structured repository of text entry techniques, with a series of filters for surfacing the pros and cons of the breadth of text input methods developed for XR in both academia and industry.

These techniques are categorised using a range of 32 codes, including 13 interaction attributes such as Input Device, Body Part (for input), Concurrency, and Haptic Feedback Modality, as well as 14 performance metrics like Words Per Minute (WPM) and Total Error Rate (TER). All in all, the number of techniques and the extensiveness of the attributes provide a comprehensive overview of the state of XR text entry techniques.
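To make attribute-coded filtering concrete, here is a hypothetical sketch of how such a record might be modeled and queried; the field names are illustrative only and do not reflect the Trove’s actual schema:

    // Hypothetical record for one cataloged technique (illustrative fields,
    // not the XR TEXT Trove's real schema).
    data class TextEntryTechnique(
        val name: String,
        val inputDevice: String,     // e.g. "bare hands", "controllers"
        val bodyPart: String,        // e.g. "fingers", "head", "eyes"
        val hapticFeedback: Boolean,
        val wordsPerMinute: Double,  // reported WPM
        val totalErrorRate: Double   // reported TER, in percent
    )

    // Example query: finger-driven techniques above 20 WPM with under 5% TER,
    // fastest first.
    fun fastFingerTechniques(catalog: List<TextEntryTechnique>) =
        catalog.filter {
            it.bodyPart == "fingers" && it.wordsPerMinute > 20.0 && it.totalErrorRate < 5.0
        }.sortedByDescending { it.wordsPerMinute }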

Several key takeaways emerge from our research. First and foremost, text input performance is inherently limited by the number of inputting elements (whether fingers, controllers, or other character selectors). Only multi-finger typing can lead to performance comparable to touch-typing speed with a keyboard on regular PCs. As visualized in the plot below, each additional input element (or finger) adds about 5 WPM for top users.

Words per minute by number of fingers and input device (each dot represents one technique analyzed in the study).
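Read as a linear trend, that finding amounts to roughly the following, where n is the number of input elements; this is an informal reading of the plotted data, not a model fitted in the paper:

    WPM(n) ≈ WPM(1) + 5 · (n − 1)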

Our research also indicates that haptic feedback, the presence of external surfaces, and fingertip-only visualization are preferable ways to improve typing performance. For instance, typing on surfaces (instead of in mid-air) contributes to a more comfortable and potentially more efficient typing experience. External surfaces also minimize sustained muscle strain, making interactions more comfortable and reducing the onset of Gorilla Arm Syndrome.

Finally, and most interestingly, no alternative has yet fully replaced the keyboard format, probably because it still delivers the highest words per minute, and perhaps because alternatives demand steep learning curves. We believe the main path to typing faster in VR than on PC may lie in reducing travel distances on a multi-finger keyboard via machine learning and AI. XR needs its own ‘swipe typing’ moment, like the one that made one-finger typing on smartphones much more efficient.

In that regard, the deep dive from the XR Text Trove represents a significant step towards a more comprehensive understanding of text input in virtual and augmented reality. By providing a structured and searchable database, we aimed to offer a resource for researchers and developers alike, paving the way for more efficient and user-friendly text entry solutions in the immersive future.

As we explain in our paper, this work has the potential to significantly benefit the XR community: “To support XR research and design in this area, we make the database and the associated tool available on the XR TEXT Trove website.” The full paper will be presented at the prestigious ACM CHI conference next month in Yokohama, Japan.

Several authors in our team are co-creators of the Locomotion Vault, which similarly catalogs VR locomotion techniques in an effort to give researchers and designers a head-start on identifying and improving various methods.

Filed Under: Guest Articles, News, XR Industry News

Meta’s Next-gen Smart Glasses Reportedly Set to Include a Display & Wrist-worn XR Controller

April 2, 2025 From roadtovr

Meta is reportedly working on a version of its Ray-Ban smart glasses which will include a single display for viewing photos and apps. Now, according to a new Bloomberg report from Mark Gurman, the company is aiming to introduce it sometime later this year alongside its wrist-worn XR controller for hand-gesture input.

As per a previous Bloomberg report from January, the device is allegedly codenamed ‘Hypernova’. Citing Meta employees, that report said the device could cost between $1,000 and $1,400, although the final price likely still hasn’t been decided.

The price increase over the company’s $300 Ray-Ban Meta Glasses, which don’t include displays of any sort, is reportedly driven by the inclusion of a single display visible in the lower-right quadrant of the right lens.

Unlike augmented reality glasses, which correctly place digital images in the user’s field-of-view, the device as described would be closer to Google Glass in function. Find out more about the differences between Smart Glasses and AR Glasses in our full explainer.

Bloomberg’s latest report maintains Hypernova will include dedicated apps for taking pictures, viewing photos, and accessing maps, as well as notifications from phone apps such as Messenger and WhatsApp.

Ray-Ban Meta Glasses, Image courtesy Meta, EssilorLuxottica

It’s said Hypernova will rely “heavily on the Meta View phone app,” and may not include its own on-board app store despite running a customized version of Android—suggesting it’s more akin to a smartphone peripheral than a standalone platform.

It is however said to include many of the same features of Ray-Ban Meta Glasses, such as capturing images and video, accessing AI via built-in microphones and pairing with a phone for calls and music playback.

Additionally, Hypernova is said to be getting a spec bump in the camera department. The latest Ray-Ban Meta Glasses come with a 12-megapixel camera, similar in quality to the iPhone 11 (2019); Hypernova is meant to “rival the iPhone 13 from 2021,” according to people familiar with the matter.

Like the company’s display-less Ray-Ban Glasses, the report maintains users can control Hypernova using capacitive touch controls located on the temples, allowing them to scroll through media.

Wrist-worn XR Controller seen with Orion | Image courtesy Meta

It seems, however, Meta is looking to finally productize its wrist-worn XR controller, which uses electromyography (EMG) sensors to detect things such as pinching and hand rotation for UI selection. Hypernova is said to come bundled in the box with the wrist-worn controller, which we’ve also seen in action with the company’s internal Orion AR glasses.

Bloomberg reports a second-gen ‘Hypernova 2’ is already in the works, said to include a binocular heads-up display system (again, smart glasses, not AR); people familiar with the matter say it’s currently planned for release in 2027.

Granted, anything could happen. Meta regularly shelves products late in development, such as an allegedly canceled camera-less variant of the device—a version that targeted lower cost and increased user privacy.

Still, Hypernova likely won’t be the next smart glasses device Meta releases. The report maintains Meta is finalizing ‘Supernova 2’, which functions like Ray-Ban Meta Glasses but is housed in a sportier Oakley design.

All of this is leading up to the release of Meta’s first true AR glasses. The company revealed its internal developer kit Orion in late 2024; Meta CTO and Reality Labs chief Andrew Bosworth has said an AR device based on their work with Orion could come before 2030, priced “at least in the space of phone, laptop territory.”

Filed Under: Meta Quest 3 News & Reviews, News, XR Industry News
