
Hands-on: Meta Ray-Ban Display Glasses & Neural Band Offer a Glimpse of Future AR Glasses

September 18, 2025 | From Road to VR

The newly announced Meta Ray-Ban Display glasses, and the ‘Neural Band’ input device that comes with them, are still far from proper augmented reality. But Meta has made several clever design choices that will pay dividends once their true AR glasses are ready for the masses.

The Ray-Ban Display glasses are a new category for Meta. Previous products communicated to the user purely through audio. Now, a small, static monocular display adds quite a bit of functionality to the glasses. Check out the full announcement of the Meta Ray-Ban Display glasses here for all the details, and read on for my hands-on impressions of the device.

A Small Display is a Big Improvement

Meta Ray-Ban Display Glasses | Image courtesy Meta

A 20° monocular display isn’t remotely sufficient for proper AR (where virtual content floats in the world around you), but it adds a lot of new functionality to Meta’s smart glasses.

For instance, imagine you want to ask Meta AI for a recipe for teriyaki chicken. On the non-display models, you could definitely ask the question and get a response. But after the AI reads it out to you, how do you continue to reference the recipe? Well, you could either keep asking the glasses over and over, or you could pull your phone out of your pocket and use the Meta AI companion app (at which point, why not just pull the recipe up on your phone in the first place?).

Now with the Meta Ray-Ban Display glasses, you can actually see the recipe instructions as text in a small heads-up display, and glance at them whenever you need.

In the same way, almost everything you could previously do with the non-display Meta Ray-Ban glasses is enhanced by having a display.

Now you can see a whole thread of messages instead of just hearing one read through your ear. And when you reply you can actually read the input as it appears in real-time to make sure it’s correct instead of needing to simply hear it played back to you.

When capturing photos and videos you now see a real-time viewfinder to ensure you’re framing the scene exactly as you want it. Want to check your texts without needing to talk out loud to your glasses? Easy peasy.

And the real-time translation feature becomes more useful too. In current Meta glasses you have to listen to two overlapping audio streams at once. The first is the voice of the speaker and the second is the voice in your ear translating into your language, which can make it harder to focus on the translation. With the Ray-Ban Display glasses, now the translation can appear as a stream of text, which is much easier to process while listening to the person speaking in the background.

It should be noted that Meta has designed the screen in the Ray-Ban Display glasses to be off most of the time. The screen is set off and to the right of your central vision, making it more of a glanceable display than something that’s right in the middle of your field-of-view. At any time you can turn the display on or off with a double-tap of your thumb and middle finger.

Technically, the display is a 0.36MP (600 × 600) full-color LCoS display with a reflective waveguide. Even though the resolution is “low,” it’s plenty sharp across the small 20° field-of-view. Because it’s monocular, only one eye can see the image, which gives it a somewhat ghostly look. This doesn’t hamper the functionality of the glasses, but aesthetically it’s not ideal.
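
To put that resolution in perspective, here’s a quick back-of-the-envelope sketch (my own math, not a figure from Meta) of the panel’s angular resolution, assuming the quoted 20° spans the full width of the square image. It works out to roughly 30 pixels per degree, which helps explain why text reads sharply despite the modest pixel count.

```python
# Back-of-the-envelope angular resolution for the Ray-Ban Display panel.
# Assumption: the quoted 20-degree field of view spans the full width of
# the square 600 x 600 image (Meta hasn't said exactly how it's measured).

width_px, height_px = 600, 600
fov_deg = 20

megapixels = width_px * height_px / 1e6      # ~0.36 MP
pixels_per_degree = width_px / fov_deg       # ~30 PPD

print(f"{megapixels:.2f} MP, ~{pixels_per_degree:.0f} pixels per degree")
```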

Meta hasn’t said if they designed the waveguide in-house or are working with a partner. I suspect the latter, and if I had to guess, Lumus would be the likely supplier. Meta says the display can output up to 5,000 nits brightness, which is enough to make the display readily usable even in full daylight (the included Transitions also help).

From the outside, the waveguide is hardly visible in the lens. The most prominent feature is some small diagonal markings toward the temple side of the glasses.

Photo by Road to VR

Meanwhile, the final output gratings are very transparent. Even when the display is turned on, it’s nearly impossible to see a glint from the display in a normally lit room. Meta said the outward light-leakage is around 2%, which I am very impressed by.

 The waveguide is extremely subtle within the lens | Photo by Road to VR

Aside from the glasses being a little chonkier than normal glasses, the social acceptability here is very high—even more so because you don’t need to constantly talk to the glasses to use them, or even hold your hand up to tap the temple. Instead, the so-called Neural Band, based on EMG sensing, allows you to make subtle inputs while your hand is down at your side.

The Neural Band is an Essential Piece to the Input Puzzle

Photo by Road to VR

The included Neural Band is just as important to these new glasses as the display itself—and it’s clear that this will be equally important to future AR glasses.

To date, controlling XR devices has been done with controllers, hand-tracking, or voice input. All of these have their pros and cons, but none are particularly fitting for glasses that you’d wear around in public; controllers are too cumbersome, hand-tracking requires line of sight which means you need to hold your hands awkwardly out in front of you, and voice is problematic both for privacy and certain social settings where talking isn’t appropriate.

The Neural Band, on the other hand, feels like the perfect input device for all-day wearable glasses. Because it’s detecting muscle activity (instead of visually looking for your fingers) no line-of-sight is needed. You can have your arm completely to your side (or even behind your back) and you’ll still be able to control the content on the display.

The Neural Band offers several ways to navigate the UI of the Ray-Ban Display glasses. You can pinch your thumb and index finger together to ‘select’; pinch your thumb and middle finger to ‘go back’; and swipe your thumb across the side of your finger to make up, down, left, and right selections. There are a few other inputs too, like double-tapping fingers or pinching and rotating your hand.
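
Meta hasn’t published any developer API for the Neural Band, so purely as an illustration, the hypothetical sketch below restates that gesture vocabulary as the kind of gesture-to-action table an app-side event handler might use. The names and structure are invented for clarity; nothing here comes from Meta.

```python
# Hypothetical illustration only: there is no public Neural Band API.
# This simply restates the gesture vocabulary described above as a lookup
# table mapping detected gestures to UI navigation actions.

from enum import Enum, auto

class Gesture(Enum):
    THUMB_INDEX_PINCH = auto()    # described as 'select'
    THUMB_MIDDLE_PINCH = auto()   # described as 'go back'
    THUMB_SWIPE_UP = auto()
    THUMB_SWIPE_DOWN = auto()
    THUMB_SWIPE_LEFT = auto()
    THUMB_SWIPE_RIGHT = auto()

UI_ACTIONS = {
    Gesture.THUMB_INDEX_PINCH: "select",
    Gesture.THUMB_MIDDLE_PINCH: "back",
    Gesture.THUMB_SWIPE_UP: "focus_up",
    Gesture.THUMB_SWIPE_DOWN: "focus_down",
    Gesture.THUMB_SWIPE_LEFT: "focus_left",
    Gesture.THUMB_SWIPE_RIGHT: "focus_right",
}

def handle(gesture: Gesture) -> str:
    """Return the UI action for a detected gesture (hypothetical)."""
    return UI_ACTIONS.get(gesture, "ignore")
```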

As of now, you navigate the Ray-Ban Display glasses mostly by swiping around the interface and selecting. In the future, having eye-tracking on-board will make navigation even more seamless, by allowing you to simply look and pinch to select what you want. The look-and-pinch method, combined with eye-tracking, already works great on Vision Pro. But it still misses your pinches sometimes if your hand isn’t in the right spot, because the cameras can’t always see your hands at quite the right angle. If I could use the Neural Band for pinch detection on Vision Pro, I absolutely would—that’s how well it seems to work already.

While it’s easy enough to swipe and select your way around the Ray-Ban Display interface, the Neural Band shares the same weak point as all of the aforementioned input methods: text input. But maybe not for long.

In my hands-on with the Ray-Ban Display, the device was still limited to dictation input. So replying to a message or searching for a point of interest still means talking out loud to the glasses.

However, Meta showed me a demo (that I didn’t get to try myself) of being able to ‘write’ using your finger against a surface like a table or your leg. It’s not going to be nearly as fast as a keyboard (or dictation, for that matter), but private text input is an important feature. After all, if you’re out in public, you probably don’t want to be speaking all of your message replies out loud.

The ‘writing’ input method is said to be a forthcoming feature, though I didn’t catch whether they expected it to be available at launch or sometime after.

On the whole, the Neural Band seems like a real win for Meta, not just for making the Ray-Ban Display more useful, but as what looks like the ideal input method for future glasses with full AR capabilities.

Photo by Road to VR

And it’s easy to see a future where the Neural Band becomes even more useful by evolving to include smartwatch and fitness tracking functions. I already wear a smartwatch most of the day anyway… making it my input device for a pair of smart glasses (or AR glasses in the future) is a smart approach.

Little Details Add Up

One thing I was not expecting to be impressed by was the charging case of the Ray-Ban Display glasses. Compared to the bulky charging cases of all of Meta’s other smart glasses, this clever origami-like case folds down flat to take up less space when you aren’t using it. It goes from being big enough to accommodate a charging battery and the glasses themselves, down to something that can easily go in a back pocket or slide into a small pocket in a bag.

This might not seem directly relevant to augmented reality, but it’s actually more important than you might think. It’s not like Meta invented a folding glasses case, but it shows that the company is really thinking about how this kind of device will fit into people’s lives. An analog to this for their MR headsets would be including a charging dock with every headset—something they’ve yet to do.

Now with a display on-board, Meta is also repurposing the real-time translation feature as a sort of ‘closed captioning’. Instead of translating to another language, you can turn on the feature and see a real-time text stream of what the person in front of you is saying, even if they’re already speaking your native language. That’s an awesome capability for those who are hard of hearing.

Live Captions in Meta Ray-Ban Display Glasses | Image courtesy Meta

And even for those who aren’t, it might still be useful… Meta says the beam-forming microphones in the Ray-Ban Display can focus on the person you’re looking at while ignoring other nearby voices. They showed me a demo of this in action in a room with one person speaking to me and three others having a conversation nearby to my left. It worked relatively well, but it remains to be seen if it will work in louder environments like a noisy restaurant or a club with thumping music.

Meta wants to eventually pack full AR capabilities into glasses of a similar size. And even if they aren’t there yet, getting something out the door like the Ray-Ban Display gives them the opportunity to explore, iterate—and hopefully perfect—many of the key ‘lifestyle’ factors that need to be in place for AR glasses to really take off.


Disclosure: Meta covered lodging for one Road to VR correspondent to attend an event where information for this article was gathered.

Filed Under: Feature, News, Smart Glasses, XR Industry News

Hands-on: Samsung’s Android XR Headset is a Curious Combo of Quest & Vision Pro, With One Stand-out Advantage

December 12, 2024 | From Road to VR

Samsung is the first partner to formally announce a new MR headset based on the newly announced Android XR. The device, codenamed “Project Moohan,” is planned for consumer release in 2025. We went hands-on with an early version.

Note: Samsung and Google aren’t yet sharing any key details for this headset like resolution, weight, field-of-view, or price. During my demo I also wasn’t allowed to capture photos or videos, so we only have an official image for the time being.

If I told you that Project Moohan felt like a mashup between Quest and Vision Pro, you’d probably get the idea that it has a lot of overlapping capabilities. But I’m not just making a rough analogy. Just looking at the headset, it’s clear that it has taken significant design cues from Vision Pro. Everything from the colors to the button placement to the calibration steps makes it unmistakable that Samsung was keenly aware of other products on the market.

And then on the software side, if I had told you “please make an OS that mashes together Horizon OS and VisionOS,” and you came back to me with Android XR, I’d say you nailed the assignment.

It’s actually uncanny just how much Project Moohan and Android XR feel like a riff on the two other biggest headset platforms.

But this isn’t a post to say someone stole something from someone else. Tech companies are always borrowing good ideas and good designs from each other—sometimes improving them along the way. So as long as Android XR and Project Moohan got the good parts of others, and avoided the bad parts, that’s a win for developers and users.

And many of the good parts do indeed appear to be there.

Hands-on With Samsung Project Moohan Android XR Headset

Image courtesy Google

Starting from the Project Moohan hardware—it’s a good-looking device, no doubt. It definitely has the ‘goggles’-style look of Vision Pro, as well as a tethered battery pack (not pictured above).

But where Vision Pro has a soft strap (which I find rather uncomfortable without a third-party upgrade), Samsung’s headset has a rigid strap with a tightening dial, and an overall ergonomic design that’s pretty close to Quest Pro. That means an open-peripheral design, which is great for using the headset for AR. Also like Quest Pro, the headset has some magnetic snap-on blinders for those who want a blocked-out periphery for fully immersive experiences.

And though the goggles-look and even many of the button placements (and shapes) are strikingly similar to Vision Pro, Project Moohan doesn’t have an external display to show the user’s eyes. Vision Pro’s external ‘EyeSight’ display has been criticized by many, but I maintain it’s a desirable feature, and one that I wish Project Moohan had. Coming from Vision Pro, it’s just kind of awkward to not be able to ‘see’ the person wearing the headset, even though they can see you.

Samsung has been tight-lipped about the headset’s tech details, insisting that it’s still a prototype. However, we have learned the headset is running a Snapdragon XR2+ Gen 2 processor, a more powerful version of the chip in Quest 3 and Quest 3S.

In my hands-on I was able to glean a few details. For one, the headset is using pancake lenses with automatic IPD adjustment (thanks to integrated eye-tracking). The field-of-view feels smaller than Quest 3 or Vision Pro, but before I say that definitively, I first need to try different forehead pad options (confirmed to be included) which may be able to move my eyes closer to the lenses for a wider field-of-view.

From what I got to try, however, the field-of-view did feel smaller—albeit still large enough to feel immersive—and so did the sweet spot, due to brightness fall-off toward the outer edges of the display. Again, this is something that may improve if the lenses were closer to my eyes, but the vibe I got for now is that, from a lens standpoint, Meta’s Quest 3 is still leading, followed by Vision Pro, with Project Moohan a bit behind.

Although Samsung has confirmed that Project Moohan will have its own controllers, I didn’t get to see or try them yet. I was told they haven’t decided if the controllers will ship with the headset by default or be sold separately.

So it was all hand-tracking and eye-tracking input in my time with the headset. Again, this was a surprisingly similar mashup of both Horizon OS and VisionOS. You can use raycast cursors like Horizon OS or you can use eye+pinch inputs like VisionOS. The Samsung headset also includes downward-facing cameras so pinches can be detected when your hands are comfortably in your lap.

When I actually got to put the headset on, the first thing I noticed was how sharp my hands appeared to be. From memory, the headset’s passthrough cameras appear to have a sharper image than Quest 3 and less motion blur than Vision Pro (but I only got to test in excellent lighting conditions). Considering that my hands seemed sharp but things farther away seemed less so, it almost felt like the passthrough cameras might be focused at roughly arm’s-length distance.


Inside Android XR

Anyway, onto Android XR. As said, it’s immediately comparable to a mashup of Horizon OS and VisionOS. You’ll see the same kind of ‘home screen’ as Vision Pro, with app icons on a transparent background. Look and pinch to select one and you get a floating panel (or a few) containing the app. It’s even the same gesture to open the home screen (look at your palm and pinch).

The system windows themselves look closer to those of Horizon OS than VisionOS, with mostly opaque backgrounds and the ability to move the window anywhere by reaching for an invisible frame that wraps around the entire panel.

In addition to flat apps, Android XR can do fully immersive stuff too. I got to see a VR version of Google Maps which felt very similar to Google Earth VR, allowing me to pick anywhere on the globe to visit, including the ability to see locations like major cities modeled in 3D, Street View imagery, and, newly, volumetric captures of interior spaces.

While Street View is monoscopic 360 imagery, the volumetric captures are rendered in real-time and fully explorable. Google said this was a Gaussian splatting solution, though I’m not clear on whether it was generated from existing interior photography that’s already available on standard Google Maps, or if it required a brand new scan. It wasn’t nearly as sharp as you’d expect from a photogrammetry scan, but not bad either. Google said the capture was running on-device and not streamed, and that sharpness is expected to improve over time.

Google Photos has also been updated for Android XR, including the ability to automatically convert any existing 2D photo or video from your library into 3D. In the brief time I had with it, the conversions looked really impressive; similar in quality to the same feature on Vision Pro.

YouTube is another app Google has updated to take full advantage of Android XR. In addition to watching regular flatscreen content on a large, curved display, you can also watch the platform’s existing library of 180, 360, and 3D content. Not all of it is super high quality, but it’s nice that it’s not being forgotten—and will surely be added to as more headsets are able to view this kind of media.

Google also showed me a YouTube video that was originally shot in 2D but automatically converted to 3D to be viewed on the headset. It looked pretty good, seemingly similar in quality to the Google Photos 3D conversion tech. It wasn’t made clear whether this is something that YouTube creators would need to opt in to have generated, or something YouTube would just do automatically. I’m sure there are more details to come.

The Stand-out Advantage (for now)

Android XR and Project Moohan, both from a hardware and software standpoint, feel very much like a Google-fied version of what’s already on the market. But what it clearly does better than any other headset right now is conversational AI.

Google’s AI agent, Gemini (specifically the ‘Project Astra’ variant) can be triggered right from the home screen. Not only can it hear you, but it can see what you see in both the real world and the virtual world—continuously. Its ongoing perception of what you’re saying and what you’re seeing makes it feel smarter, better integrated, and more conversational than the AI agents on contemporary headsets.

Yes, Vision Pro has Siri, but Siri can only hear you and is mostly focused on single-tasks rather than an ongoing conversation.

And Quest has an experimental Meta AI agent that can hear you and see what you’re seeing—but only the real world. It has no sense of what virtual content is in front of you, which creates a weird disconnect. Meta says this will change eventually, but for now that’s how it works. And in order to ‘see’ things, you have to ask it a question about your environment and then stand still while it makes a ‘shutter’ sound, then starts thinking about that image.

Gemini, on the other hand, gets something closer to a low-framerate video feed of what you’re seeing in both the real and virtual worlds, which means no awkward pauses to make sure you’re looking directly at the thing you asked about as a single picture is taken.

Gemini on Android XR also has a memory about it, which gives it a boost when it comes to contextual understanding. Google says it has a rolling 10-minute memory and retains “key details of past conversations,” which means you can refer not only to things you talked about recently, but also things you saw.

I was shown what is by now becoming a common AI demo: you’re in a room filled with stuff and you can ask questions about it. I tried to trip the system up with a few sly questions, and was impressed at its ability to avoid the diversions.

I used Gemini on Android XR to translate a sign written in Spanish into English. It quickly gave me the translation. Then I asked it to translate another nearby sign into French—knowing full well that this sign was already in French. Gemini had no problem with this, correctly noting, “this sign is already in French, it says [xyz],” and it even read out the French words with a French accent.

I moved on to asking about some other objects in the room, and after it had been a few minutes since asking about the signs, I asked it “what did that sign say earlier?” It knew what I was talking about and read the French sign aloud. Then I said, “what about the one before that?”…

A few years ago this question—”what about the one before that?”—would have been a wildly challenging question for any AI system (and it still is for many). Answering it correctly requires multiple levels of context from our conversation up to that point, and an understanding of how the thing I had just asked about relates to another thing we had talked about previously.

But it knew exactly what I meant, and quickly read the Spanish sign back to me. Impressive.

Gemini on Android XR can also do more than just answer general questions. It remains to be seen how deep this will be at launch, but Google showed me a few ways that Gemini can actually control the headset.

For one, asking it to “take me to the Eiffel tower,” pulls up an immersive Google Maps view so I can see it in 3D. And since it can see virtual content as well as real, I can continue having a fairly natural conversation, with questions like “how tall is it?” or “when was it built?”

Gemini can also fetch specific YouTube videos that it thinks are the right answer to your query. So saying something like “show a video of the view from the ground,” while looking at the virtual Eiffel Tower, will pop up a YouTube video to show what you asked for.

Ostensibly, Gemini on Android XR should also be able to do the usual assistant stuff that most phone AI can do (i.e. send text messages, compose an email, set reminders), but it will be interesting to see how deep it will go with XR-specific capabilities.

Gemini on Android XR feels like the best version of an AI agent on a headset yet (including what Meta has right now on their Ray-Ban smartglasses) but Apple and Meta are undoubtedly working toward similar capabilities. How long Google can maintain the lead here remains to be seen.

Gemini on Project Moohan feels like a nice value-add when using the headset for spatial productivity purposes, but its true destiny probably lies on smaller, everyday wearable smartglasses, which I also got to try… but more on that in another article.

Filed Under: Feature, hardware preview, News, XR Industry News

Hands-on: Shiftall MeganeX Superlight Packs a Wishlist of Ergonomics Into a Tiny Package

November 1, 2024 | From Road to VR

Japan-based Shiftall is the latest company making an effort to deliver an ultra-compact VR headset for enthusiasts who are willing to spend big on maximizing their PC VR experience. Despite the tiny package, the MeganeX Superlight headset still manages to deliver the optical adjustments that should be standard for every headset. Though undoubtedly expensive, the headset overall is promising, provided the company can finalize a few tweaks before crossing the finish line.

Available for pre-order in Japan, the United States, the EU, and the UK, the $1,900 MeganeX Superlight from Shiftall is purportedly set to start shipping between February and March of next year. You can check out the full breakdown of specs here.

This is a tethered headset designed for the SteamVR ecosystem. Shiftall is selling the headset by itself, which means you’ll need to bring your own SteamVR Tracking beacons and controllers—or drop another $580 to buy them new.

This week I got to check out a prototype version of the MeganeX Superlight headset and found it to be a promising piece of hardware that’s certain to be held back by its steep price.

Photo by Road to VR

Shiftall CEO Takuma Iwasa told me the headset is primarily targeted toward hardcore VR users, especially those spending long stretches in VRChat. Considering his own claim of more than 3,000 hours in VRChat, it’s clear he has a real understanding of the needs of this kind of customer.

That’s what led the company to try building a compact PC VR headset: Iwasa wants to deliver something that’s lightweight and comfortable for long sessions.

A big part of a VR headset being comfortable is about being able to adjust it to fit each individual. Getting the headset’s lenses into the ideal position for your eyes is crucial to maximizing visual quality and comfort.

To that end, I was happy to see the MeganeX Superlight includes a list of optical adjustments that I’ve long wished was standard on every headset: IPD, eye-relief, diopter, a flip-up visor, and even a lens angle adjustment.

Photo by Road to VR

IPD (or interpupillary distance) adjustment is standard on most headsets; it’s the distance between the lenses. Matching the distance between the lenses to the distance between your eyes is important for making it easy for your eyes to fuse the stereoscopic image, and for getting your eyes into the ‘sweet spot’ of the lens (the optical center, where the lens is sharpest).

On the MeganeX Superlight, IPD is set by entering your IPD measurement into the software on your computer, causing the headset’s motorized lenses to move into the desired position.

Eye-relief adjustment is less common on VR headsets. This is the distance from the lens to your eye. Not only is this important for maximizing field-of-view, it’s also important for dialing in the ‘sweet spot’ of the lens. That’s because the sweet spot isn’t just a plane; it’s a volume (technically speaking, this is often called the ‘eye-box’).

On the MeganeX Superlight, the mount that connects the headset to the headstrap makes it easy to adjust eye-relief: pinching a pair of pads lets you freely slide the headset closer to or farther from your eyes.

Diopter is even rarer than eye-relief. This setting changes the focus of the lens to account for a person’s vision correction needs. Rather than wearing glasses, users can dial in their diopter to enjoy a sharp view.

Photo by Road to VR

On Shiftall’s headset, there’s a small dial near the side of each lens which is used to adjust the diopter for each eye. Although this is a manual process (i.e. you can’t just enter a value and have the headset set it automatically), Shiftall tells me that part of the headset’s setup process will include a calibration screen to make this process easier.

While a growing number of headsets include decent passthrough views via external cameras, if the goal is to simply look outside of your headset, it’s hard to beat your very own eyes. To that end the MeganeX Superlight has a little plunger on the headstrap mount that makes it quick and easy to flip up the visor for a glimpse of the outside world, and to flip it back down when you’re done.

And last but not least—something I’ve seen on only one other company’s headsets—is an independent lens angle adjustment.

Many VR headsets have a pivot at the point where their headstrap connects to the headset, but the angle is entirely at the mercy of how the facepad rests on the user’s face.

On the other hand, because the MeganeX Superlight headset essentially hangs down from your forehead, a small dial on the side of the mount allows you to independently adjust the angle of the headset (and thus the lenses) regardless of how the headstrap is resting on your head.

Taken all together, these adjustments make it easier for a wider range of people to get the best and most comfortable visual experience from the headset.

And if you’re planning to pay nearly $2,000 for a headset that’s not only compact, but also includes a whopping 13.6MP (3,552 × 3,840) micro-OLED display per-eye, you’re definitely going to want it to have the adjustments necessary to give you the best visuals it can.

The MeganeX Superlight’s displays are incredibly crisp, to the point that there are simply no visible pixels, sub-pixels, or even a hint of screen-door effect that I could see in my time with the headset. The virtual world not only looks completely sharp and solid thanks to all of those pixels, it also looks very vivid thanks to the rich colors and deep blacks shown by the 10-bit display.

While I need more time with the headset to be sure, my initial impression from memory was that the MeganeX Superlight felt like it had a slightly larger field-of-view, slightly larger sweet spot, and less glare compared to Bigscreen Beyond (its nearest competitor).

From a resolution standpoint, there are so few examples of VR content that actually have the underlying graphical fidelity to show a meaningful difference—between Bigscreen Beyond’s impressive 6.5MP (2,560 × 2,560) per-eye resolution and the MeganeX Superlight’s even more impressive 13.6MP (3,552 × 3,840) per-eye resolution—that the improvement wasn’t obviously noticeable.

But it stands to reason that the MeganeX Superlight should be the superior headset in cases where high resolving power is most important, like in flight simulators where long sightlines to distant objects are common, and for virtual desktops where resolving fine text is crucial. I’m especially interested to try the MeganeX Superlight for the latter.

While greater resolving power is always a plus, there’s no question that if you want to run VR content anywhere near the headset’s native resolution, you’re going to need to pair it with a top-tier PC.

At the headset’s native 13.6MP per-eye resolution and 90Hz refresh rate, your computer will need to pump out an absurd 2.5 gigapixels per second (assuming naive stereoscopic rendering). [Note: Shiftall says the MeganeX Superlight only works with modern NVIDIA GPUs. AMD is not supported at present.]
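
For the curious, the sketch below shows the arithmetic behind that figure, using the per-eye resolution and refresh rate quoted above and assuming naive stereo rendering (both eyes rendered at full native resolution, with no supersampling or reprojection overhead).

```python
# Rough pixel-throughput math behind the "~2.5 gigapixels per second" figure.
# Assumes naive stereo rendering: both eyes at full native resolution, 90 Hz.

width_px, height_px = 3552, 3840   # MeganeX Superlight per-eye resolution
eyes = 2
refresh_hz = 90

pixels_per_second = width_px * height_px * eyes * refresh_hz
print(f"{pixels_per_second / 1e9:.2f} gigapixels per second")  # ~2.46
```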

If you don’t already have (or aren’t planning to buy) an NVIDIA 3080, 4080, or better, it’s hard to make a case for paying $1,900 for the extra pixels on MeganeX Superlight over the $1,000 Bigscreen Beyond (assuming both headsets were otherwise equal).

Photo by Road to VR

While I was impressed with the array of optical adjustments, stunning resolution, and vibrant colors of the MeganeX Superlight, I have the same reservation about the headset that I did with Bigscreen Beyond: the lack of built-in audio is a big oversight. I understand that there are some people out there who are happy to put their own headphones or earbuds on over their headset, but my gut is that most people prefer the convenience of not having to deal with yet another thing to put on.

Bigscreen Beyond has since rectified this issue with an optional headstrap with on-board audio. And making it optional is fine; the people who want it can get it, and those that want to use their own aren’t stuck with it.

Shiftall tells me it’s also planning to build an optional headstrap with on-board audio, but it won’t be available (or probably even announced) before the headset starts shipping early next year. I understand that making and launching hardware is extremely difficult, but it’s a real shame to not have an audio headstrap available at launch.

Another issue I saw during my time with the headset is some pupil-swim in the lenses. That means when your eyes move in smooth pursuit (as opposed to saccading) across the lens, the scene seems to warp in an uncomfortable way.

This is typically an issue with poor lens calibration, and it isn’t uncommon with prototype headsets which aren’t being made with final tooling or calibration processes.

While there’s no reason to think the company can’t dial in its lens calibration before launch, getting it right is very important. So it’s something I’ll definitely want to get another look at closer to the headset’s release.

Assuming Shiftall manages to improve the pupil-swim—as it says it expects to—the company is on track to deliver a pretty impressive headset. The only major issues are that of cost and the lack of on-board audio. Those two factors ensure that the MeganeX Superlight will remain a niche headset. But if the company can find a clutch of users that want what it’s offering, it will have further proven out the existence of a hardcore PC VR crowd that’s willing to spend big to maximize their VR experience.

Filed Under: Feature, hardware preview, News, PC VR News & Reviews, XR Industry News
