
VRSUN

Hot Virtual Reality News



VR Research

Meta Reveals VR Headset Prototypes Designed to Make VR ‘Indistinguishable From Reality’

June 20, 2022 From roadtovr

Meta says its ultimate goal with its VR hardware is to make a comfortable, compact headset with visual fidelity that’s ‘indistinguishable from reality’. Today the company revealed its latest VR headset prototypes which it says represent steps toward that goal.

Meta has made no secret that it’s pouring tens of billions of dollars into its XR efforts, much of which is going to long-term R&D through its Reality Labs Research division. Apparently in an effort to shine a bit of light on what that money is actually accomplishing, the company invited a group of press to sit down for a look at its latest accomplishments in VR hardware R&D.

Reaching the Bar

To start, Meta CEO Mark Zuckerberg spoke alongside Reality Labs Chief Scientist Michael Abrash to explain that the company’s ultimate goal is to build VR hardware that meets all the visual requirements to be accepted as “real” by your visual system.

VR headsets today are impressively immersive, but there’s still no question that what you’re looking at is, well… virtual.

Inside of Meta’s Reality Labs Research division, the company uses the term ‘visual Turing Test’ to represent the bar that needs to be met to convince your visual system that what’s inside the headset is actually real. The term is borrowed from the original Turing Test, which denotes the point at which a human can no longer reliably tell the difference between another human and an artificial intelligence.

In short, Meta says you need a headset that can pass that “visual Turing Test” before your visual system will fully accept what’s inside it as real.

Four Challenges

Zuckerberg and Abrash outlined what they see as four key visual challenges that VR headsets need to solve before the visual Turing Test can be passed: varifocal, distortion, retina resolution, and HDR.

Briefly, here’s what those mean:

  • Varifocal: the ability to focus on arbitrary depths of the virtual scene, supporting both essential focus functions of the eyes (vergence and accommodation)
  • Distortion: lenses inherently distort the light that passes through them, often creating artifacts like color separation and pupil swim that make the existence of the lens obvious.
  • Retina resolution: having enough resolution in the display to meet or exceed the resolving power of the human eye, such that there’s no evidence of underlying pixels
  • HDR: high dynamic range, the range of darkness and brightness that we experience in the real world (which almost no display today can properly emulate).

The Display Systems Research team at Reality Labs has built prototypes that function as proof-of-concepts for potential solutions to these challenges.

Varifocal

Image courtesy Meta

To address varifocal, the team developed a series of prototypes which it called ‘Half Dome’. In that series the company first explored a varifocal design which used a mechanically moving display to change the distance between the display and the lens, thus changing the focal depth of the image. Later the team moved to a solid-state electronic system which resulted in varifocal optics that were significantly more compact, reliable, and silent. We’ve covered the Half Dome prototypes in greater detail here if you want to know more.
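For a sense of the optics involved, here’s a minimal sketch of how sliding a display relative to a simple lens sweeps the virtual image, and thus the focal depth, from arm’s length toward optical infinity. The 50mm focal length and display offsets below are illustrative assumptions, not actual Half Dome specifications.

```python
# A minimal sketch of the optics behind a mechanically varifocal design:
# moving the display relative to a simple lens changes where the virtual
# image appears, and therefore the depth the eye accommodates to.
# All values are assumed for illustration.

def virtual_image_distance_m(focal_length_mm: float, display_distance_mm: float) -> float:
    """Gaussian thin-lens formula: 1/f = 1/d_o + 1/d_i (d_i < 0 => virtual image).
    Returns the apparent distance of the virtual image in metres."""
    d_i = 1.0 / (1.0 / focal_length_mm - 1.0 / display_distance_mm)
    return abs(d_i) / 1000.0  # virtual image forms on the display side of the lens

if __name__ == "__main__":
    f_mm = 50.0  # assumed lens focal length
    # Sliding the display a few millimetres sweeps focus from ~arm's length
    # toward optical infinity, which is the effect a varifocal actuator produces.
    for d_o in (45.0, 47.0, 49.0, 49.9):
        print(f"display at {d_o:4.1f} mm -> image at ~{virtual_image_distance_m(f_mm, d_o):6.2f} m")
```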

Virtual Reality… For Lenses

As for distortion, Abrash explained that experimenting with lens designs and distortion-correction algorithms that are specific to those lens designs is a cumbersome process. Novel lenses can’t be made quickly, he said, and once they are made they still need to be carefully integrated into a headset.

To allow the Display Systems Research team to work more quickly on the issue, the team built a ‘distortion simulator’, which emulates a VR headset using a 3DTV and simulates lenses (and their corresponding distortion-correction algorithms) in software.

Image courtesy Meta

Doing so has allowed the team to iterate on the problem more quickly. The key challenge is to dynamically correct lens distortions as the eye moves, rather than only correcting for what is seen when the eye looks through the very center of the lens.
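To make the idea concrete, here’s a toy sketch of the kind of correction such a simulator lets researchers iterate on: a simple radial (Brown-Conrady-style) pre-distortion whose coefficients change with gaze angle. The model and every coefficient below are illustrative assumptions, not Meta’s actual algorithm.

```python
# Toy illustration of software distortion correction: pre-warp image
# coordinates radially so that the lens's own distortion maps them back
# to the intended positions. Coefficients are made-up values; in a
# gaze-dependent system they would be re-derived per eye pose.

import numpy as np

def radial_predistort(xy: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """xy: (N, 2) normalized image coords centered on the lens axis.
    Returns pre-warped coords using a Brown-Conrady-style radial model."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return xy * scale

def correction_for_gaze(gaze_angle_deg: float) -> tuple[float, float]:
    """Hypothetical lookup: distortion coefficients vary with where the eye
    looks, which is why correcting only for an on-axis gaze is not enough."""
    k1 = -0.25 - 0.002 * gaze_angle_deg   # assumed values for illustration
    k2 = 0.05 + 0.0005 * gaze_angle_deg
    return k1, k2

if __name__ == "__main__":
    grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5)), -1).reshape(-1, 2)
    for gaze in (0.0, 15.0):
        k1, k2 = correction_for_gaze(gaze)
        warped = radial_predistort(grid, k1, k2)
        print(f"gaze {gaze:4.1f} deg: corner moves to {warped[0]}")
```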

Retina Resolution

Image courtesy Meta

On the retina resolution front, Meta revealed a previously unseen headset prototype called Butterscotch, which the company says achieves a retina resolution of 60 pixels per degree, enough for 20/20 vision. To do so, the team used extremely pixel-dense displays and reduced the field-of-view to about half that of Quest 2 in order to concentrate the pixels over a smaller area. The company says it also developed a “hybrid lens” that would “fully resolve” the increased resolution, and it shared through-the-lens comparisons between the original Rift, Quest 2, and the Butterscotch prototype.

Image courtesy Meta

While there are already headsets out there today that offer retina resolution—like Varjo’s VR-3 headset—only a small area in the middle of the view (27° × 27°) hits the 60 PPD mark; anything outside that area drops to 30 PPD or lower. Ostensibly Meta’s Butterscotch prototype hits 60 PPD across the entirety of its field-of-view, though the company didn’t explain to what extent resolution is reduced toward the edges of the lens.
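The pixels-per-degree trade-off is easy to see with back-of-envelope arithmetic. The Quest 2 figures below are rough, commonly cited approximations, not official specs.

```python
# Pixels-per-degree (PPD) is simply horizontal pixels divided by the
# horizontal field-of-view those pixels are spread across.

def ppd(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    return horizontal_pixels / horizontal_fov_deg

quest2 = ppd(1832, 100.0)                 # roughly 18-20 PPD (approximate figures)
halved_fov_same_panel = ppd(1832, 50.0)   # same panel over half the FOV doubles PPD
retina_target = 60.0                      # the threshold Meta cites for 20/20 vision

print(f"Quest 2 (approx.):                    {quest2:.0f} PPD")
print(f"Same panel, half the FOV:             {halved_fov_same_panel:.0f} PPD")
print(f"Pixels needed for 60 PPD over 50 deg: {retina_target * 50:.0f} px per eye")
```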

Continue on Page 2: High Dynamic Range, Downsizing »

Filed Under: butterscotch, Feature, half dome, holocake 2, mark zuckerberg, Meta, meta reality labs, meta reality labs research, michael abrash, News, Reality Labs, reality labs display systems research, starburst, vr hdr, VR Headset, vr headset prototypes, VR Research

Researchers Show Full-body VR Tracking with Controller-mounted Cameras

May 9, 2022 From roadtovr

Filed Under: body tracking, fbt, full body tracking controllers, full body tracking standalone, full-body tracking, News, VR Research, VR Tracking

NVIDIA Researchers Demonstrate Ultra-thin Holographic VR Glasses That Could Reach 120° Field-of-view

May 6, 2022 From roadtovr

A team of researchers from NVIDIA Research and Stanford published a new paper demonstrating a pair of thin holographic VR glasses. The displays can show true holographic content, solving the vergence-accommodation issue. Though the research prototypes demonstrating the principles had a much smaller field-of-view, the researchers claim it would be straightforward to achieve a 120° diagonal field-of-view.

Published ahead of this year’s upcoming SIGGRAPH 2022 conference, a team of researchers from NVIDIA Research and Stanford demonstrated a near-eye VR display that can be used to display flat images or holograms in a compact form-factor. The paper also explores the interconnected variables in the system that impact key display factors like field-of-view, eye-box, and eye-relief. Further, the researchers explore different algorithms for optimally rendering the image for the best visual quality.

Commercially available VR headsets haven’t improved in size much over the years largely because of an optical constraint. Most VR headsets use a single display and a simple lens. In order to focus the light from the display into your eye, the lens must be a certain distance from the display; any closer and the image will be out of focus.

Eliminating that gap between the lens and the display would unlock previously impossible form-factors for VR headsets; understandably there’s been a lot of R&D exploring how this can be done.

In NVIDIA-Stanford’s newly published paper, Holographic Glasses for Virtual Reality, the team shows that it built a holographic display using a spatial light modulator combined with a waveguide rather than a traditional lens.

The team built both a large benchtop model—to demonstrate core methods and experiment with different algorithms for rendering the image for optimal display quality—and a compact wearable model to demonstrate the form-factor. The images you see of the compact glasses-like form-factor don’t include the electronics to drive the display (as the size of that part of the system is out of scope for the research).

You may recall a little while back that Meta Reality Labs published its own work on a compact glasses-size VR headset. Although that work involves holograms (to form the system’s lenses), it is not a ‘holographic display’, which means it doesn’t solve the vergence-accommodation issue that’s common in many VR displays.

On the other hand, the Nvidia-Stanford researchers write that their Holographic Glasses system is in fact a holographic display (thanks to the use of a spatial light modulator), which they tout as a unique advantage of their approach. However, the team also writes that it’s possible to display typical flat images on the display as well (which, like contemporary VR headsets, can converge for a stereoscopic view).

Image courtesy NVIDIA Research

Not only that, but the Holographic Glasses project touts a mere 2.5mm thickness for the entire display, significantly thinner than the 9mm thickness of the Reality Labs project (which was already impressively thin!).

As with any good paper though, the Nvidia-Stanford team is quick to point out the limitations of their work.

For one, their wearable system has a tiny 22.8° diagonal field-of-view with an equally tiny 2.3mm eye-box, both of which are far too small to be viable for a practical VR headset.

Image courtesy NVIDIA Research

However, the researchers write that the limited field-of-view is largely due to their experimental combination of novel components that aren’t optimized to work together. Drastically expanding the field-of-view, they explain, is largely a matter of choosing complementary components.

“[…] the [system’s field-of-view] was mainly limited by the size of the available [spatial light modulator] and the focal length of the GP lens, both of which could be improved with different components. For example, the focal length can be halved without significantly increasing the total thickness by stacking two identical GP lenses and a circular polarizer [Moon et al. 2020]. With a 2-inch SLM and a 15mm focal length GP lens, we could achieve a monocular FOV of up to 120°”
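Those quoted numbers check out with simple geometry, under the assumption that the monocular field-of-view is set roughly by the SLM half-width over the lens focal length.

```python
# Checking the quoted figures: FOV ~= 2 * atan((slm_width / 2) / focal_length),
# assuming the SLM width and GP lens focal length are the limiting factors.

import math

slm_width_mm = 2 * 25.4     # "2-inch SLM"
focal_length_mm = 15.0      # "15mm focal length GP lens"

fov_deg = 2 * math.degrees(math.atan((slm_width_mm / 2) / focal_length_mm))
print(f"approx. monocular FOV: {fov_deg:.0f} degrees")   # ~119, close to the quoted 120
```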

As for the 2.3mm eye-box (the volume in which the rendered image can be seen), it’s way too small for practical use. However, the researchers write that they experimented with a straightforward way to expand it.

With the addition of eye-tracking, they show, the eye-box could be dynamically expanded up to 8mm by changing the angle of the light that’s sent into the waveguide. Granted, 8mm is still a very tight eye-box, and might be too small for practical use due to variations in eye-relief distance and how the glasses rest on the head, from one user to the next.

But there are variables in the system that can be adjusted to change key display factors, like the eye-box. Through their work, the researchers established the relationship between these variables, giving a clear look at what tradeoffs would need to be made to achieve different outcomes.

Image courtesy NVIDIA Research

As they show, eye-box size is directly related to the pixel pitch (distance between pixels) of the spatial light modulator, while field-of-view is related to the overall size of the spatial light modulator. Limitations on eye-relief and converging angle are also shown, relative to a sub-20mm eye-relief (which the researchers consider the upper limit of a true ‘glasses’ form-factor).
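As a rough first-order illustration of that relationship, the sketch below uses the standard holographic-display approximation that eye-box width scales as wavelength × focal length / pixel pitch; all values are assumptions for illustration, not the paper’s actual components.

```python
# First-order sketch: the SLM's pixel pitch bounds the diffraction angle and
# therefore the eye-box, roughly eyebox ~= wavelength * focal_length / pitch,
# while FOV scales with the SLM's physical size (see the check above).
# All numbers below are assumed, not taken from the paper.

wavelength_m = 532e-9      # green light
focal_length_m = 15e-3     # assumed 15 mm lens, as in the FOV example
for pitch_um in (8.0, 3.74, 2.0):
    eyebox_mm = wavelength_m * focal_length_m / (pitch_um * 1e-6) * 1e3
    print(f"pixel pitch {pitch_um:5.2f} um -> eye-box ~ {eyebox_mm:.1f} mm")
```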

An analysis of this “design trade space,” as they call it, was a key part of the paper.

“With our design and experimental prototypes, we hope to stimulate new research and engineering directions toward ultra-thin all-day-wearable VR displays with form-factors comparable to conventional eyeglasses,” they write.

The paper is credited to researchers Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.

Filed Under: Holographic Display, holographic vr display, holographic vr glasses, Holography, News, VR Headset, VR Research

Meta Offered a Glimpse into the XR R&D That’s Costing It Billions

November 3, 2021 From roadtovr

During the Connect 2021 conference last week, Meta Reality Labs’ Chief Scientist, Michael Abrash, offered a high-level overview of some of the R&D that’s behind the company’s multi-billion dollar push into XR and the metaverse.

Michael Abrash leads the team at Meta Reality Labs Research which has been tasked with researching technologies that the company believes could be foundational to XR and the metaverse decades in the future. At Connect 2021, Abrash shared some of the group’s very latest work.

Full-body Codec Avatars

Meta’s Codec Avatar project aims to achieve a system capable of capturing and representing photorealistic avatars for use in XR. A major challenge beyond simply ‘scanning’ a person’s body is getting it to then move in realistic ways—not to mention making the whole system capable of running in real-time so that the avatar can be used in an interactive context.

The company has shown off its Codec Avatar work on various occasions, each time showing improvements. The project started off simply with high-quality heads, but it has since evolved to full-body avatars.

The video above is a demo representing the group’s latest work on full-body Codec Avatars, which researcher Yaser Sheikh explains now supports more complex eye movement, facial expressions, and hand and body gestures which involve self-contact. It isn’t stated outright, but the video also shows a viewer watching the presentation in virtual reality, implying that this is all happening in real-time.

With the possibility of such realistic avatars in the future, Abrash acknowledged that it’s important to think about security of one’s identity. To that end he says the company is “thinking about how we can secure your avatar, whether by tying it to an authenticated account, or by verifying identity in some other way.”

Photorealistic Hair and Skin Rendering

While Meta’s Codec Avatars are already looking pretty darn convincing, the research group believes the ultimate destination for the technology is to achieve photorealism.

Above Abrash showed off what he says is the research group’s latest work in photorealistic hair and skin rendering, and lighting thereof. It wasn’t claimed that this was happening in real-time (and we doubt it is), but it’s a look at the bar the team is aiming for down the road with the Codec Avatar tech.

Clothing Simulation

Along with a high quality representation of your body, Meta expects clothing will continue to be an important way that people want to express themselves in the metaverse. To that end, they think that making clothes act realistically will be an important part of that experience. Above the company shows off its work in clothing simulation and hands-on interaction.

High-fidelity Real-time Virtual Spaces

While XR can easily whisk us away to other realities, teleporting friends virtually to your actual living space would be great too. Taken to the extreme, that means having a full-blown recreation of your actual home and everything in it, which can run in real-time.

Well… Meta did just that. They built a mock apartment along with a perfect virtual replica of the space and every object in it. Doing so makes it possible for a user to move around the real space and interact with it like normal while keeping the virtual version in sync.

So if you happen to have virtual guests over, they could actually see you moving around your real world space and interacting with anything inside of it in an incredibly natural way. Similarly, when using AR glasses, having a map of the space with this level of fidelity could make AR experiences and interactions much more compelling.

Presently this seems to serve the purpose of building out a ‘best case’ scenario of a mapped real-world environment for the company to experiment with. If Meta finds that having this kind of perfectly synchronized real and virtual space becomes important to valuable use-cases with the technology, it may then explore ways to make it easy for users to capture their own spaces with similar precision.

Continued on Page 2 »

Filed Under: AR Research, meta reality labs, meta reality labs connect 2021, michael abrash, News, VR Research

Stunning View Synthesis Algorithm Could Have Huge Implications for VR Capture

August 19, 2021 From roadtovr

As far as live-action VR video is concerned, volumetric video is the gold standard for immersion. And for static scene capture, the same holds true for photogrammetry. But both methods have limitations that detract from realism, especially when it comes to ‘view-dependent’ effects like specular highlights and lensing through translucent objects. Research from Thailand’s Vidyasirimedhi Institute of Science and Technology shows a stunning view synthesis algorithm that significantly boosts realism by handling such lighting effects accurately.

Researchers from the Vidyasirimedhi Institute of Science and Technology in Rayong, Thailand published work earlier this year on a real-time view synthesis algorithm called NeX. Its goal is to use just a handful of input images from a scene to synthesize new frames that realistically portray the scene from arbitrary points between the real images.

Researchers Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn write that the work builds on top of a technique called multiplane image (MPI). Compared to prior methods, they say their approach better models view-dependent effects (like specular highlights) and creates sharper synthesized imagery.
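For context, here’s a minimal sketch of the multiplane image idea NeX builds on: the scene is stored as a stack of fronto-parallel RGBA planes at fixed depths, and a novel view is rendered by compositing them back to front. This toy version omits the per-view plane warping and the view-dependent neural basis expansion that are NeX’s actual contributions.

```python
# Minimal MPI compositing sketch: "over" compositing of RGBA planes ordered
# from back (index 0) to front. Not the NeX implementation.

import numpy as np

def composite_mpi(planes_rgba: np.ndarray) -> np.ndarray:
    """planes_rgba: (D, H, W, 4) stack ordered back to front.
    Returns an (H, W, 3) image via standard alpha compositing."""
    out = np.zeros(planes_rgba.shape[1:3] + (3,), dtype=np.float32)
    for plane in planes_rgba:                   # back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

if __name__ == "__main__":
    D, H, W = 8, 4, 4
    rng = np.random.default_rng(0)
    planes = rng.random((D, H, W, 4)).astype(np.float32)
    print(composite_mpi(planes).shape)          # (4, 4, 3)
```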

On top of those improvements, the team has highly optimized the system, allowing it to run easily at 60Hz—a claimed 1000x improvement over the previous state of the art. And I have to say, the results are stunning.

Though not yet highly optimized for the use-case, the researchers have already tested the system using a VR headset with stereo-depth and full 6DOF movement.

The researchers conclude:

Our representation is effective in capturing and reproducing complex view-dependent effects and efficient to compute on standard graphics hardware, thus allowing real-time rendering. Extensive studies on public datasets and our more challenging dataset demonstrate state-of-art quality of our approach. We believe neural basis expansion can be applied to the general problem of light-field factorization and enable efficient rendering for other scene representations not limited to MPI. Our insight that some reflectance parameters and high-frequency texture can be optimized explicitly can also help recovering fine detail, a challenge faced by existing implicit neural representations.

You can find the full paper at the NeX project website, which includes demos you can try for yourself right in the browser. There are also WebVR-based demos that work with PC VR headsets if you’re using Firefox, but they unfortunately don’t work with Quest’s browser.

Notice the reflections in the wood and the complex highlights in the pitcher’s handle! View-dependent details like these are very difficult for existing volumetric and photogrammetric capture methods.

Volumetric video capture that I’ve seen in VR usually gets very confused by these sorts of view-dependent effects, often having trouble determining the appropriate stereo depth for specular highlights.

Photogrammetry, or ‘scene scanning’ approaches, typically ‘bake’ the scene’s lighting into textures, which often makes translucent objects look like cardboard (since the lighting highlights don’t move correctly as you view the object at different angles).

The NeX view synthesis research could significantly improve the realism of volumetric capture and playback in VR going forward.

Filed Under: Jiraphon Yenphraphai, light field, News, Pakkapon Phongthawee, Supasorn Suwajanakorn, Suttisak Wizadwongsa, view synthesis, vistec, Volumetric Video, VR Research, vr video, VR Video Capture

Copyright © 2022 GenVR, Inc.