
VRSUN

Hot Virtual Reality News


  • Home
  • About us
  • Contact Us


Magic Leap Commits to OpenXR & WebXR Support Later This Year on ML2

June 7, 2022 From roadtovr

In an ongoing shift away from a somewhat proprietary development environment on its first headset, Magic Leap has committed to bringing OpenXR support to its Magic Leap 2 headset later this year.

Although Magic Leap 2 is clearly the successor to Magic Leap 1, the goals of the two headsets are quite different. With the first headset the company attempted to court developers who would build entertainment and consumer-centric apps, and it had its own ideas about how its ‘Lumin OS’ should handle apps and how they should be built.

After significant financial turmoil and then revival, the company emerged with a new CEO and very different priorities for Magic Leap 2. Not only would the headset be clearly and unequivocally positioned for enterprise use-cases, the company also wanted to make it much easier to build apps for the headset.

To that end, Magic Leap’s VP of Product Marketing & Developer Programs, Lisa Watts, got on stage at this week’s AWE 2022 to “announce and reaffirm to all of you and to the entire industry [Magic Leap’s] support for open standards, and making our platform very easy to develop for.”

In the session, which was co-hosted by Chair of the OpenXR Working Group, Brent Insko, Watts reiterated that Magic Leap 2 is built atop an “Android Open Source Project-based OS interface standard,” and showed a range of open and accessible tools that developers can currently use to build for the headset.

Toward the end of the year, Watts shared, the company expects Magic Leap 2 to also include support for OpenXR, Vulkan, and WebXR.

Image courtesy Magic Leap

OpenXR is a royalty-free standard that aims to standardize the development of VR and AR applications, making hardware and software more interoperable. The standard has been in development since 2017 and is backed by virtually every major hardware, platform, and engine company in the VR industry, and a growing number of AR players.

In theory, an AR app built to be OpenXR-compliant should work on any OpenXR-compliant headset—whether that be HoloLens 2 or Magic Leap 2—without any changes to the application.
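The portability promise can be sketched with a toy example (plain Python, not the actual OpenXR API; every class and method name here is hypothetical): an app written against one shared interface runs unchanged on any runtime that implements it.

```python
# Toy illustration of the interoperability idea behind OpenXR. This is NOT
# the real OpenXR API; the names below are invented for the sketch.

class XRRuntime:
    """Stand-in for a standardized XR runtime interface."""
    def create_session(self) -> str:
        raise NotImplementedError

class HeadsetA(XRRuntime):
    # One vendor's runtime implementation.
    def create_session(self) -> str:
        return "session on Headset A"

class HeadsetB(XRRuntime):
    # A different vendor's runtime implementation.
    def create_session(self) -> str:
        return "session on Headset B"

def run_app(runtime: XRRuntime) -> str:
    # The app only touches the shared interface, so it needs no
    # per-headset changes—the same idea, at toy scale, as an
    # OpenXR-compliant app running on any compliant headset.
    return runtime.create_session()

print(run_app(HeadsetA()))
print(run_app(HeadsetB()))
```

In the real standard, the shared interface is the OpenXR loader and API, and each headset vendor ships a conformant runtime behind it.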

OpenXR has picked up considerable steam in the VR space and is starting to see similar adoption momentum in the AR space, especially with one of the sector’s most visible companies, Magic Leap, on board.

Filed Under: AR Development, ar industry, brent insko, lisa watts, Magic Leap 2, magic leap 2 openxr, magic leap 2 webxr, News, OpenXR, WebXR

Reality Labs Chief Scientist Outlines a New Compute Architecture for True AR Glasses

May 2, 2022 From roadtovr

Speaking at the IEDM conference late last year, Meta Reality Labs’ Chief Scientist Michael Abrash laid out the company’s analysis of how contemporary compute architectures will need to evolve to make possible the AR glasses of our sci-fi conceptualizations.

While there are some AR ‘glasses’ on the market today, none of them are truly the size of a normal pair of glasses (even a bulky pair). The best AR headsets available today—the likes of HoloLens 2 and Magic Leap 2—are still closer to goggles than glasses and are too heavy to be worn all day (not to mention the looks you’d get from the crowd).

If we’re going to build AR glasses that are truly glasses-sized, with all-day battery life and the features needed for compelling AR experiences, it’s going to require a “range of radical improvements—and in some cases paradigm shifts—in both hardware […] and software,” says Michael Abrash, Chief Scientist at Reality Labs, Meta’s XR organization.

That is to say: Meta doesn’t believe that its current technology—or anyone’s for that matter—is capable of delivering those sci-fi glasses that every AR concept video envisions.

But, the company thinks it knows where things need to head in order for that to happen.

Abrash, speaking at the IEDM 2021 conference late last year, laid out the case for a new compute architecture that could meet the needs of truly glasses-sized AR devices.

Follow the Power

The core reason to rethink how computing should be handled on these devices comes from a need to drastically reduce power consumption to meet battery life and heat requirements.

“How can we improve the power efficiency [of mobile computing devices] radically by a factor of 100 or even 1,000?” he asks. “That will require a deep system-level rethinking of the full stack, with end-to-end co-design of hardware and software. And the place to start that rethinking is by looking at where power is going today.”

To that end, Abrash laid out a graph comparing the power consumption of low-level computing operations.

Image courtesy Meta

As the chart highlights, the most energy intensive computing operations are in data transfer. And that doesn’t mean just wireless data transfer, but even transferring data from one chip inside the device to another. What’s more, the chart uses a logarithmic scale; according to the chart, transferring data to RAM uses 12,000 times the power of the base unit (which in this case is adding two numbers together).
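A back-of-envelope calculation shows why that ratio dominates the power budget. The 12,000× figure is from the chart; the absolute per-add energy (one picojoule) is an assumed, illustrative value, not a number from the talk.

```python
# Back-of-envelope: per the article's chart, moving one byte to RAM costs
# roughly 12,000x the energy of a simple add. The base energy per add is
# an illustrative assumption.

E_ADD_PJ = 1.0                      # assumed energy of one add, picojoules
E_RAM_BYTE_PJ = 12_000 * E_ADD_PJ   # per-byte RAM transfer, per the chart

frame_bytes = 640 * 480             # one grayscale VGA camera frame

transfer_pj = frame_bytes * E_RAM_BYTE_PJ   # shipping the frame to RAM
compute_pj = frame_bytes * E_ADD_PJ         # one add per byte, in place

print(f"transfer: {transfer_pj / 1e6:,.1f} microjoules")
print(f"compute:  {compute_pj / 1e6:,.3f} microjoules")
print(f"ratio:    {transfer_pj / compute_pj:,.0f}x")
```

Whatever the true base energy is, the ratio holds: touching RAM with a whole frame swamps the cost of the arithmetic done on it, which is exactly why SLAM and hand-tracking spend most of their power on data movement.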

Bringing it all together, the circular graphs on the right show that techniques essential to AR—SLAM and hand-tracking—use most of their power simply moving data to and from RAM.

“Clearly, for low power applications [such as in lightweight AR glasses], it is critical to reduce the amount of data transfer as much as possible,” says Abrash.

To make that happen, he says a new compute architecture will be required which—rather than shuffling large quantities of data between centralized computing hubs—more broadly distributes the computing operations across the system in order to minimize wasteful data transfer.

Compute Where You Least Expect It

A starting point for a distributed computing architecture, Abrash says, could begin with the many cameras that AR glasses need for sensing the world around the user. This would involve doing some preliminary computation on the camera sensor itself before sending only the most vital data across power hungry data transfer lanes.

Image courtesy Meta

To make that possible Abrash says it’ll take co-designed hardware and software, such that the hardware is designed with a specific algorithm in mind that is essentially hardwired into the camera sensor itself—allowing some operations to be taken care of before any data even leaves the sensor.
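The data savings from that approach can be sketched in a few lines. This is a toy model, not Meta's design: the brightness-threshold detector below is a hypothetical stand-in for whatever algorithm would actually be hardwired next to the pixels.

```python
# Sketch of on-sensor preprocessing: rather than shipping a whole frame off
# the sensor, run a cheap detector at the sensor and send only its output.
import random

random.seed(0)
W, H = 640, 480
# Synthetic 8-bit frame standing in for raw sensor pixels.
frame = [[random.randrange(256) for _ in range(W)] for _ in range(H)]

def detect_bright_points(img, threshold=250):
    """Toy on-sensor stage: keep only (x, y) of pixels above threshold."""
    return [(x, y)
            for y, row in enumerate(img)
            for x, v in enumerate(row)
            if v > threshold]

points = detect_bright_points(frame)

raw_bytes = W * H              # full 8-bit frame over the power-hungry link
kept_bytes = len(points) * 4   # two 16-bit coordinates per detected point

print(f"raw frame:       {raw_bytes} bytes")
print(f"after on-sensor: {kept_bytes} bytes "
      f"({raw_bytes / max(kept_bytes, 1):.0f}x less data moved)")
```

Only the detector's output ever crosses the expensive data-transfer lanes; the bulk of the pixels never leave the sensor, which is the point of baking the algorithm into the sensor hardware.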

Image courtesy Meta

“The combination of requirements for lowest power, best performance, and smallest possible form-factor, make XR sensors the new frontier in the image sensor industry,” Abrash says.

Continue on Page 2: Domain Specific Sensors »

Filed Under: AR glasses, AR Headset, ar industry, iedm 2021, Meta, michael abrash, News, Reality Labs, vr industry

Snap Acquires Brain-Computer Interface Startup NextMind

March 23, 2022 From roadtovr

Snap announced it’s acquired neurotech startup NextMind, a Paris-based company known for creating a $400 pint-sized brain-computer interface (BCI).

In a blog post, Snap says NextMind will help drive “long-term augmented reality research efforts within Snap Lab,” the company’s hardware team that’s currently building AR devices.

“Snap Lab’s programs explore possibilities for the future of the Snap Camera, including Spectacles. Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”

Snap hasn’t detailed the terms or price of the NextMind acquisition, saying only that the team will continue to operate out of Paris, France. According to The Verge, NextMind will also be discontinuing production of its BCI.

Photo captured by Road to VR

Despite increasingly accurate and reliable hand and eye-tracking hardware, input for AR headsets still isn’t really a solved problem. It’s not certain whether NextMind’s tech, which is based on electroencephalography (EEG), was the complete solution either.

NextMind’s BCI is non-invasive and slim enough to integrate into the strap of an XR headset, something that companies like Valve have been interested in for years.

Granted, there’s a scalp, connective tissue, and a skull to read through, which limits the kit’s imaging resolution; as a result, NextMind’s device only enabled basic inputs like simple UI interaction—very far off from the sort of ‘read/write’ capabilities that Elon Musk’s Neuralink is aiming for with its invasive brain implant.

Snap has been collecting more companies to help build out its next pair of AR glasses. In addition to NextMind, Snap acquired AR waveguide startup WaveOptics for over $500 million last May, and LCOS maker Compound Photonics in January.

Snap is getting close too. Its most recent Spectacles (fourth gen) include displays for real-time AR in addition to integrated voice recognition, optical hand tracking, and a side-mounted touchpad for UI selection.

Filed Under: ar industry, AR Input, bci, brain computer interface, News, next mind, nextmind, snap, snap ar, snap chat, snap spectacles, spectacles

Copyright © 2022 GenVR, Inc.