Display Maker Demonstrates Flagship OLED VR Display & Pancake Optics, Its Best Yet
Kopin is an electronics manufacturer best known for its microdisplays. In recent years the company has been eyeing the emerging XR industry as a viable market for its wares. To that end, it has been hard at work creating VR displays and optics that it hopes headset makers will want to snatch up.
At AWE 2022 last month, the company demonstrated its latest work on that front with a new plastic pancake optic and flagship VR display.
Kopin’s P95 pancake optic has just a 17mm distance between the display and lens, along with a 95° field-of-view. Furthermore, it differentiates itself as being an all-plastic optic, which makes it cheaper, lighter, more durable, and more flexible than comparable glass optics. The company says its secret sauce is being able to make plastic pancake optics that are as optically performant as their glass counterparts.
At AWE, I got to peek through the Kopin P95 optic. Inside I saw a sharp image with seemingly quite good edge-to-edge clarity. It’s tough to formulate a firm assessment of how it compares to contemporary headsets, as my understanding is that the test pattern being shown had no geometric or color corrections, nor was it calibrated for the numbers shown.
You’ll notice that the P95 is a non-Fresnel optic which should mean it won’t suffer from the kind of ‘god-rays’ and glare that almost every contemporary VR headset exhibits. Granted, without seeing dynamic content it’s tough to know whether or not the multi-element pancake optic introduces any of its own visual artifacts.
Even though the test pattern wasn’t calibrated, it does reveal the retina resolution of the underlying display—Kopin’s flagship ‘Lightning’ display for VR devices.
This little beauty is a 1.3″ OLED display with a 2,560 × 2,560 resolution running at up to 120Hz. Kopin says the display has 10-bit color, making it viable for HDR.
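For a sense of just how dense that panel is, here’s a rough back-of-the-envelope estimate, assuming the 1.3″ figure refers to the diagonal of the square panel (Kopin quotes only the 1.3″ number):

```python
import math

# Rough pixel-density estimate for a square 2,560 x 2,560 panel
# with an assumed 1.3-inch diagonal.
diagonal_in = 1.3
pixels_per_side = 2560

side_in = diagonal_in / math.sqrt(2)   # side of the square panel, ~0.92 in
ppi = pixels_per_side / side_in        # pixels per inch
pitch_um = 25400 / ppi                 # pixel pitch in microns (25,400 um/in)

print(f"~{ppi:.0f} PPI, ~{pitch_um:.1f} um pixel pitch")
```

Under those assumptions the panel lands in the neighborhood of 2,800 PPI, an order of magnitude denser than a typical smartphone display.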
Combined, the P95 pancake optic and the Lightning display appear to make a viable, retina resolution, compact display architecture for VR headsets. But it isn’t necessarily a shoo-in.
For one, the 95° field-of-view just barely meets par. Ostensibly Kopin will need to grow its 1.3″ Lightning display if it wants to meet or exceed the field-of-view offered by today’s VR headsets.
Further, the company wasn’t prepared to divulge any info on the brightness of the display or the efficiency of the pancake lens—both of which are key factors for use in VR headsets.
Because pancake lenses use polarized light and bounce it around a few times, they are inherently less efficient, meaning they need a brighter input to achieve the same output brightness. That typically means more heat and more power consumption, adding to the tradeoffs required when building a headset with this display architecture.
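To see why the efficiency hit matters, consider an idealized toy model of a polarization-folded optic. The numbers below are illustrative assumptions, not Kopin’s actual figures:

```python
# Toy model of light throughput in a polarization-folded ("pancake") optic.
# Assumptions (illustrative only):
#   - the 50/50 beamsplitter is traversed twice in the folded path: once in
#     transmission and once in reflection, each passing ~50% of the light
#   - the other elements (quarter-wave plates, reflective polarizer) are lossless

beamsplitter_pass = 0.5

# Light crosses the half-mirror twice, so losses compound.
throughput = beamsplitter_pass * beamsplitter_pass   # 0.25

# To match a conventional lens delivering the same brightness to the eye,
# the display panel would need to be several times brighter.
required_panel_gain = 1 / throughput

print(f"Idealized throughput: {throughput:.0%}")
print(f"Panel must be ~{required_panel_gain:.0f}x brighter")
```

Even in this best case the panel has to drive roughly four times the light of a conventional design, which is why display brightness and lens efficiency are the numbers to watch.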
Kopin has been touting its displays and optics as a solution for VR headsets for several years at this point, but at least in the consumer and enterprise space it doesn’t appear to have found any traction just yet. It’s not entirely clear what’s holding the company back from breaking into the VR space, but it likely comes down to the price or performance of its offerings.
That said, Kopin has been steadily moving toward the form-factor, resolution, and field-of-view the VR industry has been hoping for, so perhaps the P95 optic and latest Lightning display will be the point at which the company starts turning heads in the VR space.
Hands-on: Mojo Vision’s Smart Contact Lens is Further Along Than You Might Think
Having not had a chance to see Mojo Vision’s latest smart contact lens for myself until recently, I’ll admit I expected the company was still years away from a working contact lens with more than a simple notification light or a handful of static pixels. Looking through the company’s latest prototype, I was impressed to find it far more capable than I had expected.
When I walked into Mojo Vision’s demo suite at AWE 2022 last month I was handed a hard contact lens that I assumed was a mockup of the tech the company hoped to eventually shrink and fit into the lens. But no… the company said this was a functional prototype, and everything inside the lens was real, working hardware.
The company tells me this latest prototype includes the “world’s smallest” MicroLED display—at a minuscule 0.48mm, with just 1.8 microns between pixels—an ARM processor, 5GHz radio, IMU (with accelerometer, gyro, and magnetometer), “medical-grade micro-batteries,” and a power management circuit with wireless recharging components.
And while the Mojo Vision smart contact lens is still much thicker than your typical contact lens, last week the company demonstrated that this prototype can work in an actual human eye, with Mojo Vision CEO Drew Perkins as the guinea pig.
And while this looks, well… fairly creepy when actually worn in the eye, the company tells me that, in addition to making it thinner, they’ll cover the electronics with cosmetic irises to make it look more natural in the future.
At AWE I wasn’t able to put the contact lens in my own eye (Covid be damned). Instead the company had the lens attached to a tethered stick which I held up to my eye to peer through.
When I did I was surprised to see more than just a handful of pixels, but a full-blown graphical user interface with readable text and interface elements. It’s all monochrome green for now (taking advantage of the human eye’s ability to see green better than any other color), but the demo clearly shows that Mojo Vision’s ambitions are more than just a pipe dream.
Despite the physical display in the lens itself being opaque and directly in the middle of your eye, you can’t actually see it because it’s simply too small and too close. But you can see the image that it projects.
Compared to every HMD that exists today, Mojo Vision’s smart contact lens is particularly interesting because it moves with your eye. That means the display itself—despite having a very small 15° field-of-view—moves with your vision as you look around. And it’s always sharp no matter where you look because it’s always over your fovea (the center part of the retina that sees the most detail). In essence, it’s like having ‘built-in’ foveated rendering. A limited FoV remains a bottleneck to many use-cases, but having the display actually move with your eye alleviates the limitation at least somewhat.
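Taken together with the display specs mentioned earlier, a rough sketch of what the lens might resolve, assuming the 0.48mm figure is the panel width, the 1.8 micron figure is the full pixel pitch, and the 15° field-of-view is spanned edge to edge:

```python
# Rough estimate of the Mojo lens display's pixel count and angular resolution.
# Assumptions: 0.48 mm panel width, 1.8 micron full pixel pitch, and the
# stated 15-degree field-of-view spanned edge to edge.

panel_width_um = 480
pixel_pitch_um = 1.8
fov_deg = 15

pixels_across = panel_width_um / pixel_pitch_um   # pixels spanning the panel
ppd = pixels_across / fov_deg                     # approx. pixels per degree

print(f"~{pixels_across:.0f} px across, ~{ppd:.0f} PPD")
```

If those assumptions hold, the lens lands somewhere under 20 pixels per degree—enough for readable text and simple interface elements, though well short of the ~60 PPD often cited as retina resolution.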
But what about input? Mojo Vision has also been hard at work figuring out how users will interact with the device. As I wasn’t able to put the lens into my own eye, the company instead put me in a VR headset with eye-tracking to emulate what it would be like to use the smart contact lens itself. Inside the headset I saw roughly the same interface I had seen through the demo contact lens, but now I could interact with the device using my eyes.
The current implementation doesn’t constrain the entire interface to the small field-of-view. Instead, your gaze acts as a sort of ‘spotlight’ which reveals a larger interface as you move your eyes around. You can interact with parts of the interface by hovering your gaze on a button to do things like show the current weather or recent text messages.
It’s an interesting and hands-free approach to an HMD interface, though in my experience the eyes themselves are not a great conscious input device because most of our eye-movements are subconsciously controlled. With enough practice it’s possible that manually controlling your gaze for input will become as simple and seamless as using your finger to control a touchscreen; ultimately another form of input might be better but that remains to be seen.
This interface and input approach is of course entirely dependent on high quality eye-tracking. Since I didn’t get to put the lens on for myself, I have no indication if Mojo Vision’s eye-tracking is up to the task, but the company claims its eye-tracking is an “order of magnitude more precise than today’s leading [XR] optical eye-tracking systems.”
In theory it should work as well as they claim—after all, what’s a better way to measure the movement of your eyes than with something that’s physically attached to them? In practice, the device’s IMU is presumably just as susceptible to drift as any other, which could be problematic. There’s also the matter of extrapolating and separating the movement of the user’s head from sensor data that’s coming from an eye-mounted device.
If the company’s eye-tracking is as precise (and accurate) as they claim, it would be a major win because it could enable the device to function as a genuine AR contact lens capable of immersive experiences, rather than just a smart contact lens for basic informational display. Mojo Vision does claim it expects its contact lens to be able to do immersive AR eventually, including stereoscopic rendering with one contact in each eye. In any case, AR won’t be properly viable on the device until a larger field-of-view is achieved, but it’s an exciting possibility.
So what’s the road map for actually getting this thing to market? Mojo Vision says it fully expects FDA approval will be necessary before they can sell it to anyone, which means even once everything is functional from a tech and feature standpoint, they’ll need to run clinical trials. As for when that might all be complete, the company told me “not in a year, but certainly [sooner than] five years.”
Meta Reveals VR Headset Prototypes Designed to Make VR ‘Indistinguishable From Reality’
Meta says its ultimate goal with its VR hardware is to make a comfortable, compact headset with visual fidelity that’s ‘indistinguishable from reality’. Today the company revealed its latest VR headset prototypes which it says represent steps toward that goal.
Meta has made it no secret that it’s pouring tens of billions of dollars into its XR efforts, much of which is going to long-term R&D through its Reality Labs Research division. Apparently in an effort to shed a bit of light on what that money is actually accomplishing, the company invited a group of press to sit down for a look at its latest accomplishments in VR hardware R&D.
Reaching the Bar
To start, Meta CEO Mark Zuckerberg spoke alongside Reality Labs Chief Scientist Michael Abrash to explain that the company’s ultimate goal is to build VR hardware that meets all the visual requirements to be accepted as “real” by your visual system.
VR headsets today are impressively immersive, but there’s still no question that what you’re looking at is, well… virtual.
Inside Meta’s Reality Labs Research division, the company uses the term ‘visual Turing Test’ for the bar a headset must clear to convince your visual system that what’s inside the headset is actually real. The concept is borrowed from the original Turing test, which denotes the point at which a human can no longer tell the difference between another human and an artificial intelligence.
Zuckerberg and Abrash outlined what they see as four key visual challenges that VR headsets need to solve before the visual Turing Test can be passed: varifocal, distortion, retina resolution, and HDR.
Briefly, here’s what those mean:
- Varifocal: the ability to focus on arbitrary depths of the virtual scene, supporting both essential focus functions of the eyes (vergence and accommodation).
- Distortion: lenses inherently distort the light that passes through them, often creating artifacts like color separation and pupil swim that make the existence of the lens obvious.
- Retina resolution: having enough resolution in the display to meet or exceed the resolving power of the human eye, such that there’s no evidence of underlying pixels.
- HDR: high dynamic range, which describes the range of darkness and brightness we experience in the real world (something almost no display today can properly emulate).
The Display Systems Research team at Reality Labs has built prototypes that function as proof-of-concepts for potential solutions to these challenges.
To address varifocal, the team developed a series of prototypes which it called ‘Half Dome’. In that series the company first explored a varifocal design which used a mechanically moving display to change the distance between the display and the lens, thus changing the focal depth of the image. Later the team moved to a solid-state electronic system which resulted in varifocal optics that were significantly more compact, reliable, and silent. We’ve covered the Half Dome prototypes in greater detail here if you want to know more.
Virtual Reality… For Lenses
As for distortion, Abrash explained that experimenting with lens designs and distortion-correction algorithms that are specific to those lens designs is a cumbersome process. Novel lenses can’t be made quickly, he said, and once they are made they still need to be carefully integrated into a headset.
To allow the Display Systems Research team to work more quickly on the issue, the team built a ‘distortion simulator’, which actually emulates a VR headset using a 3DTV, and simulates lenses (and their corresponding distortion-correction algorithms) in-software.
Doing so has allowed the team to iterate on the problem more quickly, wherein the key challenge is to dynamically correct lens distortions as the eye moves, rather than merely correcting for what is seen when the eye is looking in the immediate center of the lens.
On the retina resolution front, Meta revealed a previously unseen headset prototype called Butterscotch, which the company says achieves a retina resolution of 60 pixels per degree, allowing for 20/20 vision. To do so, they used extremely pixel-dense displays and reduced the field-of-view—in order to concentrate the pixels over a smaller area—to about half the size of Quest 2. The company says it also developed a “hybrid lens” that would “fully resolve” the increased resolution, and it shared through-the-lens comparisons between the original Rift, Quest 2, and the Butterscotch prototype.
While some headsets today, like Varjo’s VR-3, already offer retina resolution, only a small area in the middle of the view (27° × 27°) hits the 60 PPD mark; anything outside that area drops to 30 PPD or lower. Ostensibly Meta’s Butterscotch prototype maintains 60 PPD across the entirety of its field-of-view, though the company didn’t explain to what extent resolution is reduced toward the edges of the lens.
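The ‘concentrate the pixels over a smaller area’ trick is easy to sketch with a simple linear approximation of pixels-per-degree. The Quest 2 figures below are rough public numbers, not Meta’s official specs:

```python
# Approximate pixels-per-degree (PPD) as horizontal pixels / horizontal FOV.
# This is a linear approximation; real optics distribute pixels non-uniformly.

def ppd(horizontal_pixels, horizontal_fov_deg):
    return horizontal_pixels / horizontal_fov_deg

# Quest 2: 1,832 px per eye over a ~96 degree horizontal FOV (rough figures)
quest2 = ppd(1832, 96)           # ~19 PPD

# Halving the FOV over the same panel doubles the angular pixel density,
# but still falls short of 60 PPD, hence Butterscotch also needs denser panels.
concentrated = ppd(1832, 48)     # ~38 PPD

print(f"Quest 2: ~{quest2:.0f} PPD, halved FOV: ~{concentrated:.0f} PPD")
```

This makes clear why Butterscotch needs both a reduced field-of-view and substantially denser displays to reach its 60 PPD target.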
Hands-on: Magic Leap 2 Shows Clear Improvements, But HoloLens 2 Retains Some Advantages
Magic Leap 2 isn’t available just yet, but when it hits the market later this year it will be directly competing with Microsoft’s HoloLens 2. Though Magic Leap 2 beats out its rival in several meaningful places, its underlying design still leaves HoloLens 2 with some advantages.
Magic Leap as a company has had a wild ride since its founding way back in 2010, with billions of dollars raised, an ambitious initial product that fell short of the hype, and a near-death and rebirth with a new CEO.
The company’s latest product, Magic Leap 2, in many ways reflects the ‘new’ Magic Leap. It’s positioned clearly as an enterprise product, aims to support more open development, and it isn’t trying to hype itself as a revolution. Hell—Magic Leap is even (sensibly) calling it an “AR headset” this time around instead of trying to invent its own vocabulary for the sake of differentiation.
After trying the headset at AWE 2022 last week, I got the sense that, like the company itself, Magic Leap 2 feels like a more mature version of what came before—and it’s not just the sleeker look.
Magic Leap 2 Hands-on
The most immediately obvious improvement to Magic Leap 2 is in the field-of-view, which is increased from 50° to 70° diagonally. At 70°, Magic Leap 2 feels like it’s just starting to scratch that ‘immersive’ itch, as you have more room to see the augmented content around you which means less time spent ‘searching’ for it when it’s out of your field-of-view.
While I suspect many first-time Magic Leap 2 users will come away with a ‘wow the field-of-view is so good!’ reaction… it’s important to remember that the design of ML2 (like its predecessor) ‘cheats’ a bit when it comes to field-of-view. Like the original, the design blocks a significant amount of your real-world peripheral vision (intentionally, as far as I can tell), which makes the field-of-view appear larger than it actually is by comparison.
This isn’t necessarily a bad thing if the augmented content is your main focus (I mean, VR headsets have done this pretty much since day one), but it’s a questionable design choice for a headset that’s designed to integrate the real world and the augmented world. Thus real-world peripheral vision remains a unique advantage that HoloLens 2 holds over both ML1 and ML2… but more on that later.
Unlike some other AR headsets, Magic Leap 2 (like its predecessor) has a fairly soft edge around the field-of-view. Instead of a hard line separating the augmented world from the real-world, it seems to gently fade away, which makes it less jarring when things go off-screen.
Another bonus to immersion compared to other devices is the headset’s new dimming capability which can dynamically dim the lenses to reduce incoming ambient light in order to make the augmented content appear more solid. Unfortunately this was part of the headset that I didn’t have time to really put through its paces in my demo as the company was more focused on showing me specific content. Another thing I didn’t get to properly compare is resolution. Both are my top priority for next time.
Tracking remains as good as ever with ML2, and on par with HoloLens 2. Content feels perfectly locked to the environment as you move your head around. I did see some notable blurring, mostly during positional head movement; ML1 had a similar issue, which has likely carried over as part of the headset’s underlying display technology. In any case it seems mostly hidden during ‘standing in one spot’ use-cases, and impacts text legibility more than anything else.
The color-consistency issue across the image (the ‘rainbow’ look) is more subtle, but still noticeable. It didn’t appear to be as bad as on ML1 or HoloLens 2, but it’s still there, which is unfortunate. It doesn’t really impact the potential use-cases of the headset, but it does slightly reduce the immersiveness of the image.
While ML2 has been improved almost across the board, there’s one place where it actually takes a step back, and it was one of ML1’s most hyped features: the mystical “photonic lightfield chip” (AKA a display with two focal planes) is no more. Though ML2 does have eye-tracking (likely improved thanks to doubling the number of cameras), it only supports a single focal plane (as is the case for pretty much all AR headsets available today).