Valve patents: speculation about the Index HMD


Hi all!

We’re all stoked about the upcoming Index HMD. A few days ago I was digging around in Valve patents published after the Vive came out, and I think there are some nice tidbits in them that may hint at features coming in the Valve Index.

In chronological order by publication date (each heading links to the patent PDF):


1 – Mar. 23, 2017 – PLAYER BIOFEEDBACK FOR DYNAMICALLY CONTROLLING A VIDEO GAME STATE

This has been talked about a lot before, and isn’t really VR-specific; VNN recently made a video on brain interfacing. A VR HMD is already something you put on your head, so it’s not that out-there to think that some future HMD may have sensors that can figure out your current emotional state and adjust the game based on it.

This is veering slightly off-topic, but I just want to put out some links. Mary Lou Jepsen worked at Facebook/Oculus for about a year back in 2015-2016 and had worked on lots of interesting display technology before that. She went straight from Facebook to found OpenWater, which is developing technology to enable MRI-like brain imaging using only infrared light, but with far superior resolution and in a form factor that would fit a headband. See her TED talk about it here. GabeN has said (and VNN quotes) that “[computer-brain-interfaces] are a lot further along than most people anticipate”. Maybe this is it? I wonder if it’s an add-on that comes later, and whether that’s what the expansion port on the front is for. Also, look at this patent (not by Valve, though).


2 – Nov. 28, 2017 – DISPLAY WITH STACKED EMISSION AND CONTROL LOGIC LAYERS

An interesting excerpt from the patent on what this means in practice (emphasis mine):

Corresponding advantages of silicon electron mobility and processing for display fabrication allows a clear and immediate pathway to retinal near eye displays (e.g., dimensions of 10,000×10,000 or more), high dynamic range, greatly improved fill factor, and brightness (e.g., 10,000 cd/m2 or more). In addition, with respect to production of a display panel, significant improvements in pixel density for significantly larger sizes of display panel may be achieved at significantly lower costs (e.g., $50 or less). In addition, the described techniques may provide a variety of benefits with respect to the use of such a display panel, including to provide one or more of the following: increased bandwidth in communication of video signals to the display panel, photorealistic immersive visual experience […].
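To put the bandwidth point in perspective, here’s a rough back-of-the-envelope calculation (the resolution is the figure quoted above; everything else is my own assumption, not from the patent) of what an uncompressed video link to such a panel would need:

```python
# Back-of-the-envelope, uncompressed link bandwidth for the hypothetical
# "retinal" panel dimensions quoted above. Refresh rate, bit depth, and
# eye count are my own assumptions, not figures from the patent.
width = height = 10_000      # pixels per eye (figure quoted in the patent)
bits_per_pixel = 24          # plain 8-bit RGB, no HDR headroom
refresh_hz = 90              # typical VR refresh rate (assumption)
eyes = 2

raw_gbps = width * height * bits_per_pixel * refresh_hz * eyes / 1e9
print(f"Raw video bandwidth: {raw_gbps:.0f} Gbit/s")  # ~432 Gbit/s
```

That’s more than ten times what DisplayPort 1.4 can carry (~32 Gbit/s), which is presumably why the quote calls out bandwidth as something that needs to improve.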


3 – Feb. 8, 2018 – MITIGATION OF SCREEN DOOR EFFECT IN HEAD-MOUNTED DISPLAYS

This one is pretty straightforward: it talks about putting a microlens array between the lens and the user’s eye to reduce the screen-door effect. As far as I understand it, it’s like having an array of tiny magnifying glasses, so you see more of each pixel and less of the space in between the pixels. It differs from the diffusion filter used in the Odyssey+, which sits between the panel and the lens, not between the lens and your eyes (as in this patent). As I understand it, it probably also wouldn’t reduce the sharpness of the display.
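To illustrate the fill-factor idea with some toy numbers (all invented for illustration; the patent gives no specifics):

```python
# Toy fill-factor model: what fraction of the panel appears lit vs. dark gap.
# Pixel pitch, emitter size, and magnification are illustrative assumptions.
pixel_pitch_um = 50.0     # center-to-center pixel spacing
emitter_size_um = 35.0    # lit portion of each pixel

native = (emitter_size_um / pixel_pitch_um) ** 2
print(f"Native fill factor: {native:.0%}")  # 49% -- the other 51% is "screen door"

# A microlens per pixel that optically magnifies the emitter ~1.4x makes the
# lit area appear to span nearly the whole pitch, hiding the dark grid.
magnification = 1.4
apparent = min(1.0, (emitter_size_um * magnification / pixel_pitch_um) ** 2)
print(f"Apparent fill factor with microlenses: {apparent:.0%}")  # 96%
```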


4 – Feb. 22, 2018 – SYSTEMS AND METHODS FOR DETECTION AND/OR CORRECTION OF PIXEL LUMINOSITY AND/OR CHROMINANCE RESPONSE VARIATION IN DISPLAYS

This is something to be used in the production process to reduce visual artifacts that vary between individual displays and from pixel to pixel. By having the GPU (or an on-board chip) modify the video signal to reverse the artifacts of a given display, it works kind of like noise cancellation, but for brightness and color “noise” in the display, thus reducing mura.
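Conceptually (this is my own sketch of the idea, not Valve’s actual pipeline), the correction could be a per-pixel gain/offset map measured at the factory and applied to every frame before it reaches the panel:

```python
import numpy as np

# Sketch of per-display mura correction. The gain/offset maps would come from
# factory calibration of each individual panel; random stand-ins are used here.
H, W = 1600, 1440                                 # per-eye resolution (a guess)
gain = np.random.normal(1.0, 0.02, (H, W, 3))     # per-subpixel luminance error
offset = np.random.normal(0.0, 0.5, (H, W, 3))    # per-subpixel black-level error

def correct_frame(frame: np.ndarray) -> np.ndarray:
    """Pre-distort the frame so the panel's response cancels out the mura."""
    # If the panel shows gain*x + offset, send x = (wanted - offset) / gain.
    corrected = (frame.astype(np.float32) - offset) / gain
    return np.clip(corrected, 0, 255).astype(np.uint8)

frame = np.full((H, W, 3), 128, dtype=np.uint8)   # a flat grey test frame
panel_input = correct_frame(frame)
```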

Interesting side note about putting this workload in the GPU or on an on-board chip (again, emphasis mine):

In certain HMD-related embodiments, mura correction processing in accordance with aspects of the present invention is performed host-side on the graphics processing unit (“GPU”). However, depending on the requirements of each particular implementation, such processing may be effected in silicon, in the headset itself, on a tether, or in the display panel electronics, for example. Such alternative implementations may provide greater image compressibility, which is important in situations involving limited link bandwidths (e.g., wireless systems).

Not that it should come as a surprise that Valve has at least toyed with the idea of a wireless system.


5 – Mar. 22, 2018 – OPTICAL SYSTEM FOR HEAD-MOUNTED DISPLAY SYSTEM

This one was a bit hard for me to understand. It talks about using double-layer Fresnel lenses in an HMD in a way that provides multiple “fields” (e.g., small FOV angles, large FOV angles) and allows for seamless transitions when looking between these zones. Figure 3 is of particular interest, I think, as it indicates how this can be used to create an HMD with a massively larger FOV than is possible with a single Fresnel layer.


6 – May 3, 2018 – USING PUPIL LOCATION TO CORRECT OPTICAL LENS DISTORTION

The title describes itself: using eye tracking to correct for optical distortions. The XTAL HMD already has automatic IPD adjustment based on its eye tracking, but this goes beyond that by configuring an optimal lens-distortion mapping based on where your pupils are in three-dimensional space in front of the lenses.
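A minimal sketch of what that could look like in software (entirely my guess at the mechanics; the calibration table and coefficients are invented): precompute distortion coefficients for several pupil positions, then interpolate using the tracked pupil:

```python
import numpy as np

# Hypothetical pupil-dependent distortion correction. The table maps pupil
# x-offset from the lens axis (mm) to radial distortion coefficients (k1, k2);
# all values are invented for illustration.
calibration = {
    -4.0: (0.22, 0.08),
     0.0: (0.20, 0.05),
     4.0: (0.23, 0.09),
}

def coeffs_for_pupil(pupil_x_mm: float) -> tuple[float, float]:
    """Interpolate distortion coefficients for the tracked pupil position."""
    xs = sorted(calibration)
    k1 = np.interp(pupil_x_mm, xs, [calibration[x][0] for x in xs])
    k2 = np.interp(pupil_x_mm, xs, [calibration[x][1] for x in xs])
    return float(k1), float(k2)

def distort_radius(r: float, pupil_x_mm: float) -> float:
    """Standard radial model r' = r * (1 + k1*r^2 + k2*r^4), pupil-aware."""
    k1, k2 = coeffs_for_pupil(pupil_x_mm)
    return r * (1 + k1 * r**2 + k2 * r**4)

print(distort_radius(0.8, pupil_x_mm=2.0))
```

A real implementation would presumably track the pupil in all three dimensions and regenerate the pre-warp mesh every frame, but the interpolation idea is the same.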


All in all, very much to be excited about. All of this leads me to believe that Valve has made the most graphically stunning VR HMD to date. It might not have eye tracking or wireless (yet?), but I do think it will at least have a substantially reduced screen-door effect and a wider FOV. What do you think?

submitted by /u/olemartinorg