Following the exciting HoloLens announcement from Microsoft yesterday, Dr Simon Taylor takes a look at what it all means.

Dr Simon Taylor, Founder & Research Director, Zappar

I have previously written extensively on the subject of “the ideal AR wearable display” in connection with the mysterious Magic Leap. Yesterday we heard of Microsoft’s interest in the same idea with the announcement of “HoloLens”. There is a little more information on this one, and it’s interesting to compare how close it comes to the ideal AR display I discussed in my previous blog post. Most of the facts and guesses about HoloLens in here are based on the 'Hands-On' description from Wired.

This device is an optical see-through augmented reality display - the user views the real world directly and the virtual content is combined with it optically. This seems to me like the way to go for any headwear that will be worn for an extended period. I identified three key areas of display technology for these devices that required improvement to build my ideal optical see-through AR headwear: field of view, correct focus behaviour, and controllable opacity.

1. Field of view

The 'Hands-On' article linked above suggests the depth camera has a field of view of 120 by 120 degrees, which is great. There is no indication of the field of view of the display itself, but I hope it would be similarly wide. Lacking any further information, I'll give them the benefit of the doubt and a cautious tick for wide field of view.

2. Correct focus behaviour

Correct focus behaviour is important for believable optical see-through AR - if you've got virtual content rendered on top of a nearby object, it is important that the virtual content is in focus when you are looking at that object (and likewise the virtual content should be out-of-focus if your eyes are focussed at a different depth). There's a lot of talk in the 'Hands-On' article about "tricking your brain into seeing light as matter" and getting light to be "the exact angle". This sounds like a description of a "light-field display" - a display capable of controlling the angle at which light rays are emitted, and thus able to simulate light sources at arbitrary depths. The article doesn't provide any technical details about the display - whether there are fixed virtual focus planes or whether it is completely controllable, for instance - but correct focus is presented as a key feature of the device, so I can award a reasonably confident tick in the "correct focus behaviour" category.
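The geometry behind that "exact angle" idea can be sketched with a toy calculation. This is my own illustration of the general light-field principle, not anything Microsoft has published about HoloLens: a display close to the eye must send a slightly different ray to each position across the pupil, so the bundle appears to diverge from a point at the intended virtual depth.

```python
# Toy 1-D model of why a light-field display needs per-angle control.
# All units in metres; the eye's pupil lies in the plane z = 0 and the
# display in the plane z = D. A virtual point at depth Z should send a
# ray to each pupil position; we compute where each such ray crosses
# the display plane. (Illustrative sketch only - not HoloLens's actual
# optics, which Microsoft has not described.)

def display_hit(point_x, point_z, pupil_x, display_z):
    """X-coordinate where the ray from (point_x, point_z) to the pupil
    sample (pupil_x, 0) crosses the display plane z = display_z."""
    t = display_z / point_z  # fraction of the way from pupil to point
    return pupil_x + t * (point_x - pupil_x)

D = 0.02                              # display 2 cm from the eye
pupil_samples = [-0.002, 0.0, 0.002]  # 4 mm pupil, three sample positions

# A virtual point 0.5 m away: each pupil sample needs a ray through a
# *different* display location - a conventional near-eye screen, which
# emits the same image towards the whole pupil, can't do that.
near = [display_hit(0.0, 0.5, p, D) for p in pupil_samples]

# A virtual point effectively at infinity: the rays are parallel, so
# the display offset simply tracks the pupil sample position.
far = [display_hit(0.0, 1e9, p, D) for p in pupil_samples]

print(near)  # three distinct display positions -> eye must focus at 0.5 m
print(far)   # offsets ~equal to the pupil samples -> rays are parallel
```

The spread in `near` across the pupil is exactly what drives the eye's accommodation (and natural defocus blur) to the virtual depth - the behaviour a fixed-focus screen cannot reproduce.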

3. Controllable opacity

One of the important elements of my ideal device was the ability to control the opacity of virtual objects across the display. I saw this as a key requirement for a device that could be always-worn and offer a full spectrum of experiences, from small semi-transparent notifications (with an undimmed view of the rest of the world) to fully immersive VR experiences, and lots of things in-between. It appears HoloLens does not offer this capability. The 'Hands-On' article mentions that "buttons allow you to adjust the volume and to control the contrast of the hologram", which suggests to me a single global value for the mixing of the real and virtual scenes. I suspect the actual mixing of light is still strictly additive, meaning there is no ability to block specific light rays from the real world - just to darken everything globally by a certain amount. That makes it impossible, for example, to render a black virtual object on a real white table: to do that you need to selectively block the light rays from the table that would be occluded by the virtual object. Without that feature I'm not going to give Microsoft a tick for "controllable opacity" in this case.
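To see why purely additive combining rules out solid dark objects, here's a toy per-pixel model. The blend equations are standard graphics compositing, and the per-pixel opacity function is hypothetical - my sketch of the capability I'd want, not a published HoloLens feature:

```python
# Toy per-pixel model of optical see-through compositing.
# Intensities are in [0, 1]; 1.0 is a bright white table, 0.0 is black.

def additive(real, virtual, global_dim=1.0):
    """Additive optics: the display can only add light on top of a
    (possibly globally dimmed) view of the world. The single
    `global_dim` knob models a 'contrast' control like the one the
    Wired piece describes."""
    return min(global_dim * real + virtual, 1.0)

def occluding(real, virtual, alpha):
    """Hypothetical per-pixel opacity: block a fraction `alpha` of the
    real-world light behind the virtual object before adding it."""
    return min((1.0 - alpha) * real + alpha * virtual, 1.0)

white_table = 1.0
black_object = 0.0

# Additive-only: the black object vanishes entirely on a white table.
print(additive(white_table, black_object))       # -> 1.0 (still white)

# Global dimming darkens the object *and* the whole scene equally.
print(additive(white_table, black_object, 0.5))  # -> 0.5 everywhere

# Per-pixel opacity renders a genuinely black object on the white table.
print(occluding(white_table, black_object, 1.0)) # -> 0.0
```

The additive case shows why dark virtual content can only ever be as dark as the real-world light behind it, however the global contrast is set.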

Is 2 out of 3 enough?

Magic Leap talk of “Cinematic Reality” and a device where virtual objects are indistinguishable from real-world ones. That is only going to be possible with controllable opacity across the display. Microsoft are perhaps going to beat them to market, but leaving out this capability will in my opinion severely limit the utility of the device. By referring to the images as “holograms”, Microsoft are setting expectations based on the ephemeral, semi-transparent “holograms” people recognise from science fiction films. Without the ability to make virtual objects appear solid it’s hard to create believable and immersive experiences. It is no coincidence Microsoft did the demos in specific underground rooms set up for the various scenarios, with presumably plain backgrounds where their instructional videos are going to appear. It is no coincidence the carpet in the Mars simulation room is black, allowing the virtual Mars surface to have more contrast to play with.

In my opinion the inability to render solid virtual objects in the real world is a bit of a show-stopper for immersive AR experiences, but if it's possible to adjust the contrast so that none of the real world is visible, then it becomes an interesting VR device with a light-field display and perfect head-tracking powered by the on-board depth sensor. For me the Mars scenario would be more compelling as a full VR experience anyway - surely it's better to see the martian landscape stretching into the distance rather than sitting on top of a set of drawers in my office - but one where I am still able to walk around and crouch down for a closer look at rocks. The only remaining problem is how to stop people walking into real objects during their exploration, but I expect there are better ways to render those cues into the virtual world than making all of the content semi-transparent and thus massively reducing the feeling of immersion and presence in the martian world.

So 2 out of 3 isn't bad, but it won't produce a device capable of the full range of experiences I had in mind when discussing the ideal wearable display. However, HoloLens does offer a compelling set of features, and I'm excited to get hands-on with the device when it is released to developers.