Hololens tricks everyone should know

We started our preliminary work on mixed reality experiences last year, and we now feel confident enough to share the knowledge and experience we gained developing on Windows Holographic, especially with Hololens. You can consider these my Hololens notes 101.

Hololens tests in the Arts Décoratifs museum, Paris

Hololens is less powerful than an iPhone

3D @60 fps is King

Trust me, you don't want your experience to run at less than a stable 60 frames per second. Even 59 fps is not good enough, and will make the experience somewhat disappointing. Also, when you let other people try your experience and stream the video to your laptop, the framerate drops, so you'd better optimize your assets and your code.
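If you want to see at runtime whether you are actually holding that budget, a tiny watcher script does the job. Here is a minimal sketch that logs every frame that blows past the ~16.7 ms budget of 60 fps; the 10% tolerance is an arbitrary choice.

```csharp
using UnityEngine;

// Minimal sketch: warn whenever a frame takes longer than the 60 fps budget
// (~16.7 ms), so dropped frames show up in the debug log while you test.
public class FrameBudgetWatcher : MonoBehaviour
{
    private const float BudgetSeconds = 1f / 60f;

    void Update()
    {
        // Time.unscaledDeltaTime is the real duration of the previous frame.
        if (Time.unscaledDeltaTime > BudgetSeconds * 1.1f) // 10% tolerance (arbitrary)
        {
            Debug.LogWarning("Dropped frame: " + (Time.unscaledDeltaTime * 1000f).ToString("F1") + " ms");
        }
    }
}
```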

Ban complex 3D models

The graphics processing unit inside the Hololens device is not powerful enough to run super-awesome-demos-from-the-future, sorry. You need to decide thoughtfully which holograms deserve detailed visuals, and simplify the others. So ask your 3D artist to design objects with this low-poly rule in mind: the device won't display a 250K-poly scene smoothly. We're using Unity and C# here, not Unreal Engine and native code (unfortunately, if you ask me). You can also be smart and leverage LoD algorithms: display a low-poly version of your mesh until the camera crosses a specified distance threshold, then replace it with a much more detailed model.
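As a rough illustration of the idea (Unity's built-in LODGroup component does this for you, with cross-fading on top), here is a minimal distance-based swap. The model references and the 3-meter threshold are placeholders.

```csharp
using UnityEngine;

// Minimal sketch of a distance-based LoD swap: show the detailed mesh only
// when the camera is close, the low-poly one otherwise.
public class SimpleLodSwitcher : MonoBehaviour
{
    public GameObject detailedModel;   // high-poly version, shown up close
    public GameObject lowPolyModel;    // low-poly version, shown from afar
    public float switchDistance = 3f;  // threshold in meters (placeholder value)

    void Update()
    {
        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        bool close = distance < switchDistance;

        detailedModel.SetActive(close);
        lowPolyModel.SetActive(!close);
    }
}
```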

Avoid complex shaders

As said before, Hololens doesn't have the graphics horsepower we all need and deserve. That's why the HoloToolkit provides custom Physically Based Rendering shaders optimized for the device. If you feel like it, you can also write your own shaders to reduce the number of per-pixel operations.

Be wise with animations

Animations will slow down the device and give you a monstrous headache. Limit yourself to simple animations and transitions, like the extremely traditional but efficient fade and cross-fade.
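A fade, for instance, is just a coroutine animating a material's alpha. This sketch assumes the material's shader exposes a _Color property and supports transparency.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of a fade: animate the renderer material's alpha over time.
// Assumes the shader exposes a "_Color" property and renders transparency.
public class HologramFader : MonoBehaviour
{
    public float duration = 0.5f;

    public IEnumerator FadeTo(float targetAlpha)
    {
        Material material = GetComponent<Renderer>().material;
        Color color = material.color;
        float startAlpha = color.a;

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            color.a = Mathf.Lerp(startAlpha, targetAlpha, t / duration);
            material.color = color;
            yield return null; // wait one frame
        }

        color.a = targetAlpha;
        material.color = color;
    }
}
```

Call StartCoroutine(fader.FadeTo(0f)) to fade a hologram out, and FadeTo(1f) to bring it back.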

Don’t put too many holograms

You will quickly realize how narrow the holographic frame is. The result is impressive, but the “screen” is really tiny. You can't and shouldn't cram many holograms into it; the only outcome is a cluttered environment. But that's exactly why you should leverage Windows Holographic's spatial awareness APIs! Nature gave us space, and computers finally understand our environment. Let's use that capability to place holograms all around us. If you disagree, just have a look at Pokémon Go. 🙂

You are probably designing your experience in a spacious office. Don’t forget the place where your app will be used might be crowded (I’m thinking of museums, stores, tech events). Keep space for humans — don’t forget them!


Hire a 3D designer & developers who love math.

Hololens is less powerful than Kinect

Limited gestures

There's just a single gesture available in the SDK right now: AirTap. Your designers can start thinking today about all the potential gestures that might (hopefully) come in the near future, but for now you have to deal with it.
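Hooking up AirTap in Unity goes through the GestureRecognizer class. A minimal sketch follows; note that on more recent Unity versions the namespace is UnityEngine.XR.WSA.Input and the event has become Tapped with a TappedEventArgs parameter, and the Debug.Log is a placeholder for your own handler.

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input on newer Unity versions

// Minimal sketch: listen for the air tap gesture and react to it.
public class TapListener : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Awake()
    {
        recognizer = new GestureRecognizer();
        // We only care about the tap; ignore hold, manipulation and navigation.
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            Debug.Log("Air tap detected"); // placeholder for your own handler
        };
        recognizer.StartCapturingGestures();
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```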

Sensor issues

The environment is scanned in 3D using Kinect-like cameras. You should rely on this mesh to place your holograms procedurally, but don't count on it being really precise. It won't detect all-white or all-black surfaces, nor tiny objects like headphones … or VR devices. 🙂
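If the spatial mapping mesh has colliders on it (the HoloToolkit SpatialMapping prefab sets this up for you), procedural placement can be as simple as a raycast from the head. This is only a sketch: the prefab, the layer and the keyboard trigger are placeholders, and in practice you would wire the placement to an air tap.

```csharp
using UnityEngine;

// Minimal sketch: cast a ray from the user's head against the scanned
// environment mesh (assumed to carry colliders on a dedicated layer)
// and drop a hologram on the first surface hit.
public class SurfacePlacer : MonoBehaviour
{
    public GameObject hologramPrefab;      // placeholder prefab to place
    public LayerMask spatialMappingLayer;  // layer holding the scanned-mesh colliders

    void Update()
    {
        Transform head = Camera.main.transform;
        RaycastHit hit;

        // Only place on surfaces the scan actually captured; remember the mesh is coarse.
        if (Physics.Raycast(head.position, head.forward, out hit, 10f, spatialMappingLayer))
        {
            if (Input.GetKeyDown(KeyCode.Space)) // stand-in trigger; use an air tap in practice
            {
                // Align the hologram's up axis with the surface normal so it sits flat.
                Instantiate(hologramPrefab, hit.point, Quaternion.FromToRotation(Vector3.up, hit.normal));
            }
        }
    }
}
```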

Gaze relies on the head rotation sensors and obviously produces significant noise. HoloToolkit provides a smoothing algorithm that filters this noise out. Even so, I would recommend displaying the gaze cursor only when necessary, in scenarios that involve tapping on buttons (or anything else; you can do everything with holograms!).
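The filter itself can be as simple as exponential smoothing. The following is not HoloToolkit's exact implementation, just a sketch of the idea; the smoothing factor and the distances are arbitrary, and the cursor object is assumed to have no collider of its own.

```csharp
using UnityEngine;

// Minimal sketch of gaze smoothing: the cursor follows an exponentially
// smoothed version of the head's forward direction, which hides most of
// the sensor jitter.
public class SmoothedGazeCursor : MonoBehaviour
{
    [Range(0f, 1f)]
    public float smoothing = 0.25f;    // higher = snappier, lower = smoother
    public float defaultDistance = 2f; // cursor distance when nothing is hit

    private Vector3 smoothedForward;

    void Start()
    {
        smoothedForward = Camera.main.transform.forward;
    }

    void Update()
    {
        Transform head = Camera.main.transform;

        // Exponential smoothing of the gaze direction.
        smoothedForward = Vector3.Slerp(smoothedForward, head.forward, smoothing);

        RaycastHit hit;
        if (Physics.Raycast(head.position, smoothedForward, out hit, 10f))
        {
            // Stick the cursor to the surface it hits.
            transform.position = hit.point;
            transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            // Nothing hit: float the cursor at a default distance.
            transform.position = head.position + smoothedForward * defaultDistance;
            transform.rotation = Quaternion.LookRotation(smoothedForward);
        }
    }
}
```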


Working on Hololens is frustrating because of the many, many limitations of today’s hardware. BUT, it shows you how cool the future of computing is. It’s finally happening and we can’t wait to see what the community will build for Windows Holographic devices.

Thomas N