Creating and Sharing Brand Stories

Holographic Moments


The Trials and Tribulations of Spectator View

Hi, I’m Jeff. You may remember me from my post about transitioning from Unity development to HoloLens development. This time I get to talk about the ins and outs of making Spectator View work.

A holographic experience where only one person participates makes that person seem delusional. However, if you introduce multiple people into the mix, the experience becomes amusing. One of the challenges of our Build presentation was showing the audience what our user was seeing from a third-person perspective. Many people new to the holographic world assume this feature comes automatically, but quite a bit of engineering goes into it.

Before we start, you should know… we name our HoloLenses.

First, a really important thing to be aware of: in order for one HoloLens (Martha) to see what another HoloLens (Apollo) is seeing, the two need to communicate with each other over a network. You might think that if an app is multi-user it would already have this capability, but that is not the case, and our Praeses app for building code inspection is not multi-user in its core functionality. This is a special kind of networking: one HoloLens (Martha) acts purely as a spectator while a high-end PC captures the content. It is extremely important to decide early in development whether the app will be connected to a Spectator View setup, because you need to be mindful of your code architecture to account for the networking capability. To capture our Spectator View, we used Microsoft’s HoloToolkit Sharing Service because it has supporting documentation on how the pieces of the setup (a HoloLens rig, a capture PC, and a DSLR camera) play together.

Photo credit: Microsoft

Second, there needs to be a way for the functions to sync and talk to each other between the HoloLenses (Martha and Apollo). Independent of the HoloToolkit Spectator View is Unity’s built-in networking layer, called UNet. While it is possible to build the entire experience using solely the HoloToolkit Sharing Service, I personally found UNet’s documentation (plus a number of online tutorials) easier to digest than HoloToolkit’s. And surprisingly, we discovered that UNet can work alongside the Sharing Service unencumbered. Interestingly, since our development, Microsoft has migrated to using only UNet for Spectator View.

Initially we were very worried that our code would not work with networking, since our functions weren’t written with networking in mind from the beginning (which, as mentioned above, is important, and which we had to learn the hard way), but we figured out a workaround. Based on our storyboards, we determined that only a limited number of the app’s features needed to be shot with Spectator View. Jumping from this idea, we realized that instead of syncing the functions, we could just sync the user input. So instead of having our devices trade function state with each other, they simply traded users.
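The core of this workaround is that both devices run identical app logic locally, and only the raw input events travel over the wire. Here is a minimal conceptual sketch (in Python, not our actual Unity/C# code); the event fields (a gaze direction and a tap flag) and the JSON serialization are assumptions for illustration only:

```python
import json

# Each device runs the same deterministic app logic locally.
# Instead of replicating application state, the actor's device
# broadcasts only its raw input events; the spectator replays them.

def encode_input_event(gaze_dir, tapped):
    """Serialize one frame of user input (gaze ray + tap) for the wire."""
    return json.dumps({"gaze": gaze_dir, "tap": tapped}).encode("utf-8")

def decode_input_event(payload):
    """Reconstruct the input event on the receiving (spectator) device."""
    event = json.loads(payload.decode("utf-8"))
    return event["gaze"], event["tap"]

class App:
    """Toy stand-in for the app: both devices run identical logic."""
    def __init__(self):
        self.selected = False

    def handle_input(self, gaze_dir, tapped):
        # A tap while gazing roughly forward "selects" the hologram.
        if tapped and gaze_dir[2] > 0.9:
            self.selected = True

# The actor's device and the spectator apply the SAME input event,
# so their local states stay in step without ever syncing state itself.
actor, spectator = App(), App()
wire = encode_input_event([0.0, 0.0, 1.0], True)   # sent over the network
actor.handle_input([0.0, 0.0, 1.0], True)
spectator.handle_input(*decode_input_event(wire))
```

Because the logic on each device is deterministic, replaying the same inputs yields the same state on both ends; the moment the logic diverges (or an event is dropped), the devices drift apart, which is exactly the failure mode mentioned later.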

Normally with a HoloLens, the gaze cursor and tap are determined by the user currently wearing the device, but instead we were telling Martha (the HoloLens attached to the camera), “Hey, Apollo is the HoloLens you are using to base your inputs (tap and gaze) off of.” This way, our functions existed locally on each device, and all Martha was doing was registering inputs from the actor (wearing Apollo). This greatly simplified what we were trying to achieve in this Spectator View video.
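Telling Martha to read Apollo’s inputs is easiest when the input source sits behind a small abstraction, so the app code never knows whether an input came from the local wearer or from the network. This is a hypothetical sketch of that idea in Python; the class and method names (`LocalInput`, `NetworkInput`, `poll`) are invented for illustration and are not HoloToolkit or UNet APIs:

```python
# Sketch of the "swap the input source" idea. The app only ever
# calls input_source.poll(), so swapping sources is transparent.

class LocalInput:
    """Inputs from the wearer of this device."""
    def __init__(self, gaze, tapped):
        self._gaze, self._tapped = gaze, tapped

    def poll(self):
        return self._gaze, self._tapped

class NetworkInput:
    """Inputs received over the network from another device (e.g. Apollo)."""
    def __init__(self):
        self._latest = ([0.0, 0.0, 1.0], False)

    def on_packet(self, gaze, tapped):
        # Called whenever a networked input event arrives.
        self._latest = (gaze, tapped)

    def poll(self):
        return self._latest

class Device:
    """App logic sees only `input_source`; it never knows who supplied it."""
    def __init__(self, name, input_source):
        self.name, self.input_source = name, input_source

    def update(self):
        gaze, tapped = self.input_source.poll()
        return f"{self.name}: gaze={gaze} tap={tapped}"

# Apollo uses its wearer's input; Martha is told to use Apollo's instead.
remote = NetworkInput()
martha = Device("Martha", remote)
remote.on_packet([0.1, 0.0, 0.99], True)
print(martha.update())
```

The design benefit is that the app’s functions stay local to each device, exactly as described above; only the thin input layer knows anything about the network.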

In conclusion, please remember that if you are trying to promote an app using Spectator View, it doesn’t work right out of the box. You will have to dig into the project files and add functionality. Depending on how simple your code architecture is, you might get lucky and a solution like our input syncing might work, but other times it might not. I personally think input syncing is a good workaround, but not necessarily a solution, as you may be dealing with lag, and it is possible for the two devices to fall out of sync in their states (e.g., if a keyboard opened on one but not the other). Finally, don’t forget that a lot of time was spent troubleshooting on shoot day; between hardware and software, it took us roughly 12 hours! Because of this, Jenna and I (as the engineers) had to sit through the shoot along with the rest of the typical shooting crew. Oh, those were fun times! And they resulted in a lot of bonding between us…
