Frequently Asked Questions

You asked, we answered. Here are a few of the recurring questions we gathered throughout our 281%-funded Kickstarter campaign.

ZapBox FAQ

Isn't this just Google Cardboard?

Take a look at the comparison chart to see the differences.

What makes ZapBox unique is that its additional cardboard components and underlying software offer significantly more guaranteed functionality than the existing Google Cardboard ecosystem, empowering developers to create exciting new content experiences that aren’t possible with Cardboard alone.

With Cardboard, the only guaranteed means of input is head rotation - many headsets don’t have any form of input button. ZapBox, on the other hand, offers full 3D tracking of both the user and a pair of handheld controllers, along with integrated MR rendering capabilities.

I've heard of Augmented Reality, how is "Mixed Reality" different?

If you think MR sounds a lot like Augmented Reality (AR), you’d be right. We’ve used the term Mixed Reality to describe ZapBox primarily because the experiences it delivers are similar to those shown on HoloLens and Magic Leap, and Mixed Reality is the term they use to describe their content. AR is popular on phones, but the more immersive, larger-scale experiences delivered through headsets feel different enough to warrant a new term.

If that all sounds like wishy-washy marketing speak to you, a more technical distinction between MR and AR can be made by considering the contextual relevance of the content in the real-world environment. AR is all about context - Pokémon GO characters appear in relevant places in the world, and visual tracking enables things like maintenance instructions overlaid on a specific model of printer. In MR, the real world is used as a stage for virtual content, but the context is less relevant. Content appears anchored in the real world to allow natural exploration and interaction, but it doesn’t really “belong” there - mini golf doesn’t really belong in our office, but it’s sure fun to play!

AR gives a great sense of connection between the real and virtual, but you need to go in search of the objects or places with associated content. With ZapBox’s MR approach, the content can come to you! This makes browsing, sharing and experiencing content much easier for users. For hobbyist developers, the MR paradigm allows content to be shared with the audience of other ZapBox users without them needing to print out or find specific “target images” for your content.

We’re convinced there’s a big future for both AR and MR, and ZapWorks is an ideal platform for creating content for either paradigm.

I'm a developer. Do I have to use ZapWorks to create content for ZapBox?

Initially, ZapWorks Studio will be the only supported tool for publishing ZapBox content. It supports importing assets made in other tools, such as videos or animated 3D content (from tools like Blender, 3DS Max or Maya).

ZapWorks and the Zappar platform are perfectly suited to creating and publishing short-form experiences that don’t require an app submission for each piece of content. This is a significant benefit both to users wanting to discover content and to developers wanting to share what they’ve built. We believe this integrated approach will allow a ZapBox ecosystem to develop and thrive.

Although it is theoretically possible to develop plugins that expose the underlying technology to other tools (e.g. Unity or Unreal), the resulting experiences would need to be distributed as standalone apps. This would add significantly more friction to the user experience, and so is not currently a priority.

Do I need so many pointcodes in the world? They look quite dense in your demos.

The videos are recorded live to give a true impression of the performance ZapBox offers, so they represent the current state of our development.

Using the wide-angle lens adapter increases the camera’s field of view and hence reduces the density of codes required to robustly track the same area. There are also continuing improvements both to the tracking quality of individual codes and to sensor fusion with the device’s accelerometer and gyroscope, which we expect to provide solid tracking with fewer codes in view.
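To give a feel for why a wider field of view helps (a back-of-the-envelope sketch with illustrative numbers, assuming an idealised pinhole camera with a square field of view - not actual ZapBox optics):

```python
import math

def visible_area(fov_degrees: float, distance: float) -> float:
    """Area of the footprint seen by an idealised pinhole camera with a
    square field of view, looking straight down from `distance` metres.
    Illustrative only - real lenses distort and crop differently."""
    half_angle = math.radians(fov_degrees) / 2
    side = 2 * distance * math.tan(half_angle)
    return side * side

# Hypothetical figures, not real device specs:
narrow = visible_area(60, 1.0)   # a typical phone camera
wide = visible_area(100, 1.0)    # with a wide-angle adapter
print(f"Wide-angle sees {wide / narrow:.1f}x more area")
```

Since the visible area grows with the square of tan(fov/2), the same set of codes covers a much larger tracked region, so fewer codes are needed overall.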

Another possibility we will investigate is using other natural features in the image to constrain the user’s position, either through visual odometry or full SLAM.

Finally, we are also planning to investigate blurring or in-filling the codes in the camera texture, in which case you won’t even see them in the experience itself (if they’re on a solid background...).
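As a rough sketch of the in-filling idea (a naive approach for illustration, not Zappar’s actual implementation): pixels covered by a code can be replaced with an average of the surrounding background, which works well precisely when the code sits on a solid colour.

```python
def infill(image, mask):
    """Replace masked pixels with the mean of already-known neighbours,
    growing inwards until the whole masked region is filled.
    `image` is a list of rows of grey values; `mask` marks code pixels
    as True. Returns a new image; the input is left untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    todo = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while todo:
        filled = set()
        for y, x in todo:
            neighbours = [out[j][i]
                          for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= j < h and 0 <= i < w and (j, i) not in todo]
            if neighbours:
                out[y][x] = sum(neighbours) / len(neighbours)
                filled.add((y, x))
        if not filled:
            break  # mask covers everything reachable; nothing to copy from
        todo -= filled
    return out
```

A production version would more likely use a dedicated inpainting routine such as OpenCV’s `cv2.inpaint`, which does a more sophisticated version of the same fill-from-the-boundary idea.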

We think they look cool though :)