Tim Bajarin was right (cf. article): 2013 is turning into the year that Augmented Reality (“AR”) went mainstream.
Between ourselves at Zappar Labs and the rest, there are some really exciting AR initiatives going on across publishing, retail, packaging, events and consumer products - building a bridge between physical objects and digital devices. More consumers are having more positive experiences using their smartphones and tablets to scan and unlock content: they’re getting bite-size entertainment rather than just a web link served up for their time and effort. A new visual language is being created, connecting people with ‘the secret life of things’. It’s a really exciting time and we’re only seeing the tip of the iceberg of what’s possible. But that same iceberg is going to cause some AR companies to spring a serious leak soon. OK, terrible analogy. So what’s our point?
Over the last three years at Zappar, what we’ve seen over and over from our partners is a desire to:
- Make multiple images that all look similar have AR experiences attached to them - for integrated multi-media and international campaigns (the same image may be cropped or scaled differently or have different localised copy); or
- Have the same execution be able to display different experiences - for different audiences/media environments; or
- Make sure the same execution only works for their brand, region or category. Take the example of the iconic Superman Shield or Pepsi Max logo - you don’t necessarily want it displaying a single AR experience in every region and on every product (real or counterfeit!).
The problem comes with marker-less image recognition and server-side look-up solutions. You basically have a situation reminiscent of the dawn of the internet, where it’s a land grab for images (as it was for registered domain names) so that your AR content is attached to a target. Use of stock or iconic photography throws another spanner in the works (who owns the AR space for The Blue Marble?). OK, there’s some refinement you can do by selecting territories and adding location data, but ultimately there’s no easy way to distinguish between two identical images in this scenario from a single app. This model works OK for publishing, where today’s news is tomorrow’s chip paper. But that’s a pretty limiting world-view and an increasingly crowded space that will soon be commoditised.
At Zappar Labs we talk about this in terms of being a ‘Global Image Namespace’ issue. And we have a solution: the beautifully formed Zapcode. A simple brand identifier that tells a user that an image or product has AR content attached to it, whilst also getting round the global image namespace problem. With over 4 billion possible Zapcodes we’re not going to run out any time soon (or if we do, that’s a nice problem to have!). With Zapcodes you can have two identical images with different content, or similar ones with the same content. You can track the performance of the same image in different media.
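To make the namespace idea concrete, here’s a minimal sketch of how a code-based look-up differs from image recognition. Everything here is hypothetical illustration, not Zappar’s actual implementation: the registry structure, the region key and the 32-bit code space (chosen only because 2^32 matches the “4 billion” figure) are all assumptions.

```python
from typing import Optional

# Hypothetical sketch - not Zappar's real API. Because a zapcode is an
# explicit identifier, two visually identical images can carry different
# codes and so resolve to different experiences, which pure image
# recognition cannot distinguish.

CODE_BITS = 32  # assumed: 2**32 = 4,294,967,296 distinct codes

# Illustrative registry mapping (zapcode, region) -> AR experience.
registry = {
    (1001, "UK"): "superman-shield-uk-promo",
    (1001, "US"): "superman-shield-us-promo",    # same code, different region
    (2002, "UK"): "superman-shield-uk-contest",  # same artwork, different code
}

def resolve(zapcode: int, region: str) -> Optional[str]:
    """Return the experience attached to this code in this region, if any."""
    return registry.get((zapcode, region))

print(2 ** CODE_BITS)        # 4294967296 possible codes
print(resolve(1001, "UK"))   # superman-shield-uk-promo
print(resolve(2002, "UK"))   # superman-shield-uk-contest
```

Unregistered codes simply resolve to nothing, so the same printed code can be scoped to one brand, region or category without any ambiguity between look-alike images.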
Better still, your Zapcode can roam free. So you can attach content to the code if you like and put it on whatever you want – print, packaging, business cards, greetings cards, receipts – heck you can tattoo it on your arm if you’re that way inclined.
Back to the iceberg analogy. At Zappar we tend to take the view from the bridge and plan for the long term. Augmented Reality is exciting, but for it to scale properly - with tools open to everyone at any level, and unbounded - it needs a different way of thinking: beyond marker-less recognition, and beyond cloud-served image-retrieval infrastructures for AR.
2013 may be the year that people start seeing Augmented Reality as more than a gimmick, but for truly mass adoption we need solutions that can scale into 2014, 2015 and beyond. We believe the Zapcode is just that solution…