Convergence in XR

Field Notes #1

After the LA Film Festival, LEAPcon and new launches in the AR space such as the Torch app, as well as the announcement of the Oculus Quest, I realized that this is a time of convergence in XR. There are some important trends I see merging in the near future within XR tech. These technologies seem separate, but they are ever more closely related, and increasingly one will build on top of the other to give more power to XR storytellers and content creators.

Persistent multi-user city scale depth maps

Magic Leap and others have talked recently about how all of us can share scanned depth data, and about how, when you take this idea far forward, you gradually create a consistent three-dimensional understanding of depth in the world around us. This concept is also central to a recent beta AR app promising 'mixed reality with depth, occlusion and physics'.

A million inhabitants of a city would very quickly generate enough collective depth data to make the more ambitious dream pronounced at LEAPcon possible: a city-scale multiverse, where Augmented Reality events can take place across a city-wide surface and beyond.
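To make the idea of collective depth data concrete, here is a minimal, purely illustrative sketch (none of these names come from any real SDK): many users contribute scanned depth points, and a shared voxel grid accumulates them, so that cells observed by several independent users become trusted, persistent geometry. Real shared world models are of course far more sophisticated.

```python
from collections import defaultdict

VOXEL_SIZE = 0.5  # metres per voxel cell (arbitrary for this sketch)

def to_voxel(point):
    """Quantize a world-space (x, y, z) point to a voxel grid cell."""
    return tuple(int(c // VOXEL_SIZE) for c in point)

class SharedDepthMap:
    """Accumulates depth observations from many users into one grid."""

    def __init__(self):
        # voxel cell -> number of independent observations of that cell
        self.observations = defaultdict(int)

    def contribute(self, user_points):
        """Merge one user's scanned depth points into the shared map."""
        for p in user_points:
            self.observations[to_voxel(p)] += 1

    def confident_cells(self, min_obs=2):
        """Cells seen often enough to be trusted as real geometry."""
        return {v for v, n in self.observations.items() if n >= min_obs}

# Two users scanning overlapping parts of the same street:
shared = SharedDepthMap()
shared.contribute([(0.1, 0.2, 0.3), (1.2, 0.0, 0.4)])
shared.contribute([(0.2, 0.1, 0.4), (5.0, 0.0, 1.0)])
print(len(shared.confident_cells()))  # 1: only the overlapping cell
```

The point of the sketch is the aggregation step: no single user needs to scan the whole city, because overlapping contributions corroborate each other.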

“Think of the city having a form of sentience and awareness,” said Rony Abovitz at LEAPcon.

The reality of this happening feels to me on par with Google Maps: the ability to reliably, and collectively, provide depth data in public spaces for developers to build AR products and experiences on top of is a literal magic leap forward in adding a virtual layer to the real world.

P.S. This is relevant for VR too, though to a lesser degree than for AR. But both will benefit!

Seamless Multiplatform Collaboration & Publishing 

The real power behind the recent developments in the XR space, including Magic Leap's 'better than expected' launch last week, is the rise of the game engines Unity and Unreal. Both have made huge strides these past years, and they deserve much of the credit for the increasing power being packed into the new XR headsets coming to market.

The Torch app shows us the way in terms of collaborative creative work in Augmented Reality, in the form of a shared prototyping tool. Their powerful motto: Design. Prototype. Collaborate.

It will be interesting to see how WebXR fits into this. I am looking at you, 8th Wall.

Simple AI & Machine Learning on your phone

Snapchat recently built computer vision into its app, and although the implementation is simple (object recognition), it is a powerful signifier of where the XR space is going. Long before that, Google launched its Machine Learning Unity plug-in. In both cases, we are seeing formerly server-side magical powers running on a humble smartphone, another huge leap with a whole range of implications.
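One reason ML can run on a humble smartphone at all is 8-bit quantization: shrinking 32-bit float model weights to int8 cuts memory and compute roughly fourfold at a small cost in precision. Here is a minimal, framework-free sketch of the idea (the function names are mine, not any real mobile ML API):

```python
def quantize(weights):
    """Map float weights onto int8 values in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in q_weights]

weights = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Every restored weight sits within one quantization step of the original.
max_error = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
print(max_error <= scale)  # True
```

Production mobile runtimes add per-channel scales, zero points and integer-only arithmetic on top of this, but the core trade-off (memory for precision) is the same.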

Realistic AI-human interaction

Magic Leap showed off a demo of Mica, a humanlike AI assistant. The Mica demo involves a digital avatar sitting behind a table and interacting directly, non-verbally, with the viewer.

I walked into a physical room and sat in a chair. Mica was sitting at the table in the same room. She smiled at me and looked at me. I was struck that she wasn’t just looking at me. She was looking in my eyes. She tilted her head from side to side.

From this VentureBeat article.

The experience felt incredibly realistic to me, even if the human skin textures are still not completely there and the hair dynamics were limited, most likely due to the device's processor. The power in the Mica demo lies in eye tracking, occlusion, and her ability to interpret your actions in real time. Her ability to sit believably in a chair behind a table, from the viewer's vantage point, reinforces the sense that she is in your space, not just a digital layer on top of it. But most importantly, what this experience really establishes is a close-to-perfect connection between a virtual character and a human.

It would be great if, in the future, Mica moves slightly away from the pixie-dream-girl stereotype, but I do think a big leap was made, and it will have many repercussions for human-computer interaction, the traditional social structures of our lives, and how humans approach work in the future.

The project also reminds me of the excellent Baby X project, which came out of the University of Auckland a few years ago. I remember Mark Sagar's impressive talk at FMX in Stuttgart, and his discussion of a symbiotic relationship between humans and machines.

Cloud rendering for almost anything

As mentioned above with regard to AI, there is a broader trend toward distributed cloud rendering for almost everything, moving away from dependency on the local processor and hard drive of your device.

Slowly, I am starting to see how these very different developments in the XR technology stack could come together on a mobile device, possibly running simply within your browser, in the very near future, creating a persistent, intelligent and responsive virtual layer on top of, and embedded into, the real world.

Thank you for reading.