Google Tango out-of-box experience lets you step into an alternate reality

MediaMonks

Google’s Tango platform has been tantalizing developers and content-makers for a few years now. Tango is essentially a super-powered hardware-software system with sophisticated depth-mapping and tracking capabilities baked in.

In marketing-speak: it’s an integrated platform for creating impressive AR experiences in the palm of your hand.

This case study from MediaMonks is a fantastic introduction to what’s possible. To quote the video:

…we created a wonderful world that lingers under the fabric of our reality. This universe is revealed by creating a point cloud of your surroundings, which is then transformed into a virtual world you can explore.

Movement triggers everything in the experience: “As you walk, plants sprout. As you look at them, they blossom. Their seeds are drawn to you and eventually grow into a portal.”

Exciting stuff. Learn more about Tango here.

From the creator:

Together with Google, we created a mixed reality experience that introduces users to the features and capabilities of Tango, Google’s new technology platform for computer vision. The app demonstrates Tango’s technical capabilities by combining depth-scanning, motion-tracking and augmented reality in an interactive showcase. By scanning your surroundings with a Tango-powered device, we create a virtual world that interacts with the real world in real time. The experience is packed with virtual flora and fauna that respond to you and objects around you, encouraging users to move around and explore the world with Tango.


  • stucrmnx120fshwf

    I was a little disappointed by the Keystone video: it shows one environment or the other, not integration of the real with the virtual, so you could bump into objects in the real world whilst exploring the virtual one. Augmented reality lets you avoid falling over and breaking your neck. A gyroscopically reactive VR alone might help avoid the nausea induced by disparate internal and external signals to the brain, but if you’re banging into objects, it’s not going to sell. Take the real from the sensors, change it a little, and add the unreal to the environment. For starters, you could reduce the sensory clutter to the brain by simplifying the environment to its essentials, No. 1 being collision avoidance, then add interactive objects, maps, etc.

    AR is done best in an area with space, for example a park. An office with objects in the way constricts movement, whereas an empty warehouse could become a fantasy land and, with multiple users, allow interactive modeling. Little or nothing in a space allows for the insertion of large moving objects; in time we’ll get better at this and fit it into a headset, thus freeing both hands for glove interaction with the artificial objects. At QHD, the new Lenovo phablet Pro is pretty good for a Dreamcast headset, and it senses its environment. Three people in a large enough space, with gloves, could have a lot of fun, or get a lot of modeling done fast.