Real-world projection mapping was all the rage in 2011. Many a building was seen breaking apart or getting filled up with geometric shapes. This technique is executed similarly to how projection mapping is done in 3D software: A flat animation is created that, when projected onto geometry, wraps around the nooks and crannies for added believability.
One of the most important details in projection mapping is knowing where your audience or camera is going to be and how much they will be moving around. This is because there is often some corner-pinning or other trompe l’oeil trickery involved. As a result, most projection mapping projects either deal with a limited amount of camera movement, or some amount of baked-in perspective that doesn’t necessarily match your viewing angle. This isn’t a bad thing; it’s just an aspect of the technique, like the difference between creating a craggy cliff through projection mapping vs. true 3D — unless you’re going to fly right by that cliff, no one’s going to notice the difference.
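To make the corner-pinning idea concrete: pinning flat content onto a building facade amounts to solving for a homography from four point correspondences. Here's a minimal sketch (not any particular mapping tool's code) using the standard direct linear transform:

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve for the 3x3 homography H mapping each src corner to its
    matching dst corner. src and dst are lists of four (x, y) points.
    Each correspondence contributes two rows to an 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)  # fix the scale by setting h9 = 1

def warp_point(H, x, y):
    """Apply homography H to a point, including the homogeneous divide."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Pin the unit square of animation onto a skewed quad on the wall
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.2), (0.9, 0.1), (1.0, 0.8), (0.0, 0.9)]
H = corner_pin_homography(src, dst)
```

The catch the paragraph describes falls out of the math: the warp is exact for one viewpoint, and only approximately right from anywhere else.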
So, what’s the point? Well, a group by the name of Marshmallow Laser Feast (Memo Akten, Barney Steel and Robin McNicholas) has figured out one way around this issue. If you are able to link the camera’s position to what’s being projected, then any tiny movement the camera makes can be reflected in real time. This technology has been used previously in immersive VR CAVEs, which mostly live in universities and research facilities. MLF was able to recreate it using consumer products like the PlayStation Move.

If nothing I have said makes sense (fair enough), the significance of this technique can be seen in the ad above. Traditional projection mapping could handle the image at 0:37: it’s just a grid projected directly onto the wall, so the camera could move around and everything would still line up, because the texture maps directly to the wall. It could NOT handle what happens from 0:40 onward. That stretching out to infinity is an illusion that only works if the perspective of every projection perfectly matches the camera.
Another “we live in the future” moment comes from Design I/O (Theo Watson and Emily Gobeille). Their Puppet Parade is an interactive installation that allows children to use their arms to puppeteer larger-than-life creatures. This time, it’s the Kinect that serves as the consumer hardware getting data into the computer. I love how intuitive the puppeteering motion is. It makes it seem perfectly natural that you’re able to control the head of a ten-foot-tall creature with the flick of a wrist, when in fact this is pretty much magic.
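The basic shape of that mapping is simple, which is part of why it feels so natural. A skeleton tracker like the Kinect hands you joint positions; you take the forearm angle and drive the puppet's head with it. This is purely my illustrative sketch, not Design I/O's code, and the joint inputs are hypothetical:

```python
import math

def forearm_to_head_angle(elbow, wrist, max_angle_deg=45.0):
    """Map a tracked forearm (elbow and wrist as (x, y) positions, e.g.
    from Kinect skeletal data) onto a puppet head rotation in degrees.
    Raising the wrist above the elbow tilts the giant head up; dropping
    it tilts the head down. Clamped so the puppet never over-rotates."""
    dx = wrist[0] - elbow[0]
    dy = wrist[1] - elbow[1]
    angle = math.degrees(math.atan2(dy, abs(dx) + 1e-9))
    return max(-max_angle_deg, min(max_angle_deg, angle))
```

A level forearm gives a neutral head; a flick of the wrist upward pegs the head at its maximum tilt, so a small human gesture reads as a big creature gesture.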
All this tech is still pretty bleeding edge, but there are two aspects of these projects that I think are important for the future. First, the ability to interact with graphical worlds on set allows for more spontaneous performances. The actors and cinematographer can see what’s happening and react to it, trying out new performances or camerawork that wouldn’t have come up otherwise.
Second, as these tools get more developed and accessible, a wider range of visual artists will be able to contribute their styles to interactive artwork. Eight to ten years ago there was a motion graphics boom where a wide range of visual styles, especially illustrated styles, were explored in full motion for the first time. I think in the coming ten years we’ll see a similar boom in designers and illustrators teaming up with interactive teams to create unique visual worlds to explore.