Hi Motionographers! We’re at SIGGRAPH 2011, here in beautiful Vancouver. SIGGRAPH is the Association for Computing Machinery’s annual conference on computer graphics, where you can find the most recent academic research and commercial software/hardware developments for computer graphics and interactive technology. This year, upwards of 20,000 artists, research scientists, gaming developers, engineers, filmmakers, and academics have descended on the conference.
Check out our first dispatch on the Technical Papers Fast Forward event here. More Motionographer SIGGRAPH posts to come, including 3D scanning and printing as it relates to motion graphics, all about lenticular displays and building glasses-free 3D displays, and, of course, highlights from the Computer Animation Festival.
Personally, I can’t fully understand the complex, image-crunching equations that make up much of the nitty-gritty research presented at the SIGGRAPH Technical Papers. However, I am still excited to see the research that underpins the tools that will crop up in our software and hardware one, two, or five years down the line. SIGGRAPH is the place where now-widespread concepts like 3D camera projection and the Puppet Tool first premiered.
You want to search through giant databases of 3D models by submitting a rough polygon shape and finding its closest neighbor? Someone has already solved that problem. You hate how, when you try to hue-shift someone’s clothes, the fabric never ends up looking right? A SIGGRAPH talk detailed how to fix that. You want the 3D software to make those pesky pop-up books for you? Yes, someone here has an algorithm for that.
On Day 1, SIGGRAPH holds its Technical Papers Fast Forward event, where each of the 100+ presenters has just under sixty seconds to get across the core ideas of their research. After two hours of back-to-back knowledge bombs, you feel dizzy with excitement, and also a bit of bleeding-edge technology fatigue.
Below are some highlights from the SIGGRAPH 2011 technical papers that particularly relate to our field and that also happen to have excellent documentation on the web.
Anyone who has dug through a 3D model library looking for a “lamp post” or “plane” or “car” has probably run into the problem of not being able to further pare down their search results in a meaningful way. These guys have a solution that uses an editable template model to narrow down your search results.
You know how when you hue-shift someone’s clothes it doesn’t look right? In the future you will no longer have to de-spill and then re-spill. Math will do it for you!
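To see why the naive approach fails, here is a minimal Python sketch of a plain per-pixel hue rotation in HSV space. This is the simple recolor that ignores shading, spill, and fabric texture (exactly the problems the paper tackles); the function name and example values are my own illustration, nothing from the paper itself.

```python
import colorsys

def naive_hue_shift(rgb, degrees):
    """Rotate a single pixel's hue in HSV space, leaving saturation
    and value untouched. Applied per pixel, this is the recolor that
    makes shifted fabric look wrong."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + degrees / 360.0) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)

# Pure red rotated 120 degrees lands on (approximately) pure green.
red = (1.0, 0.0, 0.0)
green = naive_hue_shift(red, 120)
```

On a flat swatch this works fine; on real footage, shadows and color spill keep their old hue relationships, which is why the shifted cloth reads as fake.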
Exploring Photobios
Ira Kemelmacher-Shlizerman, Rahul Garg, Steven M. Seitz (University of Washington), Eli Shechtman (Adobe Systems, Inc.) By-Example Image Synthesis | TUESDAY, 9 AUGUST 3:45 PM – 5:15 PM | East Building, Ballroom C
Start with the Noah K. Everyday concept. Then imagine you can build the effect from a photo library, without having to make sure you are taking the picture from the same angle and distance each time. Now get ready to see a LOT more of this type of video in the future.
This is a big one. When copying over expressions from one photo to another, they often feel a bit off (if not totally creepy). This algorithm computes how the face is different in the two photos, so that the transferred element better fits the target face. This is the paper where the direct application to commercial software is most apparent, as well as the winner of the “Every Picture Can Tell a Lie / Holy #$@D* We Are In The Future” award.
Locomotion Skills for Simulated Quadrupeds
Stelian Coros (Disney Research Zurich), Andrej Karpathy, Ben Jones (University of British Columbia), Lionel Reveret (INRIA, Grenoble University, CNRS), Michiel van de Panne (University of British Columbia) Call Animal Control! | TUESDAY, 9 AUGUST 3:45 PM – 5:35 PM | East Building, Exhibit Hall A
See Spot Run. Run Spot Run. This paper presents a quadruped simulation that works across uneven terrain, at multiple gaits, and while responding to external forces. I have to say I enjoy the funny walks at the end when the system breaks. I’m also curious whether this model allows for custom keyframe animation without having to completely “bake” the algorithmically generated animation, which would allow for fine-tuning the acting without losing the benefits of the locomotion system.
Edge-Aware Color Appearance
Min H. Kim (Yale University and University College London), Tobias Ritschel (Telecom ParisTech and Max-Planck-Institut Informatik), Jan Kautz (University College London) Colorful | WEDNESDAY, 10 AUGUST 9:00 AM – 10:30 AM | East Building, Exhibit Hall A
This one is very subtle; I hadn’t thought about it until I saw this paper presentation. When you apply a Gaussian blur to something, there is an illusion that the object/layer becomes lighter. This paper examines how to compensate while blurring so that the perceived lightness, saturation, and hue remain intact.
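As a toy illustration of the idea, not the paper’s method (the authors build an edge-aware color-appearance model, while this Python/NumPy sketch only does crude global moment matching): blur an image, then rescale it so its overall mean and contrast match the original.

```python
import numpy as np

def box_blur(img, radius=2):
    """Edge-replicated box blur, a crude stand-in for a Gaussian."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def compensate(blurred, original):
    """Rescale the blurred image so its global mean and standard
    deviation match the original's. The paper predicts *perceived*
    lightness/saturation/hue instead; this only restores statistics."""
    z = (blurred - blurred.mean()) / (blurred.std() + 1e-8)
    return z * original.std() + original.mean()

rng = np.random.default_rng(0)
img = rng.random((32, 32))   # stand-in grayscale image in [0, 1]
soft = box_blur(img)         # blurring flattens contrast
fixed = compensate(soft, img)
```

Note that the compensated values can leave [0, 1]; a real implementation would work in a perceptual color space and clamp sensibly, which is the hard part the paper actually solves.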
Initially this paper reminded me of the “Improve your photography with classical art” website, but it looks like it will be a very useful tool for both the home consumer and the professional. At 3:28 in this video, there are further applications of this research that are VERY promising. In one case, you can draw a mask on an object in one photograph, then feed it a second photograph, and the software will guess where the mask should go to outline the same shape. It can also take in a blurry photo and a second, sharp reference photo and apply the often-wished-for “un-blur” filter (enhance!).
Subspace Video Stabilization
Feng Liu (Portland State University), Michael Gleicher (University of Wisconsin-Madison), Jue Wang, Hailin Jin, Aseem Agarwala (Adobe Systems, Inc.) Video Resizing & Stabilization | THURSDAY, 11 AUGUST 9:00 AM – 10:30 AM | East Building, Exhibit Hall A
This is the paper that helped create the new warp stabilizer in After Effects CS5.5. Very powerful stuff.
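The core intuition (track features through the shot, smooth their trajectories, then warp each frame toward the smoothed positions) can be sketched for a single track in Python. The real method factors many tracks into a low-rank subspace and smooths the subspace coefficients so the tracks stay geometrically consistent, so treat this as my simplified illustration rather than the actual algorithm.

```python
def smooth_track(track, window=5):
    """Moving-average smoothing of one 2D feature trajectory:
    each point is replaced by the mean of its temporal neighbors."""
    half = window // 2
    smoothed = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        xs = [p[0] for p in track[lo:hi]]
        ys = [p[1] for p in track[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

# A jittery pan: x advances steadily while y wobbles frame to frame.
shaky = [(float(i), (-1) ** i * 2.0) for i in range(10)]
steady = smooth_track(shaky)
```

The per-frame offset between the shaky and smoothed positions is what a stabilizer would warp each frame by.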
Videos taken by cell phones, web cams, or any other camera with auto-irising often have changes in exposure, color temperature, brightness, and contrast. This paper examines how to algorithmically correct for those changes.
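The simplest possible version of that correction is a per-frame gain that pins every frame’s mean luminance to the clip-wide average. The paper fits much richer per-frame color transforms, so this NumPy snippet is only my bare-bones stand-in to show the shape of the problem.

```python
import numpy as np

def stabilize_exposure(frames):
    """Scale each frame so its mean luminance matches the clip-wide
    average: a single global gain per frame, nothing fancier."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    target = np.mean([f.mean() for f in frames])
    return [f * (target / max(f.mean(), 1e-8)) for f in frames]

# Three identical shots captured at different auto-iris exposures.
base = np.full((4, 4), 0.5)
clip = [base * gain for gain in (0.6, 1.0, 1.4)]
fixed = stabilize_exposure(clip)
```

A gain alone can’t fix color-temperature drift; that needs per-channel or curve-based correction, which is where the actual research comes in.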
This paper reminds me a bit of the excitement I felt when I saw Disney’s Deep Canvas demo, which promised a way to more easily integrate painterly 2D strokes into a 3D space.
For the more committed (i.e., nerdier) readers, there is a downloadable document called Technical Papers First Pages. This is a large (184 MB) PDF file containing the first pages of all Technical Papers presented at SIGGRAPH 2011.