# Blog Archive

Viewing page 5, from March 17, 2012 to March 13, 2013.

# Anatomy of a Maya Binary Cache

## DAGs all the way down.

On a limited number of occasions I have needed to reach directly into some of the raw files produced by Autodesk's Maya. I couldn't find much documentation on the web, so I will try to lay out what I have learned here.

The generic structure is based on the IFF format, but with enough small changes to warrant this exploration (with lots of kudos to cgkit's implementation, which helped with some of the gritty details).
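For orientation, a classic IFF stream is just a sequence of chunks: a 4-byte ASCII tag, a big-endian 32-bit payload size, the payload itself, and pad bytes up to an alignment boundary. Here is a minimal Python sketch of walking such a stream; the specific tags, group chunks, and the wider size fields and alignments in Maya's own files are among the "small changes" mentioned above, so treat this only as a starting point:

```python
import struct

def iter_chunks(data, offset=0, alignment=2):
    """Yield (tag, payload) pairs from a classic IFF byte stream.

    Each chunk is a 4-byte ASCII tag, a big-endian 32-bit payload
    size, then the payload, padded so that the next chunk starts on
    an `alignment` boundary.
    """
    while offset + 8 <= len(data):
        tag = data[offset:offset + 4].decode('ascii')
        (size,) = struct.unpack('>I', data[offset + 4:offset + 8])
        payload = data[offset + 8:offset + 8 + size]
        yield tag, payload
        offset += 8 + size
        if offset % alignment:  # skip pad bytes
            offset += alignment - offset % alignment
```

Group chunks (FORM-style containers) hold nested chunks inside their payload, so a real reader would recurse into the payload with the same function after skipping the group's type tag.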

# Resources for Learning Python

## We can do this the hard way, or the easy way...

I am often asked my opinion on how to get started with programming, usually with Python in particular. I usually outline three tracks that should be pursued in parallel: learning how to work with Python, learning Python best practices, and reading lots of good code from others.

# A Pool of Shotguns

## A transparent connection pool for the Shotgun API in heavily threaded environments.

I have been working very heavily with Shotgun for the last several months, creating much deeper integrations between it and Western X's pipeline.

One of the things that bit me pretty early on is that the official Python API for Shotgun cannot make parallel requests.

Under most conditions this isn't a big problem: the underlying connection simply serializes my threads' access to the Shotgun server, adding some latency, but not an unacceptable amount. What was very irritating, however, was that a particular version of Python on OS X 10.6 would occasionally segfault during parallel requests. It took quite a few days of debugging Python in GDB (not a particularly easy prospect, especially since the problem was hard to reproduce) to isolate the problem to a bug in the ssl module's use of zlib to compress the request before sending it to the server.
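The general shape of a transparent pool is a proxy that hands each call an exclusive connection, creating new ones on demand and recycling them afterwards. This is a minimal sketch rather than the code I actually shipped; `factory` stands in for whatever builds a real connection (e.g. a `Shotgun(...)` constructor):

```python
import queue

class ConnectionPool(object):
    """Proxy method calls to a pool of connections, one per call.

    `factory` is any zero-argument callable returning a
    connection-like object. Connections are created lazily, so the
    pool grows to match the peak number of concurrent callers.
    """

    def __init__(self, factory):
        self._factory = factory
        self._pool = queue.LifoQueue()

    def __getattr__(self, name):
        def proxy(*args, **kwargs):
            try:
                conn = self._pool.get_nowait()
            except queue.Empty:
                conn = self._factory()
            try:
                return getattr(conn, name)(*args, **kwargs)
            finally:
                self._pool.put(conn)  # recycle for the next caller
        return proxy
```

A LIFO queue keeps the working set of connections small: the most recently used connection is handed out first, so idle ones stay idle.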

# SIGGRAPH 2012 - Day 5

## Until Next Year

This happened to me last year as well, and similarly with the smaller Vancouver SIGGRAPH events in the past few months, in which I lose perspective in the massive sea of expensive blockbuster film work:

We have gotten to the part of #siggraph where I develop a persistent rage for not being involved in enough awesome things. #neverenoughtime

@mikeboers on Twitter.

However, perspective needs to be kept in check. My short, Blind Spot, has been doing rather well lately, getting into yet another festival (I think that makes 6 now), and the VFX of that film were the result of only two people working in their spare time.

But for now the plan is to feed off of the helpful part of my rage and do the best job I can do, both at work, and independently. I'll be back next year, hopefully knowing a few more of you and having a slightly larger influence on the industry and presented works.

# SIGGRAPH 2012 - Day 4

## On Massive Projects

Sessions continued as normal, with me learning too much and being inspired to try way too many new things... as normal. There was a theme I kept hitting this week, however, that requires some additional reflection.

My work in this industry thus far has been constrained to (relatively) small projects, mainly on episodic TV or direct to DVD movies. However, many of these sessions (not just the production sessions but also many talks and tech papers) reveal to me that the scope of these projects is at a completely different level than I am used to.

For example, I attended the ILM Battleship presentation on day 4, in which a number of stats were thrown around for a particularly heavy shot (the presenter said he believed it to be the most complicated VFX shot ever): each high-resolution fluid sim took 2-5 days (and there were many), they cached approximately 20TB of simulation data, and the shot consumed nearly 23 years of sequential CPU time. Another was during the Disney Paperman presentation on day 5, in which the director talked about how casually, it seemed, he was handed a few dozen animators who just happened to have some spare time.

These scales (of both tech and personnel) are staggering, since the majority of the work I have done in the industry has been limited to what can be accomplished in a few months by a handful of people, but I am very excited (although terrified) to hopefully be a part of these sorts of massive projects in the future.

I also greatly appreciate that the people who are involved in these projects still respect the work that us little guys do, as demonstrated by a number of the Pixar engineers when I discussed my work on The Borgias with them.

# SIGGRAPH 2012 - Day 3

## Technical Papers - Video Processing

Today I will focus solely on the video processing session. First, the Eulerian video magnification paper really demonstrated a fundamental gap in my knowledge of signal processing. I like to operate by having an intuition of how every part of a system will behave once we start introducing changes or stresses, and working in the frequency domain is one of those places that still seems like magic to me (and in this case magic is a bad thing).

The paper on selectively deanimating video (see their webpage) resulted in incredibly cool cinemagraphs with very little user effort. While there are still a number of subtle artifacts that I would remove if doing this work by hand, going from several hours of expert compositing to under a minute of untrained user interaction is a fantastic reduction of complexity. I am certainly inspired to break out some footage I shot a few weeks ago for this very purpose and give it another try.

Finally, the paper on seamless cuts of interview footage (see their video) left me very conflicted. The technique is very smart and has absolutely stunning results, but the honest filmmaker in me (not just the general filmmaker, mind you) is absolutely appalled that this tool exists. Unless it becomes accessible to the general public (which it is not in its current incarnation), and therefore always in consideration when watching edited interviews, it exists to convince an audience that a third-party interviewee is speaking within a context that is completely artificial. You don't have to take someone out of context anymore in order to twist their words; you can do it right in front of the viewer's face without blinking.

# SIGGRAPH 2012 - Day 2

The day started with a little bit of a jolt, since sleepy Mike is apparently unable to operate an alarm clock effectively after watching the Curiosity landing. Once I finally made it to the conference, my first stop was the Pixomondo Hugo production session.

### Sneaking in Hand-Painted Frames

My new eye for pipeline was very impressed by the massive amount of data that they automatically collected and shuffled around their numerous facilities. I can only hope that when I finish breaking and rebuilding the data flow at work that it can be even remotely as smooth and impressive.

There was a lot of focus on the miniatures used for the train crash sequence. While many people may question how you could get away with miniatures in stereo, since your brain can figure out scales from the stereo imagery, they don't realize that you can simply shrink the scale of the camera system (by scaling the interocular distance by the model scale). This not only gives you a feeling of the intended scale, but I believe the cues picked up by the brain are so powerful that they would likely override other details and let you get away with miniatures even easier than before!

Another interesting challenge in stereo is how to deal with some of the often forgotten details of the medium:

Stereo can turn the most mundane aspects of filmmaking into the most troubling; improperly displaced grain will melt your brain. #siggraph

@mikeboers on Twitter.

If you have random grain in both eyes, then some fraction of it will line up and be perceived as physical bumps. If you have the same grain in both eyes (potentially offset by a constant amount), then you will perceive a fuzzy veil hanging in the frame. What Pixomondo ended up doing for the heavily stylized segments that called for turn-of-the-century grain is generating one set of grain and displacing it by the disparity map for the second eye, effectively wrapping a layer of grain around all of the objects so it ends up on their surfaces.
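The displacement step itself can be sketched in a few lines. This is my own simplification, assuming an integer-pixel disparity map and a naive forward splat; a real plate would need subpixel filtering and occlusion handling:

```python
def displace_grain(grain, disparity):
    """Build the second eye's grain plate by shifting each grain
    pixel horizontally by the local disparity, so the grain appears
    to sit on object surfaces rather than floating in the frame.

    grain: 2D list of pixel values.
    disparity: 2D list of integer horizontal offsets, same shape.
    """
    height, width = len(grain), len(grain[0])
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            nx = x + disparity[y][x]
            if 0 <= nx < width:  # splat; off-frame pixels are dropped
                out[y][nx] = grain[y][x]
    return out
```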

Finally, I was delighted to learn that there is at least one shot in the film in which the color was hand painted as an homage to Méliès' films themselves. While it was done in Photoshop, they still made the painter work on a 35mm sized frame. Paraphrased:

You really made some poor bastard paint on that tiny frame?!

Mike Seymour, fxguide

### Open Source Can Not Be Cancelled

A few minutes into the time slot for one of the "birds of a feather" sessions, someone from the conference announced that the session had been cancelled for unknown (to them) reasons. The immediate reaction from a fellow named Benjamin (who works at a place called Rushes in the UK) was "How about we just keep talking anyways?". I was delighted at what followed:

The open source pipeline framework #bof at #siggraph was cancelled, but went on anyways due to the enterprising attendants, and a brave MC.

@mikeboers on Twitter.

He went on to direct an impromptu discussion with most people electing to stay. I think I have a better handle on answering fundamental questions such as "What is asset management?" and "What is a pipeline?", although those are topics for another time. We ended up with a large mailing list with subgroups for local chapters to carry on the conversation later this year.

I made sure to catch part of the sketching session, including CrossShade, for which I designed and implemented the non-photoreal rendering pipeline used for their results. They certainly took what I had created for them and ran with it, creating some really nice looking results.

Someone in the audience asked about integrating the normals (into a depth map), which is something that the researchers and I both tried to implement (although they were more successful with it than I was). I really wish that we had been able to finish that, as the lighting cues from proximate geometry would have been a subtle but fantastic inclusion.

### Sake and Desserts

I attended the traditional opening of the sake barrel, in which Paul Debevec enthusiastically landed a quick follow up (and final) blow with his wooden mallet after the carefully choreographed countdown to the first (and synchronized) blow. Shortly after came the annual dessert reception.

I had a number of fun conversations, including ambushing the emissaries of Dropbox to quiz them on their upcoming two-factor authentication (which I have confirmed uses OATH, and so will work nicely alongside my existing systems). Good for Dropbox for (apparently) sending them just for interest's sake. I also met up with former colleagues and will potentially be getting involved in more graphics research consultation work on the side.

Lots of exciting things are developing!

# SIGGRAPH 2012 - Day 1

## Too Much to See

The first day may be a little light on content, but it does look to be an exciting and promising SIGGRAPH (as much as I may be a judge of these things, given that this is only my second time attending). I have already bonded over VFX while trying to find the conference center, and have been completely unable to figure out which sessions to give my attention, ergo:

Finally trying to schedule my day at #SIGGRAPH, and I need to choose between 5 things at the same time. #sadtrombone

@mikeboers on Twitter.

The focus of the evening was the Technical Papers Fast Forward, in which 132 papers were presented with only one minute for each (with a stretch break halfway through, set to an oddly appropriate soundtrack). This year I had the joy of seeing some of my own work presented, even if it wasn't the research itself. I look forward to watching the full presentation of CrossShade tomorrow and seeing what they ended up doing with my shading pipeline, part of which I have talked about previously.

# New Project: Haikuize

I just started a new toy project for extracting Haikus from straight prose. It is currently in very rough shape and not very capable, but it is still fun to play with.
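At its core, a tool like this needs only a crude syllable counter and a scan for runs of words whose syllables split 5/7/5. Here is a toy sketch of that idea; the heuristics are my own assumptions for illustration, not necessarily what the project does:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count vowel groups, dropping a
    trailing silent 'e'. Crude, but good enough for toy haiku."""
    word = word.lower()
    if word.endswith('e') and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r'[aeiouy]+', word)))

def find_haiku(words):
    """Return the first run of words whose syllable counts split
    exactly 5/7/5 as three lines, or None if no run exists."""
    targets = [5, 7, 5]
    for start in range(len(words)):
        lines, line, count = [], [], 0
        for word in words[start:]:
            line.append(word)
            count += count_syllables(word)
            if count == targets[len(lines)]:
                lines.append(' '.join(line))
                line, count = [], 0
                if len(lines) == 3:
                    return lines
            elif count > targets[len(lines)]:
                break  # overshot a line; try the next start word
    return None
```

A real implementation would also want sentence-boundary awareness and a dictionary-backed syllable count, since vowel-group counting gets English wrong often enough to matter.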

Choice examples from Emily Carr's "Klee Wyck" include:

Beaches Trees Held Back
By Rocky Cliffs Pointed Fir
Trees Climbing In Dark

Tipped Forward In Sleep
And Rolled Among The Bundles
The Old Man Shipping

The Sun And The Moon
Crossed Ways Before Day Ended
By And By The Bulls


Another example from Sun Tzu's "The Art of War":

Retained In Command
The General That Hearkens
Not To My Counsel


Watch the project on GitHub to see it develop.

# Pseudo suExec with PHP Behind Nginx

## For those who don't want to run more than one php-cgi... for some reason.

I recently started transitioning all of the websites under my management from Apache to nginx (mainly to ease running my Python webapps via gunicorn, but that is another story).

Since nginx will not directly execute PHP (via either CGI or nginx-managed FastCGI), the first step was to get PHP running at all. I opted to run php-cgi via daemontools; my initial run script was fairly straightforward:

```bash
#!/usr/bin/env bash
exec php-cgi -b 127.0.0.1:9000
```

Couple this with a (relatively) straightforward nginx configuration and the sites will already start responding:

```nginx
server {

    listen          80;
    server_name     example.com;
    root            /var/www/example.com/httpdocs;

    index           index.php index.html;
    fastcgi_index   index.php;

    location ~ \.php {
        keepalive_timeout 0;
        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_FILENAME  $document_root$uri;
        fastcgi_pass    127.0.0.1:9000;
    }

}
```


The tricky part came when I wanted to run PHP under the user who owned the various sites. I could have (and perhaps should have) opted to spin up a copy of php-cgi for each user, but I decided to try something a little sneakier; PHP will set its own UID on each request.