Blog Archive

Viewing page 5, from November 26, 2011 to August 09, 2012.

SIGGRAPH 2012 - Day 4

On Massive Projects

Sessions continued as normal, with me learning far too much and being inspired to take on far too many new projects... as normal. There was a theme I kept hitting on this week, however, that requires some additional reflection.

My work in this industry thus far has been constrained to (relatively) small projects, mainly episodic TV or direct-to-DVD movies. However, many of these sessions (not just the production sessions, but also many of the talks and tech papers) revealed to me that these productions operate at a completely different scale than I am used to.

For example, I attended ILM's Battleship presentation on day 4, in which a number of stats were thrown around for a particularly heavy shot (the presenter said that he believed it to be the most complicated VFX shot, ever): each high-resolution fluid sim took 2-5 days (and there were many), they cached approximately 20TB of simulation data, and the shot consumed nearly 23 years of sequential CPU time. Another came during Disney's Paperman presentation on day 5, in which the director talked about how casually, it seemed, he was handed a few dozen animators who just happened to have some spare time.

These scales (of both tech and personnel) are staggering, since the majority of the work I have done in the industry has been limited to what can be accomplished in a few months by a handful of people. Still, I am very excited (although terrified) to hopefully be a part of these sorts of massive projects in the future.

I also greatly appreciate that the people who are involved in these projects still respect the work that we little guys do, as demonstrated by a number of the Pixar engineers when I discussed my work on The Borgias with them.


SIGGRAPH 2012 - Day 3

Technical Papers - Video Processing

Today I will focus solely on the video processing session. First, the Eulerian video magnification paper really demonstrated a fundamental gap in my knowledge of signal processing. I like to operate by having an intuition of how every part of a system will behave once we start introducing changes or stresses, and working in the frequency domain is one of those places that still seems like magic to me (and in this case magic is a bad thing).

The paper on selectively deanimating video (see their webpage) resulted in incredibly cool cinemagraphs with very little user effort. While there are still a number of subtle artifacts that I would remove if doing this work by hand, going from several hours of expert compositing to under a minute of untrained user interaction is a fantastic reduction of complexity. I am certainly inspired to break out some footage I shot a few weeks ago for this very purpose and give it another try.

Finally, the paper on seamless cuts of interview footage (see their video) left me very conflicted. The technique is very smart and has absolutely stunning results, but the honest filmmaker in me (not just the general filmmaker, mind you) is absolutely appalled that this tool exists. Unless it becomes accessible to the general public (which it is not in its current incarnation), and therefore something viewers always consider when watching edited interviews, it exists to convince an audience that a third-party interviewee is speaking within a context that is completely artificial. You no longer have to take someone out of context in order to twist their words; you can do it right in front of the viewer's face without blinking.


SIGGRAPH 2012 - Day 2

The day started with a little bit of a jolt, since sleepy Mike is apparently unable to operate an alarm clock effectively after watching the Curiosity landing. Once I finally made it to the conference, my first stop was the Pixomondo Hugo production session.

Sneaking in Hand-Painted Frames

My new eye for pipeline was very impressed by the massive amount of data that they automatically collected and shuffled around their numerous facilities. I can only hope that when I finish breaking and rebuilding the data flow at work, it will be even remotely as smooth and impressive.

There was a lot of focus on the miniatures used for the train crash sequence. While many people may question how you could get away with miniatures in stereo, since your brain can figure out scale from the stereo imagery, they don't realize that you can simply shrink the scale of the camera system (by scaling the interocular distance by the same factor as the model). This not only gives you a feeling of the intended scale, but I would believe that the scale cues picked up by the brain are so powerful that they would likely override other details and let you get away with miniatures even more easily than before!
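To put rough numbers to it (mine, not the presenters'): if a full-scale stereo rig would shoot with a 65 mm interaxial (roughly human eye separation), then a 1:12 miniature would be shot at about 65/12 ≈ 5.4 mm, and the disparities reaching the audience's eyes would match those of the full-size scene.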

Another interesting challenge in stereo is how to deal with some of the often forgotten details of the medium:

Stereo can turn the most mundane aspects of filmmaking into the most troubling; improperly displaced grain will melt your brain. #siggraph

@mikeboers on Twitter.

If you have random grain in both eyes, then some fraction of the grains will line up and be perceived as physical bumps. If you have the same grain in both eyes (potentially offset by a constant amount), then you will perceive a fuzzy veil hanging in the frame. What Pixomondo ended up doing for the heavily stylized segments that called for turn-of-the-century grain was to generate one set of grain and displace it by the disparity map for the second eye, effectively wrapping a layer of grain around all of the objects so it ends up on their surfaces.
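To make that concrete, here is a minimal sketch of the displacement step as I understood it (numpy; the per-pixel disparity map is assumed given, and this is illustrative rather than Pixomondo's actual implementation):

import numpy as np

def grain_plates(height, width, disparity, seed=0):
    """Generate one grain plate, then displace it horizontally by the
    per-pixel disparity so the second eye sees the same grain 'stuck'
    to the surfaces in the scene."""
    rng = np.random.default_rng(seed)
    left = rng.normal(0.0, 1.0, (height, width))

    ys, xs = np.indices((height, width))
    # Nearest-neighbour displacement for brevity; production grain would
    # be resampled with proper filtering.
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, width - 1)
    right = left[ys, src_x]
    return left, right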

Finally, I was delighted to learn that there is at least one shot in the film in which the color was hand painted as an homage to Méliès' films themselves. While it was done in Photoshop, they still made the painter work on a 35mm sized frame. Paraphrased:

You really made some poor bastard paint on that tiny frame?!

Mike Seymour, fxguide

Open Source Cannot Be Cancelled

A few minutes into the time slot for one of the "birds of a feather" sessions, someone from the conference announced that the session had been cancelled for unknown (to them) reasons. The immediate reaction from a fellow named Benjamin (who works at a place called Rushes in the UK) was "How about we just keep talking anyways?". I was delighted at what followed:

The open source pipeline framework #bof at #siggraph was cancelled, but went on anyways due to the enterprising attendants, and a brave MC.

@mikeboers on Twitter.

He went on to direct an impromptu discussion with most people electing to stay. I think I have a better handle on answering fundamental questions such as "What is asset management?" and "What is a pipeline?", although those are topics for another time. We ended up with a large mailing list with subgroups for local chapters to carry on the conversation later this year.

CrossShade

I made sure to catch part of the sketching session, including CrossShade, for which I designed and implemented the non-photorealistic rendering pipeline used for their results. They certainly took what I had created for them and ran with it, producing some really nice looking results.

Someone in the audience asked about integrating the normals (into a depth map), which is something that both the researchers and I tried to implement (although they were more successful with it than I was). I really wish that we had been able to finish that, as the lighting cues from proximate geometry would have been a subtle but fantastic inclusion.
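For the curious, one standard way to do that integration is Frankot-Chellappa: recover the depth whose gradients best match those implied by the normals, solved in the Fourier domain. A rough sketch (numpy; sign conventions depend on the normal-map encoding, and this is not necessarily what either the researchers or I attempted):

import numpy as np

def integrate_normals(normals):
    """normals: (H, W, 3) array of unit normals -> (H, W) depth map."""
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz          # dz/dx implied by the normals
    q = -normals[..., 1] / nz          # dz/dy
    h, w = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(w)
    wy = 2 * np.pi * np.fft.fftfreq(h)
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                  # dodge the divide-by-zero at DC
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                      # depth is only recovered up to a constant
    return np.real(np.fft.ifft2(Z))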

Sake and Desserts

I attended the traditional opening of the sake barrel, in which Paul Debevec enthusiastically landed a quick follow-up (and final) blow with his wooden mallet after the carefully choreographed countdown to the first (and synchronized) blow. Shortly after came the annual dessert reception.

I had a number of fun conversations, including ambushing the emissaries of Dropbox to quiz them on their upcoming two-factor authentication (which I have confirmed uses OATH, so it will work nicely alongside my existing systems). Good for Dropbox for (apparently) sending them just for interest's sake. I also met up with former colleagues and will potentially be getting involved in more graphics research consulting work on the side.

Lots of exciting things are developing!


SIGGRAPH 2012 - Day 1

Too Much to See

The first day may be a little light on content, but it does look to be an exciting and promising SIGGRAPH (as much as I may be a judge of these things, given that this is only my second time attending). I have already bonded over VFX while trying to find the conference center, and have been completely unable to figure out which sessions to give my attention, ergo:

Finally trying to schedule my day at #SIGGRAPH, and I need to choose between 5 things at the same time. #sadtrombone

@mikeboers on . Visit on Twitter.

The focus of the evening was the Technical Papers Fast Forward, in which 132 papers were presented with only one minute each (plus a stretch break halfway through, set to an oddly appropriate soundtrack). This year I had the joy of seeing some of my own work presented, even if my contribution wasn't the research itself. I look forward to watching the full presentation of CrossShade tomorrow and seeing what they ended up doing with my shading pipeline, part of which I have talked about previously.


New Project: Haikuize

I just started a new toy project for extracting Haikus from straight prose. It is currently in very rough shape and not very capable, but it is still fun to play with.
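The core idea is simple: count syllables and look for runs of words that split cleanly into 5-7-5 lines. A minimal sketch of that idea (naive vowel-group syllable counting; illustrative only, not the project's actual code):

import re

def syllables(word):
    # Rough syllable count: runs of vowels, with a minimum of one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def find_haikus(text):
    words = re.findall(r"[A-Za-z']+", text)
    targets = [5, 7, 5]
    for start in range(len(words)):
        lines, line, count = [], [], 0
        for word in words[start:]:
            line.append(word)
            count += syllables(word)
            if count == targets[len(lines)]:
                lines.append(" ".join(line))
                line, count = [], 0
                if len(lines) == 3:
                    yield lines
                    break
            elif count > targets[len(lines)]:
                # Overshot the line; no haiku starts at this word.
                break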

Choice examples from Emily Carr's "Klee Wyck" include:

Beaches Trees Held Back
By Rocky Cliffs Pointed Fir
Trees Climbing In Dark

Tipped Forward In Sleep
And Rolled Among The Bundles
The Old Man Shipping

The Sun And The Moon
Crossed Ways Before Day Ended
By And By The Bulls

Another example from Sun Tzu's "The Art of War":

Retained In Command
The General That Hearkens
Not To My Counsel

Watch the project on GitHub to see it develop.


Pseudo suExec with PHP Behind Nginx

For those who don't want to run more than one php-cgi... for some reason.

I recently started transitioning all of the websites under my management from Apache to nginx (mainly to ease running my Python webapps via gunicorn, but that is another story).

Since nginx will not directly execute PHP (via either CGI or nginx-managed FastCGI), the first step was to get PHP running at all. I opted to run php-cgi via daemontools; my initial run script was fairly straightforward:

#!/usr/bin/env bash
exec php-cgi -b 127.0.0.1:9000

Couple this with a (relatively) straightforward nginx configuration and the sites will already start responding:

server {

    listen          80;
    server_name     example.com;
    root            /var/www/example.com/httpdocs;

    index           index.php index.html;
    fastcgi_index   index.php;

    location ~ \.php {
        keepalive_timeout 0;
        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_FILENAME  $document_root$uri;
        fastcgi_pass    127.0.0.1:9000;
    }

}

The tricky part came when I wanted to run PHP under the user who owned each of the various sites. I could have (and perhaps should have) opted to spin up a copy of php-cgi for each user, but I decided to try something a little sneakier: PHP will set its own UID on each request.

Read more... (1 minute remaining to read.)


Friendlier (and Safe) Blog Post URLs

Until very recently, the URLs for individual blog posts on this site looked something like:

http://mikeboers.com/blog/601/friendlier-and-safe-blog-post-urls

The 601 is the ID of this post in the site's database. I have always had two issues with this:

  1. The ID is meaningless to the user, but it is what drives the site.
  2. The title is meaningless to the site (you could change it to whatever you want), but it is what appears important to the user.

What they would ideally look like is:

http://mikeboers.com/blog/friendlier-and-safe-blog-post-urls

But since I tend to quickly get a new post up and then edit it a dozen times before I am satisfied (title included), the URL would not be stable. The implementations I have seen in other blog platforms would force the URL to retain the original title of the post, not the current one.

So I have come up with something more flexible that gives me URLs very similar to what I want, but allows for (relatively) safe changes to the title of the post (and therefore the URL).

Read more... (2 minutes remaining to read.)


Ultimate physical limits to computation

Lloyd, Seth. 2000. Ultimate physical limits to computation. Nature 406:1047–1054.

I just re-read part of this classic CS paper (PDF), and the figure captions at the back stood out to me as being particularly hilarious:

Figure 1: The Ultimate Laptop

The ‘ultimate laptop’ is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. [...] Although its computational machinery is in fact in a highly specified physical state with zero entropy, while it performs a computation that uses all its resources of energy and memory space it appears to an outside observer to be in a thermal state at approx. \( 10^9 \) degrees Kelvin. The ultimate laptop looks like a small piece of the Big Bang.

Figure 2: Computing at the Black-Hole Limit

The rate at which the components of a computer can communicate is limited by the speed of light. In the ultimate laptop, each bit can flip approx. \( 10^{19} \) times per second, while the time to communicate from one side of the one liter computer to the other is on the order of \( 10^{-9} \) seconds: the ultimate laptop is highly parallel. The computation can be sped up and made more serial by compressing the computer. But no computer can be compressed to smaller than its Schwarzschild radius without becoming a black hole. A one-kilogram computer that has been compressed to the black hole limit of \( R_S = \frac{2Gm}{c^2} = 1.485 \times 10^{-27} \) meters can perform \( 5.4258 \times 10^{50} \) operations per second on its \( I = \frac{4\pi G m^2}{\hbar c \ln 2} = 3.827 \times 10^{16} \) bits. At the black-hole limit, computation is fully serial: the time it takes to flip a bit and the time it takes a signal to communicate around the horizon of the hole are the same.
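Those numbers are fun to verify; a quick back-of-the-envelope script with textbook constant values reproduces all three:

import math

G    = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c    = 2.998e8     # speed of light (m/s)
hbar = 1.0546e-34  # reduced Planck constant (J s)
m    = 1.0         # mass of the computer (kg)

R_s  = 2 * G * m / c**2                                   # Schwarzschild radius
ops  = 2 * m * c**2 / (math.pi * hbar)                    # Margolus-Levitin bound: 2E / (pi hbar)
bits = 4 * math.pi * G * m**2 / (hbar * c * math.log(2))  # black-hole entropy in bits

print(f"R_s  = {R_s:.4g} m")   # ~1.485e-27
print(f"ops  = {ops:.4g} /s")  # ~5.426e50
print(f"bits = {bits:.4g}")    # ~3.827e16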


The Gooch Lighting Model

"A Non-Photorealistic Lighting Model For Automatic Technical Illustration"

I've recently been toying with the Gooch et al. (1998) non-photorealistic lighting model. Unfortunately, the nature of the project does not permit me to post any of the "real" results quite yet, but some of the tests have a nice look to them all on their own.
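For anyone unfamiliar with it, the model itself is refreshingly simple: shading blends from a cool colour on surfaces facing away from the light to a warm colour on surfaces facing it, each tinted by the diffuse colour. A minimal sketch in numpy (the colours and weights below are illustrative defaults, not the values from my implementation):

import numpy as np

def gooch_shade(normals, albedo, light_dir,
                cool=(0.0, 0.0, 0.55), warm=(0.3, 0.3, 0.0),
                alpha=0.25, beta=0.5):
    """normals: (H, W, 3) unit normals; albedo: (H, W, 3) colour map."""
    light = np.asarray(light_dir, float)
    light /= np.linalg.norm(light)
    ndotl = np.sum(normals * light, axis=-1)
    t = np.clip((1.0 + ndotl) / 2.0, 0.0, 1.0)[..., None]  # 0 = facing away, 1 = lit
    k_cool = np.asarray(cool) + alpha * albedo  # tint for the shadowed side
    k_warm = np.asarray(warm) + beta * albedo   # tint for the lit side
    return (1.0 - t) * k_cool + t * k_warm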

My implementation takes a normal map and colour map, e.g.:

This is the result from those inputs:


Cleaning Comments with Akismet

My site recently (finally) started to get hit by automated comment spam. There are a few ways that one can traditionally deal with this sort of thing:

  1. Manual auditing: Manually approve each and every comment that is made to the website. Given the low volume of comments I currently have this wouldn't be too much of a hassle, but what fun would that be?
  2. Captchas: Force the user to prove they are human. reCAPTCHA is the nicest in the field, but even it has been broken. And this doesn't stop humans who are being paid (very little).
  3. Honey pots: Add an extra field[1] to the form (e.g. last name, which I currently do not have) that is hidden by CSS. If it is filled out, one can assume a robot did it and mark the comment as spam. This still doesn't beat humans.
  4. Contextual filtering: Use Bayesian spam filtering to profile every comment as it comes in. By correcting incorrect profiles, we slowly improve the quality of the filter. This is the only automated method that is able to catch humans.

I decided to go with the last option, as offered by Akismet, the fine folks who also provide Gravatar (which I have talked about before). They have a free API (for personal use) that is really easy to integrate into whatever project you are working on.
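For a sense of how easy: the whole check is a single form-encoded POST that answers with a literal "true" or "false". A minimal sketch using the Python requests library (the key and form values here are placeholders, not my actual integration):

import requests

def is_spam(api_key, blog_url, ip, user_agent, author, content):
    resp = requests.post(
        "https://%s.rest.akismet.com/1.1/comment-check" % api_key,
        data={
            "blog": blog_url,
            "user_ip": ip,
            "user_agent": user_agent,
            "comment_author": author,
            "comment_content": content,
            "comment_type": "comment",
        },
    )
    # Akismet answers with the literal string "true" (spam) or "false" (ham).
    return resp.text.strip() == "true"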

Now it is time to try it out. I've been averaging about a dozen automated spam comments a day. With luck, none of them will show up here.

*crosses his fingers*

Update:
I was just in touch with Akismet support to offer them a suggestion regarding their documentation. Out of nowhere, they took a look at the API calls I was making to their service and pointed out how I could modify them to catch spam more effectively!

That is spectacular support!


  [1] The previously linked article is dead as of Sept. 2014.

View posts before November 26, 2011