Blog Archive

Viewing page 1 of the archive for July 2012

Working on a three-coffee problem today.

@mikeboers on Twitter.

New Project: Haikuize

I just started a new toy project for extracting Haikus from straight prose. It is currently in very rough shape and not very capable, but it is still fun to play with.

Choice examples from Emily Carr's "Klee Wyck" include:

Beaches Trees Held Back
By Rocky Cliffs Pointed Fir
Trees Climbing In Dark

Tipped Forward In Sleep
And Rolled Among The Bundles
The Old Man Shipping

The Sun And The Moon
Crossed Ways Before Day Ended
By And By The Bulls

Another example from Sun Tzu's "The Art of War":

Retained In Command
The General That Hearkens
Not To My Counsel

Watch the project on GitHub to see it develop.
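For anyone curious how such an extractor might work, here is a minimal sketch of the general approach: slide over a word stream looking for 5-7-5 syllable windows. The syllable counter is a crude vowel-group heuristic, and none of this is Haikuize's actual code.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def take_line(words, start, target):
    """Return the index just past a run of words totalling exactly
    `target` syllables, or None if the run over/undershoots."""
    total, i = 0, start
    while i < len(words) and total < target:
        total += count_syllables(words[i])
        i += 1
    return i if total == target else None

def find_haikus(text):
    """Scan every starting position for a 5-7-5 window."""
    words = re.findall(r"[A-Za-z']+", text)
    haikus = []
    for start in range(len(words)):
        a = take_line(words, start, 5)
        if a is None:
            continue
        b = take_line(words, a, 7)
        if b is None:
            continue
        c = take_line(words, b, 5)
        if c is None:
            continue
        haikus.append((words[start:a], words[a:b], words[b:c]))
    return haikus
```

Real prose rarely lines up this neatly, which is most of the fun: the quality of the output is entirely at the mercy of the syllable counter.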


I have been working with enough RenderMan lately that I can almost see the AOVs as I walk around in meatspace.

@mikeboers on Twitter.

@mikeboers I guess that's a bit better than swimming through the code that drives those AOVs. ;)

@w00tDude (w00t Dude) on Twitter.

@w00tDude True. You must be haunted by radiosity caches wherever you go, which I imagine to be pretty terrifying.

@mikeboers on Twitter.


Had to sign an NDA before an interview today, and it was quite possibly the fairest NDA I have ever seen; it was a pleasure to agree to.

@mikeboers on Twitter.

I have started working on my DCPU-16 toolkit for @notch's upcoming #0x10c. I can't wait to see how this develops!

@mikeboers on Twitter.

Pseudo suExec with PHP Behind Nginx

For those who don't want to run more than one php-cgi... for some reason.

I recently started transitioning all of the websites under my management from Apache to nginx (mainly to ease running my Python webapps via gunicorn, but that is another story).

Since nginx will not directly execute PHP (via either CGI or nginx-managed FastCGI), the first step was to get PHP running at all. I opted to run php-cgi via daemontools; my initial run script was fairly straightforward:

#!/usr/bin/env bash
# php-cgi's -b flag takes the address to bind the FastCGI socket to.
exec php-cgi -b 127.0.0.1:9000

Couple this with a (relatively) straightforward nginx configuration and the sites will already start responding:

server {

    listen          80;
    root            /var/www/;

    index           index.php index.html;
    fastcgi_index   index.php;

    location ~ \.php {
        keepalive_timeout 0;
        include /etc/nginx/fastcgi_params;
        fastcgi_param   SCRIPT_FILENAME  $document_root$uri;
        fastcgi_pass    127.0.0.1:9000;
    }
}

The tricky part came when I wanted to run PHP as the user who owns each site. I could have (and perhaps should have) opted to spin up a copy of php-cgi for each user, but I decided to try something a little sneakier: have PHP set its own UID on each request.
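The idea can be sketched as an `auto_prepend_file` that php-cgi runs before every request. This is a hypothetical sketch, not the post's actual code; it assumes php-cgi starts with enough privilege to call `posix_setuid` (i.e. as root), which carries real security trade-offs.

```php
<?php
// Hypothetical auto_prepend_file: before handling each request,
// drop privileges to the owner of the script being served.
// Requires PHP's posix extension.
$script = $_SERVER['SCRIPT_FILENAME'];
$owner  = fileowner($script);
if ($owner === false || $owner === 0) {
    // Never run as root or for a file we can't stat.
    header('Status: 500 Internal Server Error');
    exit('refusing to serve this script');
}
// Group must be set before the UID drop, or we lose permission to do it.
posix_setgid(filegroup($script));
posix_setuid($owner);
```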

Read more... (1 minute remaining to read.)


I'm writing #PHP for the first time in a few years (for some @Wordpress plugins); I feel a little conflicted about this.

@mikeboers on Twitter.


Friendlier (and Safe) Blog Post URLs

Until very recently, the URLs for individual blog posts on this site looked something like:

The 601 is the ID of this post in the site's database. I have always had two issues with this:

  1. The ID is meaningless to the user, but it is what drives the site.
  2. The title is meaningless to the site (you could change it to whatever you want), but it is what appears important to the user.

What they would ideally look like is:

But since I tend to get a new post up quickly and then edit it a dozen times before I am satisfied (including the title), the URL would not be stable, and the implementations I have seen in other blog platforms force the URL to retain the post's original title, not its current one.

So I have come up with something more flexible that gives me URLs very similar to what I want, but allows for (relatively) safe changes to the title of the post (and therefore the URL).
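A common way to implement this pattern (sketched here under assumed names; I don't know this site's exact implementation) is to treat the ID as the only authoritative part of the URL, and redirect to the canonical slug whenever the one in the request is stale:

```python
import re

def slugify(title):
    """Lowercase the title and collapse non-alphanumeric runs to hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Hypothetical post store: ID -> current title.
POSTS = {601: "Friendlier (and Safe) Blog Post URLs"}

def resolve(post_id, slug):
    """Look up the post by its stable ID. If the slug in the URL is
    stale, signal a redirect to the canonical URL instead of a 404."""
    title = POSTS.get(post_id)
    if title is None:
        return ("not_found", None)
    canonical = "/blog/%d/%s/" % (post_id, slugify(title))
    if slug != slugify(title):
        return ("redirect", canonical)
    return ("ok", canonical)
```

With this scheme, old links keep working through the redirect no matter how many times the title changes, and the ID never appears without a human-readable slug beside it.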

Read more... (2 minutes remaining to read.)


Ultimate physical limits to computation

Lloyd, Seth. 2000. Ultimate physical limits to computation. Nature 406:1047–1054.

I just re-read part of this classic CS paper (PDF), and the figure captions at the back stood out to me as being particularly hilarious:

Figure 1: The Ultimate Laptop

The ‘ultimate laptop’ is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. [...] Although its computational machinery is in fact in a highly specified physical state with zero entropy, while it performs a computation that uses all its resources of energy and memory space it appears to an outside observer to be in a thermal state at approx. \( 10^9 \) degrees Kelvin. The ultimate laptop looks like a small piece of the Big Bang.

Figure 2: Computing at the Black-Hole Limit

The rate at which the components of a computer can communicate is limited by the speed of light. In the ultimate laptop, each bit can flip approx. \( 10^{19} \) times per second, while the time to communicate from one side of the one-liter computer to the other is on the order of \( 10^{-9} \) seconds: the ultimate laptop is highly parallel. The computation can be sped up and made more serial by compressing the computer. But no computer can be compressed to smaller than its Schwarzschild radius without becoming a black hole. A one-kilogram computer that has been compressed to the black hole limit of \( R_S = \frac{2Gm}{c^2} = 1.485 \times 10^{-27} \) meters can perform \( 5.4258 \times 10^{50} \) operations per second on its \( I = \frac{4\pi G m^2}{\ln 2 \, \hbar c} = 3.827 \times 10^{16} \) bits. At the black-hole limit, computation is fully serial: the time it takes to flip a bit and the time it takes a signal to communicate around the horizon of the hole are the same.
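The headline numbers in that caption can be sanity-checked from first principles, assuming the Margolus-Levitin bound \( 2E/\pi\hbar \) on operations per second that the paper uses (modern CODATA constants reproduce the paper's figures to within the last digit or two):

```python
import math

# Physical constants (SI units, CODATA values)
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s
m    = 1.0              # mass of the "ultimate laptop", kg

# Schwarzschild radius of a 1 kg computer: R_S = 2Gm / c^2
R_S = 2 * G * m / c**2

# Margolus-Levitin bound on operations per second: 2 E / (pi * hbar), E = m c^2
ops = 2 * m * c**2 / (math.pi * hbar)

# Memory at the black-hole limit: I = 4 pi G m^2 / (ln 2 * hbar * c)
bits = 4 * math.pi * G * m**2 / (math.log(2) * hbar * c)

print(f"R_S  = {R_S:.4g} m")   # ~1.485e-27 m
print(f"ops  = {ops:.4g} /s")  # ~5.426e50 operations per second
print(f"bits = {bits:.4g}")    # ~3.827e16 bits
```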
