Post Archive
WordPress, and the Pingback of Death
The journey to discover why I couldn't keep a website up.
I host a number of websites for clients, friends, and family. A solid number of those are running WordPress.
I rarely suffer problems with them... except for one site. This site has been going down, and staying down, to the point that I routinely SSH in to forcibly restart the locked-up PHP processes.
I've tried to fix it in the past to no avail. Previously I've migrated the site to a new host with an updated OS, tweaked a great many configuration settings all over the system and site, and very recently I've changed the server setup to match that of other high-traffic WordPress sites I host.
But the lockups have been increasing in frequency to the point where, today, the site would not stay up for more than 30 minutes before refusing connections.
Finally fed up, I set out to fix it for good; this is that journey.
Repairing Python Virtual Environments
When upgrading Python breaks our environment, we can rebuild.
Python virtual environments are a fantastic method of insulating your projects from each other, allowing each project to have different versions of their requirements.
They work (at a very high level) by making a lightweight copy of the system Python, which symlinks back to the real thing whenever necessary. You can then install whatever you want in lib/pythonX.Y/site-packages (e.g. via pip), and you are good to go.
Depending on what provides your source Python, however, upgrading it can break things. For example, I use Homebrew, which (under the hood) stores everything it builds in versioned directories:
$ readlink $(which python)
../Cellar/python/2.7.8_2/bin/python
Whenever there is even a minor change to this Python, the symlinks back to that versioned directory may no longer work, which breaks my virtual environments:
$ python
dyld: Library not loaded: @executable_path/../.Python
  Referenced from: /Users/mikeboers/Documents/Flask-Images/venv/bin/python
  Reason: image not found
Trace/BPT trap: 5
There is an easy fix: manually remove the links back to the old Python, and rebuild the virtual environment.
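The repair can be sketched roughly like this (a minimal sketch: demo-venv stands in for your real environment directory, and the old Python path is made up for illustration):

```shell
#!/usr/bin/env bash
set -e

# Stand-in for a broken virtualenv: a directory containing a symlink
# whose target (the old, removed Python) no longer exists.
mkdir -p demo-venv/bin
ln -sf /old/Cellar/python/2.7.8/bin/python demo-venv/bin/python

# The actual repair step: delete any symlink whose target is gone.
find demo-venv -type l ! -exec test -e {} \; -delete
```

With the dangling links removed, re-running virtualenv over the same directory (e.g. `virtualenv demo-venv`) recreates them against the current Python, leaving the packages already installed in site-packages alone.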
Digital Ocean is Stingy on the Swap
Sometimes a little swap space is all you need, but you have to put in a little effort for it.
I've been provisioning a pile of tiny VPSes (from Digital Ocean) for tiny web services for the last few weeks. While tuning one such site, I made an incorrect assumption that caused MySQL to fall over: Digital Ocean boxes default to having no swap space.
Assuming you want a little bit of leeway in your memory limits, it is easy to add some swap:
# Create a 1GB blank disk image.
dd if=/dev/zero of=/var/swap.img bs=1M count=1024

# Activate it as swap space.
mkswap /var/swap.img
swapon /var/swap.img

# Set it to activate at startup.
echo "/var/swap.img none swap sw 0 0" >> /etc/fstab
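Once that's done, it's easy to confirm the kernel actually sees the new swap (assuming a Linux box, as on Digital Ocean; the numbers will vary by machine):

```shell
# List active swap areas; a line for /var/swap.img should appear
# below the header once swapon has run.
cat /proc/swaps

# Totals as the kernel reports them, in kB.
grep -i '^Swap' /proc/meminfo
```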
Streamlining MySQL Authentication
Quick tip for easy access.
I've gotten far too used to Postgres' ability to authenticate you by your system uid, and I tire of continually copy-pasting massive passwords for my MySQL servers.
However, there is a way to streamline this: create a .my.cnf file in your home directory that looks like:

[client]
user=myname
password=mypassword
Just make sure that you are the only one who can read it (chmod go= .my.cnf), and you are good to go!
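Putting it together, the whole setup is two steps (the credentials here are obviously placeholders; substitute your own):

```shell
# Write the client credentials into ~/.my.cnf (placeholder values).
cat > ~/.my.cnf <<'EOF'
[client]
user=myname
password=mypassword
EOF

# Lock it down so only you can read it.
chmod go= ~/.my.cnf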
Pseudo suExec with PHP Behind Nginx
For those who don't want to run more than one php-cgi... for some reason.
I recently started transitioning all of the websites under my management from Apache to nginx (mainly to ease running my Python webapps via gunicorn, but that is another story).
Since nginx will not directly execute PHP (via either CGI or nginx-managed FastCGI), the first step was to get PHP running at all. I opted to run php-cgi via daemontools; my initial run script was fairly straightforward:
#!/usr/bin/env bash
exec php-cgi -b 127.0.0.1:9000
Couple this with a (relatively) straightforward nginx configuration and the sites will already start responding:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/httpdocs;
    index index.php index.html;
    fastcgi_index index.php;

    location ~ \.php {
        keepalive_timeout 0;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$uri;
        fastcgi_pass 127.0.0.1:9000;
    }
}
The tricky part came when I wanted to run PHP under the user who owned each site. I could have (and perhaps should have) opted to spin up a copy of php-cgi for each user, but I decided to try something a little sneakier: PHP will set its own UID on each request.
There are no more posts tagged "devops".