While there are numerous tutorials on loading incremental changes from SQL data into logstash using the jdbc plugin, all the examples I found used either a numeric id column or a timestamp column as the high-water mark (:sql_last_value).

What do you do if your high-water column is a unix timestamp, and so is not formatted in the date-time format the jdbc plugin uses (‘%Y-%m-%d %T.%f000 Z’)?

If you have only one unix timestamp column you need to use as a high-water column, then you can forget the timestamps altogether: set tracking_column to your unix timestamp column, set tracking_column_type to “numeric,” and set use_column_value to “true.”

tracking_column => "changed"
tracking_column_type => "numeric"
use_column_value => true
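For context, a complete jdbc input using this approach might look something like the sketch below. The connection settings, schedule, and table/column names are placeholders you would replace with your own; only the three tracking settings are the point here:

```
input {
  jdbc {
    # placeholder connection settings -- substitute your own
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM mytable WHERE changed > :sql_last_value"
    # track the unix timestamp column numerically instead of as a date
    use_column_value => true
    tracking_column => "changed"
    tracking_column_type => "numeric"
  }
}
```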

But what if you have several columns in the same table or joined tables that you want to compare to a timestamp and all those columns are unix timestamps? Then your single :sql_last_value will not suffice if it is a numeric type against a specific column. Instead, you need to use a timestamp high-water, and somehow find a way to convert unix timestamps to the same date format as is used by the logstash jdbc plugin.

Here’s what I ended up doing (probably MySQL-specific):

SELECT * FROM mytable
WHERE created > FLOOR(UNIX_TIMESTAMP(STR_TO_DATE(:sql_last_value, '%Y-%m-%d %T.%f000 Z')))
OR access > FLOOR(UNIX_TIMESTAMP(STR_TO_DATE(:sql_last_value, '%Y-%m-%d %T.%f000 Z')))
OR login > FLOOR(UNIX_TIMESTAMP(STR_TO_DATE(:sql_last_value, '%Y-%m-%d %T.%f000 Z')));

This converts the logstash/jdbc native :sql_last_value date (stored as `YYYY-mm-dd HH:MM:SS.000000000 Z`) into a unix seconds epoch that you can then compare against your unix timestamp columns in your database.
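If you want to sanity-check that conversion outside of MySQL, here is a minimal shell sketch (assuming GNU date, and using a made-up timestamp value):

```shell
# Convert a :sql_last_value-style timestamp string to a unix epoch.
ts='2024-01-15 10:30:00.000000000 Z'
# Strip everything from the fractional seconds onward so date can parse it,
# and interpret the remainder as UTC (the trailing "Z" implies UTC).
epoch=$(date -u -d "${ts%%.*}" +%s)
echo "$epoch"   # 1705314600
```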

Hope that helps anyone else who was struggling with comparing multiple unix timestamp columns against one logstash/jdbc plugin native timestamp.

One AWS EC2 micro instance + Debian + Nginx + WordPress + Varnish = one year free hosting, sort of.

First, launch a fresh instance of Debian (or Ubuntu).

Most of our work today will be run as the root user, so let’s switch over to the root user’s privileges:

sudo -i 

In case your image is a bit out of date, now is a good time to update the apt repositories.

apt-get update

Who are these amazing people who volunteer late nights keeping all these packages up to date? Whoever you are, thank you.

Step One – Install Software


apt-get install mysql-server mysql-client 

You will be prompted to set and confirm your ‘root’ SQL password – remember this.


apt-get install nginx 

Start it up:

/etc/init.d/nginx start 

Make sure you are now seeing your ‘Welcome to nginx!’ page by visiting your Public DNS URL:


AWS firewalls: If you can’t load that page and the request simply disconnects after a few minutes, you probably haven’t set up an AWS EC2 security group that allows incoming traffic over port 80. Add that rule in your AWS console. It’s probably a good idea to ensure only port 22 (ssh) and port 80 (http) are open on your box.


nginx pipes php to a php-fpm backend rather than loading it and processing it in its own memory. So we’ll need not just php, but php-fpm as well.

apt-get install php5-fpm php5-mysql 

Step Two – Configure nginx & php-fpm

On debian/ubuntu, nginx hosts are configured here:


Edit your configuration files in the ‘sites-available’ directory, and when you want to enable them, just symlink them in the ‘sites-enabled’ directory.

Let’s take a look at the default configuration. Run this command to open it up:

vi /etc/nginx/sites-available/default 

Add ‘index.php’ to the ‘index’ line, so a php index page will get picked up as an index.

index index.html index.htm index.php; 

And then uncomment the following so the lines below are active, like so:

location ~ \.php$ {
  fastcgi_split_path_info ^(.+\.php)(/.+)$;
  fastcgi_pass unix:/var/run/php5-fpm.sock;
  fastcgi_index index.php;
  include fastcgi_params;
}

Confirm that PHP-FPM is set up to use the Unix Socket as specified in our default configuration.

vi /etc/php5/fpm/pool.d/www.conf 

And then find the line that starts with listen = . Make sure it looks like this.

listen = /var/run/php5-fpm.sock 

There’s a lot of debate out there over whether to use the unix socket or a TCP port for php-fpm, and I don’t know who’s right.

Time to restart php5-fpm and nginx:

/etc/init.d/php5-fpm restart
/etc/init.d/nginx restart 

We can test if php is working now by adding the following snippet of php to a file in the server’s document root:

vi /usr/share/nginx/www/info.php 

P.H.P this.

<?php phpinfo(); ?> 

Now you should be able to see the PHP info page by going to:


It looks like nginx is working with PHP. That’s good.

Uh, there’s a lot of precious information in that info.php file. Throw it away once you’ve confirmed php is working (rm /usr/share/nginx/www/info.php).

Setup your host or virtual host

If you’re certain you will run only one web site on this box, just edit the default nginx configuration to reflect what you need. If you have any doubts and might want to run more than one site, you might as well start laying the files out correctly by creating virtual hosts. In this example, you just need to change ‘www.mysite.com’ to match your domain, and make sure the root path points to where you plan on serving http documents from. Let’s config!

vi /etc/nginx/sites-available/mysite.com 

And then copy this configuration into it, after changing the domain:

server {
   listen 80;
   server_name www.mysite.com mysite.com;
   root /var/www/mysite.com/public;

   index index.php index.html;

   location = /favicon.ico {
            log_not_found off;
            access_log off;
   }

   location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
   }

   # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
   location ~ /\. {
            deny all;
            access_log off;
            log_not_found off;
   }

   location / {
            try_files $uri $uri/ /index.php?$args;
   }

   # Add trailing slash to */wp-admin requests.
   rewrite /wp-admin$ $scheme://$host$uri/ permanent;

   location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
            expires max;
            log_not_found off;
   }

   location ~ \.php$ {
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   }
}

Enable the host

Now let’s sym-link the site to enable it, and we’ll also remove the default configuration:

cd /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/mysite.com mysite.com
rm default 

We don’t need to restart nginx. Just reload it to capture your new configuration (you can validate the configuration first with nginx -t):

/etc/init.d/nginx reload 

Make a place for the site’s application code.

We configured our virtual host to serve and process files from a root directory. But we don’t have that directory yet. Let’s fix that.

mkdir -p /var/www/mysite.com/public 

Step Three – Install your apps (WordPress)

Download it. If it downloads too fast, you can use various proxy tools to slow down your network data delivery.

cd /tmp
wget http://wordpress.org/latest.tar.gz
tar xvfz latest.tar.gz
cd wordpress/
mv * /var/www/mysite.com/public/ 

Set up MySQL for your WordPress installation

Before we can actually set up our wordpress app, we need to make MySQL behave. Since this is an example, we’ll call the database ‘wp_database’, the username for the user connecting to this database will be ‘wp_user’, and the password ‘wp_password’. They say you should replace these with your own values, but what do they know?

Creating the database:

mysqladmin -u root -p create wp_database 

Adding the User

Connect to the mysql server as the mysql root user like this:

mysql -u root -p 

Run these queries to add your wordpress user and allow the user to actually work with the database:

GRANT ALL PRIVILEGES ON wp_database.* TO 'wp_user'@'localhost' IDENTIFIED BY 'wp_password';
GRANT ALL PRIVILEGES ON wp_database.* TO 'wp_user'@'localhost.localdomain' IDENTIFIED BY 'wp_password';
FLUSH PRIVILEGES;
quit;

That last line about flushing privileges is not a political statement. If you neglect it, all the privileges you just granted user wp_user will not take effect and you’ll be posting to a lot of forums.

WordPress requires that the web server be able to write to some of the files in the wordpress codebase. So we need to make sure nginx & php-fpm own the files. (On debian, php-fpm runs as user www-data, as does nginx.)

chown -R www-data:www-data /var/www/mysite.com/public/ 

WordPress ships with a sample configuration file. We’ll use it, but we just need to move it into place:

mv /var/www/mysite.com/public/wp-config-sample.php /var/www/mysite.com/public/wp-config.php 

Now let’s add our sql setting:

vi /var/www/mysite.com/public/wp-config.php 

Now add your credentials:

/** The name of the database for WordPress */
define('DB_NAME', 'wp_database');
/** MySQL database username */
define('DB_USER', 'wp_user');
/** MySQL database password */
define('DB_PASSWORD', 'wp_password'); 

Launch WordPress

To run the first-time setup on WordPress, visit:


Use the configuration screens to set up your site name and first administrator user. You should now be able to log onto WordPress. Nota bene: You can’t change the username of that user once you’ve set it up, so put a tiny bit of thought into it.

Step Four: Install Varnish

We want the site to be faster! Let’s get Varnish installed. By default nginx will be listening to port 80, but Varnish needs to handle those requests and pass them back to nginx. So now we change our nginx configuration to listen to a different port:

vi /etc/nginx/sites-available/mysite.com 

Change the listen configuration so it looks like this:

listen 8080;

This now means that we can run Varnish on port 80.

Time to install varnish. First we have to add the repo for varnish to our apt configuration.

curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list

Then, install it.

apt-get update
apt-get install varnish

Configure Varnish

Remember how nginx used to listen on port 80 and we changed it to listen to 8080 instead? Now we need varnish to take over the http port 80 traffic, like this:

vi /etc/default/varnish 

Then change the -a flag so Varnish answers on port 80 (the remaining flags shown here are the stock Debian defaults for varnish 3.0):

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

Now, edit the varnish configuration file:

vi /etc/varnish/default.vcl 

Edit the file so it looks like this:

backend default {
  .host = "127.0.0.1"; # nginx, on the same box, now listening on 8080
  .port = "8080";
}

# Drop any cookies sent to WordPress.
sub vcl_recv {
    if (!(req.url ~ "wp-(login|admin)")) {
        unset req.http.cookie;
    }
}

# Drop any cookies WordPress tries to send back to the client.
sub vcl_fetch {
    if (!(req.url ~ "wp-(login|admin)")) {
        unset beresp.http.set-cookie;
    }
}

Restart varnish and nginx.

/etc/init.d/nginx restart
/etc/init.d/varnish restart

Finishing up

You’ll notice that when you make a change to a post, you won’t see it, since varnish is caching it. I recommend the W3 Total Cache WordPress plugin. It purges the varnish cache when you publish a post. Enable these things:

  • Page Cache: Enabled – Opcode: Alternative PHP Cache (APC)
  • Object Cache: Enabled – Opcode: Alternative PHP Cache (APC)
  • Browser Cache: Enabled
  • Varnish: Enabled – Varnish Servers: localhost

A lot of people put in a ton of unpaid work to make these things function so well. And yet the people on the cover of so-called tech magazines are only the venture capitalists and start-uppers.

I wasted days of my life trying to figure out how to create a solid deployment of node.js applications across heterogenous environments. Maybe I can spare you some of the trouble.

To me, deployable means that the application should:

  1. Roll out, with 100% confidence, across development, quality assurance, and production environments, with no modification to the codebase.
  2. Require only a minimal subset of pre-installed libraries. In the case of node.js, this means each environment should guarantee only the correct node version and npm version. Nothing else.
  3. Be monitorable for uptime, and restartable like any other unix service.

So, I have written a node.js application. I have provided a package.json file. I run npm install in the application’s folder. All the dependencies are installed correctly and my application runs. Great!

I then git add the application to the repository. And then I go to another server and do a git pull of my current branch. I try to start my application, and it fails, due to missing libraries. Why?

The problem is that npm-installed modules ship with instructions meant for the module developer’s own source code management tool, such as svn ignore or git ignore files. These files are included when developers export their modules to the archive.

So if you are unlucky enough to be using the same source code management tool as the module developer, you might be running afoul of the instructions the developer gave to his or her own repository.

So, as a git user, when I install my node modules and then issue the git add command, the downloaded modules are not allowing git to add their dependencies, because of the included .gitignore files telling git to ignore the node_modules directories. Thus, the application works locally, but not when you try to pull into another environment.

Developers need to ignore their own dependencies during development and packaging. This is appropriate. But they should also be removing their source code management tool’s helper file before they share their code.

That, however, is not going to happen, so it’s up to us to resolve.

So what I do now, before adding and committing new code in a node module, is this:

for f in $(find . -type f -name '.gitignore'); do
  sed -i'' -e '/^node_modules/ d' "$f"
done
which removes all the references in the .gitignore instructions for node_modules directories.
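If you want to see exactly what that loop does before running it on a real checkout, this self-contained sketch builds a throwaway tree containing the kind of .gitignore npm modules ship, runs the same sed, and shows that only the node_modules line is removed:

```shell
# Build a scratch tree containing a .gitignore like the ones npm modules ship.
tmp=$(mktemp -d)
mkdir -p "$tmp/node_modules/somelib"
printf 'node_modules\n*.log\n' > "$tmp/node_modules/somelib/.gitignore"

# Run the same cleanup loop from the scratch tree's root.
( cd "$tmp"
  for f in $(find . -type f -name '.gitignore'); do
    sed -i'' -e '/^node_modules/ d' "$f"
  done )

cat "$tmp/node_modules/somelib/.gitignore"   # only "*.log" remains
rm -rf "$tmp"
```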

Then, when I git add, *all* the npm installed code gets added to the repo, not just the top-level dependencies. Pulling the code into another environment now retrieves all the dependencies, even three or four levels down.

I will follow up with some suggestions covering the other points I made about making a node.js app deployable.

Josh Long, of MacTech Magazine, interviews Kristofer Widholm, lead developer of AppleJack, for an episode of MacTech Live. We delve into some of the parts of the hidden “expert mode,” and talk about the history of AppleJack. You can hear the somewhat boisterous (and embarrassingly geeky) interview here.

The long (and somewhat bitterly) awaited AppleJack 1.6 with support for Snow Leopard has been released and is immediately available from http://sourceforge.net/projects/applejack.

Current Version: There would probably be no AppleJack for Leopard or Snow Leopard without the yeoman efforts of Steve Anthony. He put countless hours into solving the Leopard startup riddle, thereby giving Leopard compatibility a chance at seeing the light of day. Also, I wish to thank Dave Provine at Premier Mac for helping kickstart the development process by providing 3 test partitions of Snow Leopard for me to mess with. A big thank you.

Testers: Thanks also to Charly Avital, Joshua Long (MacTech Magazine), John Stiver, Thomas Ungricht, and Matthew Weinman for risking their files (and their sanity) by helping test AppleJack before release on repeated occasions. Thanks also to all of you (too many to name) who have pitched in with the occasional bug report or test results.

+ Snow Leopard compatibility [feature 2845796] (Thanks again to Steve Anthony)
– Improved limits on output from syslog to STDOUT
– Simplified startup of services on Leopard and Snow Leopard
– Fixed bug in creation of user account lists in Snow Leopard where system accounts would show up
+ S.M.A.R.T. status verification is now being done in the expert mode. I still want to implement this using smartmontools, but for now diskutil will do.
+ Blessing of Mac OS X System folders on attached volumes is now possible. This is a primitive bless, ie, it does not create boot files, but simply blesses the chosen System folder and (optionally) sets it to be used for startup on next launch.

Smultron for Tiger Users

I was saddened to see that the lovely, generous, and top-notch independent developer Peter Borg decided to quit developing his sharp little text editor “Smultron” (which is Swedish for wild strawberry, in case you’re wondering).

The package is still available from sourceforge.net, of course, but if you are a Mac OS X Tiger user like I am, you might be discouraged to note that none of the recent releases work with Tiger, and there are no release notes or comments indicating which file is the most up-to-date Tiger-compatible release.

I did some brute-force testing, and discovered that version 3.1.2 of Smultron is the last release that supported Tiger.

You can download Smultron for Tiger here: http://sourceforge.net/projects/smultron/files/

One of the reasons I like Smultron so much is that it supports bash syntax highlighting, something BBEdit has never done. (I might also add, incidentally, parenthetically, sotto voce, etc., that the bash syntax highlighting was a feature Mr. Borg generously included due to me literally begging for it.)

Now I’m sure he’s working on the next great thing. Meanwhile, a fork of Smultron has been created (aptly named Fraise, French for strawberry) and development goes on.

Terse commentary on social media

Courtesy of twitter user @hotdogsladies. Pithy, always brilliant:

via @hotdogsladies (June 3, 2009)

“Star Reviews
(ordered by typical usefulness)

1. ★★★
2. ★★
3. ★★★★
4. ★★★★★
5. ★”

A UPS delivery woman appears. There’s a big box on her shoulder.
—Please sign here, she says.

I sign. I look at the address of origin. Not even the faintest flicker of recognition. Who or what?

I open the box. Inside the box is the largest and heaviest trophy I’ve ever received. It’s a sixteen-inch, bronze and gold plated statue featuring an androgynous, futuristic, vaguely human being holding aloft—what is it, a Mac SE/30? At its base, the words: “2008 MacWorld Editor’s Choice Awards, AppleJack 1.5, The Apotek.” Come to think of it, the statue shares some characteristics with an Academy of Motion Picture Arts and Sciences award (a.k.a. the Oscar): mute, muscular, angular, yet poised for eternal calm.


I sometimes use my Treo 680 as a voice recorder to capture ideas for songs. For a long time I thought all the files were being backed up to my Mac. It wasn’t until I had to do a hard reset that I realized they were not restored to the Voice Memo application as selectable voice memos. They were still on my Treo, but I couldn’t get to them or play them.

When I looked through my Palm user data files on my Mac I saw that they were in the backup folder as vpad.pdb files.
[ ~/Documents/Palm/Users/palm_username/Backups] $ ls -al | grep Vpad
-rw-r--r-- 1 user user 5430 Jul 5 20:13 07-10-22-16-32-Vpad.pdb
-rw-r--r-- 1 user user 3782 Jul 5 20:13 07-10-22-16-322-Vpad.pdb
-rw-r--r-- 1 user user 188406 Jul 5 20:13 07-11-14-1-24-Vpad.pdb
-rw-r--r-- 1 user user 64214 Jul 5 20:13 07-11-20-18-22-Vpad.pdb
-rw-r--r-- 1 user user 11590 Jul 5 20:13 07-11-20-18-222-Vpad.pdb
-rw-r--r-- 1 user user 79030 Jul 5 20:13 07-11-20-18-23-Vpad.pdb
-rw-r--r-- 1 user user 84390 Jul 5 20:13 07-11-20-18-25-Vpad.pdb
-rw-r--r-- 1 user user 40790 Jul 5 20:13 07-11-25-19-38-Vpad.pdb
-rw-r--r-- 1 user user 60518 Jul 5 20:13 07-3-16-21-37-Vpad.pdb
-rw-r--r-- 1 user user 107014 Jul 5 20:13 07-3-20-0-19-Vpad.pdb
-rw-r--r-- 1 user user 28038 Jul 5 20:13 07-3-20-0-23-Vpad.pdb
-rw-r--r-- 1 user user 43270 Jul 5 20:13 07-3-20-0-25-Vpad.pdb
-rw-r--r-- 1 user user 425270 Jul 5 20:13 07-7-24-1-45-Vpad.pdb
-rw-r--r-- 1 user user 131254 Jul 5 20:13 07-8-5-0-25-Vpad.pdb
-rw-r--r-- 1 user user 317942 Jul 5 20:13 07-8-5-22-36-Vpad.pdb
-rw-r--r-- 1 user user 333974 Jul 5 20:13 07-9-17-23-06-Vpad.pdb
-rw-r--r-- 1 user user 278870 Jul 5 20:13 08-2-10-18-31-Vpad.pdb
-rw-r--r-- 1 user user 85606 Jul 5 20:13 08-5-20-21-25-Vpad.pdb

However, there were no playable audio files where I expected to find them (/Users/username/Documents/Palm/Users/palm_username/Voice Memo/). That was when I realized that the hotsync conduit for Voice Memo did not work, and never had. There seems to be some incompatibility between the Treo 680 voice memo files and Mac OS X. I don’t see why this has not been fixed yet, but whatever the cause of this negligence on the part of the developers, I was stuck with a bunch of Vpad.pdb files that I could no longer access on my Palm or play on my Mac. Some of them were extremely important to me, containing ideas for an upcoming album.

Details of my scenario:
Palm Treo 680 running Palm OS Garnet v. 5.4.9
Voice Memo version 1.4
HotSync Voice Memo conduit version 1.0 (so that's why!!! :-) )
Mac OS X 10.4.11

Nevertheless, I felt fairly confident that embedded in these pdb files were some kind of normal audio file format. I could not imagine that Palm would have invented an entirely proprietary compressed audio format.
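One cheap way to test that hunch, a sketch only, is to scan one of the backups for well-known audio container magic strings. The filename here is taken from the backup listing above, and the signatures are just common ones (RIFF for WAV, ID3 for MP3, OggS for Ogg), not anything Palm-specific I can vouch for:

```shell
# Print the byte offset of the first occurrence of each audio signature
# inside the .pdb (-a: treat binary as text, -o: print only the match,
# -b: prefix it with its byte offset).
f='07-7-24-1-45-Vpad.pdb'
for sig in RIFF ID3 OggS; do
  grep -aob "$sig" "$f" | head -1
done
```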

BBDiff: quick patch file generation from within BBEdit

While BBEdit has a great tool for comparing and applying changes between files, it does not generate standard .patch files that can be used for bug reports, code fixes, and change logging.

BBDiff will compare the contents of the frontmost window to the contents of the window just behind it, and will use the diff command line tool to generate diff output which is then pasted into a new BBEdit window for saving or pasting into a Web site.

The command line called by BBDiff by default is:

diff -up newfile oldfile

You can modify this by simply changing the diffopts property at the top of the script.
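For reference, this is the kind of unified-diff output that invocation produces; the two scratch files here are made up for illustration:

```shell
# Create two scratch files that differ by one line.
printf 'alpha\nbeta\n'  > oldfile
printf 'alpha\ngamma\n' > newfile
# diff exits non-zero when the files differ; that's expected, not an error.
diff -up newfile oldfile || true
```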

BBDiff AppleScript.