Log Rotation

I hate log rotation. It’s a pain to configure on my FreeBSD server. Just look at newsyslog.conf. That, and my experiences of the utter non-portability of log rotation programs between different Unixes, have led me to believe that programs should probably handle their own log rotation. It just makes life easier having one less thing to integrate with the operating system.

So, I’ve switched my Apache over to using cronolog in order to get date based logfiles automatically. I’ve used it on a project at work recently and it really works a treat.

I also noticed that PostgreSQL has grown the ability to take care of its own log files in recent versions. So I’ve switched that to doing date based logging with automatic rotation as well. Lovely.

All is not perfect of course. Oh no, that would be far too simple. There’s still the issue of removing old logfiles and/or compressing them. But that seems to be a smaller integration problem.
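For what it’s worth, the sort of thing I have in mind is a small nightly cron job. This is only a sketch: LOGDIR is a stand-in for wherever cronolog writes its dated files (the demo defaults to a temporary directory with one stand-in log so it can run anywhere), and the retention periods are plucked from the air.

```shell
#!/bin/sh
# Nightly cleanup sketch for date-based logs. LOGDIR and the retention
# periods below are assumptions, not anything cronolog mandates.
LOGDIR="${LOGDIR:-$(mktemp -d)}"
touch "$LOGDIR/access.2006-03-01.log"    # stand-in dated log for the demo
# Compress logs untouched for more than a day...
find "$LOGDIR" -name '*.log' -mtime +1 -exec gzip {} ';'
# ...and expire compressed logs older than 30 days.
find "$LOGDIR" -name '*.log.gz' -mtime +30 -exec rm {} ';'
```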

Of course, the trigger for all this activity was finding a 220MB access_log lying around. Doh!


Ruby Postgres Gem Issues

I’m trying to install PostgreSQL support into Rails. This should be fairly simple. Unfortunately, there are a couple of “oopsies” along the way.

First up, there are several gems to pick from:

  • postgres
  • postgres-pr
  • ruby-postgres

If you’ve a compiler installed, you want ruby-postgres. It’s a fork of the postgres gem, which knows how to find the postgres libraries and headers through pg_config. The postgres-pr gem is a pure-ruby version.

Being foolish, I tried to install the ruby-postgres gem:

  % gem install ruby-postgres-

Sadly, this is what happens:

  Building native extensions.  This could take a while...
  ruby extconf.rb install -l ruby-postgres-
  make install
  make: Nothing to be done for `install'.

Weird [1]. So I tried putting make into debug mode (set MAKEFLAGS=-d in the environment).

  Finished prerequisites of target file `/opt/domtest/lib/ruby/gems/1.8/gems/ruby-postgres-'.
  Prerequisite `' is older than target `/opt/domtest/lib/ruby/gems/1.8/gems/ruby-postgres-'.
  No need to remake target `/opt/domtest/lib/ruby/gems/1.8/gems/ruby-postgres-'.
  Finished prerequisites of target file `install-so'.

Basically, the freshly-compiled extension is older than the installed version. Which isn’t installed, BTW. I checked. How can this be?

After much messing around, I discovered the reason: RubyGems compiles the extension in place. So when it comes to install time, make ends up comparing the compiled file with itself. Naturally, nothing happens.

But then RubyGems runs make clean, causing the freshly-built extension to be removed. Grrr! How on earth has this ever worked for anybody?
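The situation is easy to reproduce in miniature. The Makefile below is invented for the demo, but it has the same shape as the gem’s: the install location resolves to the build directory itself, so make finds the “installed” file already up to date.

```shell
#!/bin/sh
dir=$(mktemp -d)
# A toy Makefile: sitedir stands in for ruby's site directory, but here it
# points back at the build directory, just as the in-place gem build did.
# The rule uses a semicolon recipe so this heredoc needs no tab characters.
cat > "$dir/Makefile" <<'EOF'
sitedir = .
install: $(sitedir)/postgres.so
$(sitedir)/postgres.so: postgres.so ; cp postgres.so $(sitedir)/postgres.so
EOF
touch "$dir/postgres.so"
# The install target and its prerequisite are the same file, so nothing
# gets copied and make decides there is nothing to be done.
out=$( cd "$dir" && make install 2>&1 )
echo "$out"
```

Sure enough, it prints the familiar “Nothing to be done for `install'” message.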

Anyway, at this point, I’m going to give up and use the postgres-pr gem instead.

[1] I’m not the first to see this—see ruby postgres problems.


Singleton Rows in PostgreSQL

A common technique in Unix is the lock file. This is often followed up by having the script that creates the lock file run a command and automatically remove it afterwards. It’s a convenient way of enforcing serialisation.

Unfortunately, I’ve got a number of jobs to serialise across two different machines. They’re all database related tasks. So, I thought that I’d try to put the lock inside PostgreSQL itself as it’s what they share in common (never use NFS for locking!).

What we need is a table that can only ever contain one single row. Initially I thought of something like this:

  pid integer not null,
  hostname text not null,
  command text not null,
  created_at timestamp with time zone default now(),
  -- zero or one rows only in this table
  CHECK (count(*) < 2)

Sadly, you’re not allowed aggregate functions in a CHECK constraint. I’m not sure why, but I’ll have to work around it. So, instead, I came up with this:

  active boolean DEFAULT true NOT NULL UNIQUE,
  pid integer NOT NULL,
  hostname text NOT NULL,
  command text NOT NULL,
  created_at timestamp with time zone DEFAULT now(),
  CONSTRAINT no_inactive_mutex CHECK (active)

This has a boolean column, no nulls allowed, and with a unique index. So no more than two rows. Then, we add an extra constraint to say that no false values are allowed. This gets us to where we want to be—only zero or one rows allowed in this table.
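Putting that together (the table name mutex is my own invention; the constraint name comes from the definition above), the whole thing behaves like this:

```sql
CREATE TABLE mutex (
    active     boolean DEFAULT true NOT NULL UNIQUE,
    pid        integer NOT NULL,
    hostname   text NOT NULL,
    command    text NOT NULL,
    created_at timestamp with time zone DEFAULT now(),
    CONSTRAINT no_inactive_mutex CHECK (active)
);

-- The first acquisition succeeds:
INSERT INTO mutex (pid, hostname, command) VALUES (1234, 'db1', 'vacuum');

-- A second acquisition fails with a duplicate key error on "active":
INSERT INTO mutex (pid, hostname, command) VALUES (5678, 'db2', 'backup');

-- Releasing the lock is just:
DELETE FROM mutex;
```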

The next bit is wrapping all this up in a script, robustly. But that’s relatively trivial compared to the SQL. The main bit is ensuring that you use an END{} block (in Perl) or trap '...' EXIT (in a shell script) to ensure that you always remove the mutex on exit. Of course, it can never be as simple and reliable as a call to unlink, but it’s good enough for me.
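The shape of that wrapper, sketched with a plain file standing in for the database row (the real script would INSERT the mutex row where the touch is, and DELETE it in the trap):

```shell
#!/bin/sh
LOCK=$(mktemp -u)     # a path that doesn't exist yet
(
    trap 'rm -f "$LOCK"' EXIT     # always release, however we leave
    touch "$LOCK" || exit 1       # acquire
    # ... the serialised work goes here ...
)
# By the time the subshell has exited, the lock is gone again.
```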

Update 2018-09-12: Bono Stebler points out that there is a more idiomatic PostgreSQL solution, creating an index on a constant value:

CREATE UNIQUE INDEX ON table_name ((true));


MySQL vs PostgreSQL

At work, we’ve just started converting one of our projects to use MySQL instead of the present PostgreSQL. Due to client orders, you understand. Unfortunately, after using PostgreSQL, coming up against MySQL reveals how bizarre, baroque and arbitrary it is.

  • CREATE TABLE automatically commits the current transaction. So much for that schema loading script undoing its changes if it fails.
  • TEXT columns aren’t like PostgreSQL’s infinite length VARCHAR. They’re more like a 64k long blob. Heaven help us if we go over that by accident (which is not improbable).
  • Oh, and TEXT columns also can’t be primary keys. You have to switch to varchar for that. This is a minor (but seemingly gratuitous) incompatibility.
  • I defy anybody to make sense of MySQL’s handling of date & time data types.
    • Firstly, you have to choose between datetime and timestamp, which appear to be different variations of the same thing. Except that there’s no real choice, because you don’t appear to be able to set a default value on a datetime column. So you have to pick timestamp.
    • And then you have to understand “TIMESTAMP Properties as of MySQL 4.1”. This is incredibly obtuse, but manages to just about make clear the utterly arbitrary limitation that only one timestamp column per table can default to the current timestamp. That’s just wrong. No, really.
  • DROP USER doesn’t drop a user unless they have no privileges. What’s the point in that? Thankfully, this seems to have been fixed in later versions.
  • One minor, but exceedingly annoying faux pas: ^W in the mysql command line doesn’t delete the previous word. It deletes the entire line. Strictly contrary to every other user of readline.

I’m using MySQL 5.0.16, which I thought had grown up beyond all this sort of thing. Oh well. None of this stuff is an impenetrable problem, but it’s also certainly not endearing MySQL to me for future projects.

Update: Just for amusement, witness TeeJay having even more pain going in the opposite direction.


Typo Times Fix

After a bit of searching, I’ve found the cause of the odd times on this weblog. It’s a problem with ActiveRecord (part of Rails), and in particular PostgreSQLAdapter.

Basically, it computes a default value once, inside Ruby, and then caches it. Which is a bit odd, to say the least.

There are a number of open tickets related to this: 2873, 2877, 2257 and probably more. But looking at them, it seems that the problem will be fixed in Rails 1.1, which should be released “soon”.

Meanwhile, I’ll patch this particular instance of Typo to update the time on create. This is what I’ve used, and it seems to be working ok.

Index: app/models/article.rb
--- app/models/article.rb       (revision 727)
+++ app/models/article.rb       (working copy)
@@ -94,6 +94,9 @@
     self.extended_html = HtmlEngine.transform(extended, self.text_filter)

+  # Work around PostgreSQL adaptor default-value caching
+  before_create { |art| art.created_at = Time.now }
   def self.time_delta(year, month = nil, day = nil)
     from = Time.mktime(year, month || 1, day || 1)

Index: app/models/comment.rb
--- app/models/comment.rb       (revision 727)
+++ app/models/comment.rb       (working copy)
@@ -29,4 +29,7 @@
     self.body_html = HtmlEngine.transform(body, config["comment_text_filter"], [:filter_html])

+  # Work around PostgreSQL adaptor default-value caching
+  before_create { |art| art.created_at = Time.now }


Rails Migrations

Recently, I’ve been playing around with Rails and one of the things that has really impressed me is the support for migrations. If you haven’t come across them before, it’s a well-defined way of doing agile development with your databases. Your database contains a table schema_info with a column version. And then you define an ordered series of scripts which each contain up and down methods to migrate between versions.

If that sounds complicated, check out the example from the documentation:

  class AddSsl < ActiveRecord::Migration
    def self.up
      add_column :accounts, :ssl_enabled, :boolean, :default => 1
    end

    def self.down
      remove_column :accounts, :ssl_enabled
    end
  end
You’ll notice that we’re not doing any SQL here. That means that as a side effect of falling into line with Rails, you get a level of database portability for free. Marvellous!

All this was prompted by looking at the recipe on migrations in the new Rails Recipes book.

This is not to say that migrations are a panacea, however. As always, by trading SQL for a domain language, you lose access to some features. Foreign key relationships aren’t easy to express in this manner, for instance. But that doesn’t mean they can’t be used—for an example see robby’s post on Rails Migrations and PostgreSQL Constraints. It just ties you to a particular platform. Which is probably no big deal for most people, as they’re likely only using one database anyway.
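Robby’s trick boils down to dropping into raw SQL with execute inside the migration. Here’s a sketch of that pattern with invented table, column, and constraint names; since this snippet isn’t running inside a real Rails app, a tiny stand-in class records the SQL instead of executing it against a database.

```ruby
# A stand-in for ActiveRecord::Migration, so the sketch runs without Rails:
# it records the SQL that execute is given, rather than running it.
class FakeMigration
  def self.statements
    @statements ||= []
  end

  def self.execute(sql)
    statements << sql.strip
  end
end

# The migration itself: raw SQL (via execute) for the bits the migration
# domain language can't express, such as a foreign key constraint.
class LinkCommentsToArticles < FakeMigration
  def self.up
    execute <<-SQL
      ALTER TABLE comments
        ADD CONSTRAINT fk_comments_article
        FOREIGN KEY (article_id) REFERENCES articles (id)
    SQL
  end

  def self.down
    execute "ALTER TABLE comments DROP CONSTRAINT fk_comments_article"
  end
end

LinkCommentsToArticles.up
puts LinkCommentsToArticles.statements.first
```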

So far, I’ve had two major trip-ups with migrations.

  1. The SQLite adaptor doesn’t appear to support some things properly. I think it managed to nuke the contents of a column I was playing with. But I need to look further at this.
  2. Purely my own fault. I had set up a load of migrations in my development environment and I forgot to transfer them to the test environment. I got very confused for a while until I remembered to do that… Again, it’s my own fault because I was running the individual test files by hand, instead of using rake.

But still, compared to everything else I’ve seen, migrations in Rails are superb. Particularly after going through similar pains at work in the last week! Rails once again hits the sweet spot, providing just enough framework to get things done easily, without getting in the way too much.

P.S. If you are playing with migrations, I recommend installing Jamis Buck’s verbose_migrations plugin. It gives you a little more feedback on what’s going on. Normally, I like the Unix way of “no feedback until it goes wrong”, but whilst I’m still learning to trust migrations, this plugin is helpful.


Server Move

Well, after a couple of very long and frustrating evenings, I’ve finally managed to replicate the setup of my old, decrepit server onto a shiny new (and hopefully more reliable) dell box. I thought that this would be easy, as I was under the impression that I had everything under version control. But it’s only when you start on an endeavour like this that you realise just how much tweaking you have done to a box. Some particular highlights of this move:

  • I had been installing my squid.conf to the wrong place for about 2 years.
  • I discovered I’d been running clamscan (command line) instead of calling clamd (daemon) for a year. Slow mail ‘r’ us.
  • I discovered that my PostgreSQL backups had not been running for over a year.
  • I noticed that my trac installation had been wikispammed a few days ago (I didn’t notice when it happened). Trac needs a “revert to previous version” button.
  • Rails has caused me the most grief, surprisingly. Getting Typo up and running has been a pain. Firstly, because I didn’t have FastCGI installed, but for some reason I didn’t get an error. When I did install it, all I got was “FastCGI: incomplete headers (35 bytes) received from server”. I have now learned that this means “something went wrong, but I’m not going to tell you what it is”. In my case, it was needing to install the ruby-postgres drivers. But Rails isn’t kind enough to tell you that it can’t load the db drivers you’ve requested; instead it just silently fails. Which is a bit of a nuisance. I’ll have to check the trunk to see if it’s been fixed…

Anyway, now I can finally get on with the other more important tasks that I have to do. I gave up being a sysadmin 5 years ago for a damned good reason.


SQL Domains

I’ve not come across SQL domains before. But this blog post, “Or you could use a domain,” nicely shows how you can use them in a database to encapsulate a data type. It seems a bit like using typedef in C: you can’t create genuinely new types, but you can create similar ones based on pre-existing ones. Although domains allow you to have much more interesting constraints, judging by the documentation for CREATE DOMAIN.
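For example (this domain and table are my own made-up illustration, modelled on the examples in the PostgreSQL documentation):

```sql
-- A domain is a base type plus constraints, reusable across tables:
CREATE DOMAIN us_postal_code AS text
    CHECK (VALUE ~ '^[0-9]{5}$');

CREATE TABLE addresses (
    street text NOT NULL,
    zip    us_postal_code NOT NULL   -- the CHECK comes along for free
);
```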


Bloody Software

Over Christmas, my server has mostly been turned off. This is because it’s in one of the guest bedrooms and my mother who was staying in there didn’t appreciate the noise.

On the day after Boxing Day, we travelled up to the in-laws’ for a few days. And so I switched the server on and ran out of the door. Unfortunately, PostgreSQL failed to start up correctly (a recent change to the FreeBSD ports system, to which PostgreSQL had not been adjusted). Thanks to the marvels of PHP, this blog just displayed a database connection error, including the database username and password in the clear. How screwed is that?

What’s worse is that I kept getting “connection refused” when attempting to connect from the in-laws’ broadband connection. I have no idea why, but I suspect that some filtering is occurring in their router or at BT Internet. Grrr. So there was no way to fix it until I got back, just now.

So, the sooner I can get Typo up and running instead, the better. It won’t work without a database, but I very much doubt that it will default to spamming my username and password to the world… Good job I don’t use that one elsewhere.


Blogging Software

It started innocently enough, but as usual turned into yak shaving rather quickly. I wanted to try out some graphical editors for posting to my blog. Sounds simple enough. Except that a while back, I chose wordpress-pg. It’s a version of wordpress that talks to PostgreSQL instead. Except that for some reason, my copy doesn’t have the xmlrpc.php file needed. So heading over to the wordpress-pg site, I am greeted by this lovely message:

WARNING! This version is probably subject to flaws in xmlrpc handling. Delete xmlrpc.php if present in your install. This flaw can allows outside attackers to execute arbitrary commands as the user your web server runs as.

Oh dear. So I won’t be doing that then. But part of the problem is that it’s already fixed in wordpress 1.5 (MySQL edition). Sadly, wordpress-pg is a very inactive project. I know, it’s open source, fix it yourself. But right now, I more or less just want something to work. And I don’t want to delve into scary php if I can help it (I know, that’s rich coming from a Perl programmer). So what are the choices?

I could look at getting MySQL installed and moving over to wordpress proper. Well, I’ve finally gotten around to doing some MySQL at work and I now understand it a little bit, even if I don’t like it that much. But I still have to get the actual database ported from PostgreSQL to MySQL somehow, which seems jolly arduous.

The alternative is to take a look at Typo, a Rails powered blogging engine. The selling points are that it appears to be actively maintained, it’s written in Rails (which I wish to play with) and also that it has a wordpress import script. So I’ve just spent an hour attempting to convert my wordpress-pg install over to typo. And I have to say that it works pretty well. Here’s what I found.

Firstly, I followed the basic install to get typo up and running. I saw Piers complaining about it, but it’s reasonable for me. But I’ve used rails before, so I’m probably not the target market. Anyway, I got to the admin screens and set up a test account pretty quickly. I then ran the import script giving it my old wordpress database name. Needless to say, it didn’t work first time. I had to make these changes to the script:

  1. Get rid of all backquotes, that’s some weird MySQLism.
  2. I had to change all entry['ID'] to entry['id']. PostgreSQL downcases things.
  3. I had to rework the trackback support slightly. Because wordpress-pg is based on wordpress 1.2, trackbacks are stored as a comment that starts with <trackback />. So I had to change comment_type = 'trackback' into comment_content LIKE '<trackback />%'
  4. The import script runs in the dev environment by default, but I had been testing in the production environment. This meant I had no users set up in my dev environment. Consequently, I had to patch up the relationships between the users and the articles afterwards (update articles set user_id=1;). Actually, looking at the script, it doesn’t seem to make those links anyway…

The biggest problem was actually getting the script to access the original data. The script assumes that you can do the MySQLish thing of accessing another database by specifying dbname.table. This doesn’t work in PostgreSQL. But, you can take advantage of the fact that PostgreSQL does support schemas, so instead, you can create a schema called wordpress and load a dump of the main database into there. I had to make a few edits to the dumpfile before loading it in though.

  1. Get rid of the stuff about plpgsql at the top. I don’t use it.
  2. Add CREATE SCHEMA wordpress and SET search_path = wordpress, public, pg_catalog.
  3. Comment out all the ALTER foo SET OWNER TO wordpress lines. You have to import it as the typo user and don’t want that changing behind your back.

After the import you can just run DROP SCHEMA wordpress CASCADE to clean up.
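Pulled together, the whole dance looks something like this in psql (the dump filename is hypothetical; use whatever you called your edited wordpress-pg dump):

```sql
CREATE SCHEMA wordpress;
SET search_path = wordpress, public, pg_catalog;
\i wordpress-dump.sql   -- load the edited dump, connected as the typo user
-- ... now run the typo import script ...
DROP SCHEMA wordpress CASCADE;
```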

After that, I now have a working blog with all my old content. Hurrah! Not simple, but not too difficult either.

Now before I get this main blog converted over, I just need to work out how to run FastCGI scripts from Apache, something I haven’t done before. And then I’ll be away, and I can uninstall wordpress-pg.