Monday, December 15, 2008

drb.rb:852:in `initialize': getaddrinfo: nodename nor servname provided, or not known (SocketError) (aka DRb failure on OS X)

Update (2009/01/01):
Problem TT moved to redmine.
Update (2009/05/13): TT resolved.

I am currently busy building a priority queue server in Ruby and I have chosen to use DRb as my communications platform.

While experimenting with the simple examples from the Net (see here and here) I was consistently getting the same error from inside drb.rb (/opt/local/lib/ruby/1.8/drb/drb.rb:852):

/opt/local/lib/ruby/1.8/drb/drb.rb:852:in `initialize': getaddrinfo: nodename nor servname provided, or not known (SocketError)
from /opt/local/lib/ruby/1.8/drb/drb.rb:852:in `open'
from /opt/local/lib/ruby/1.8/drb/drb.rb:852:in `open_server_inaddr_any'
from /opt/local/lib/ruby/1.8/drb/drb.rb:864:in `open_server'
from /opt/local/lib/ruby/1.8/drb/drb.rb:759:in `open_server'
from /opt/local/lib/ruby/1.8/drb/drb.rb:757:in `each'
from /opt/local/lib/ruby/1.8/drb/drb.rb:757:in `open_server'
from /opt/local/lib/ruby/1.8/drb/drb.rb:1346:in `initialize'
from /opt/local/lib/ruby/1.8/drb/drb.rb:1634:in `new'
from /opt/local/lib/ruby/1.8/drb/drb.rb:1634:in `start_service'
from ./queue-provider.rb:32

What's up?
After poking drb.rb's self.open_server_inaddr_any(host, port) with a stick a few times, two issues came to light:

  1. Multiple network address families are not catered for properly in the code.

  2. Calls where port == 0 fail under OS X but not under Linux.

Multiple Address Families
The code in question looks like this:

def self.open_server_inaddr_any(host, port)
  infos = Socket::getaddrinfo(host, nil,
                              Socket::AF_UNSPEC,
                              Socket::SOCK_STREAM, 0,
                              Socket::AI_PASSIVE)
  family = infos.collect { |af, *_| af }.uniq
  case family
  when ['AF_INET']
    return TCPServer.open('0.0.0.0', port)
  when ['AF_INET6']
    return TCPServer.open('::', port)
  else
    return TCPServer.open(port)
  end
end

From that we can see that we only seem to expect one network address family which is a little naive. Socket::getaddrinfo() on my MacBook Pro has the following to say (where host == 'localhost'):

$ irb
irb(main):006:0> require "socket"
=> true
irb(main):007:0> host = 'localhost'
=> "localhost"
irb(main):008:0> Socket::getaddrinfo(host, nil,
irb(main):009:1* Socket::AF_UNSPEC,
irb(main):010:1* Socket::SOCK_STREAM, 0,
irb(main):011:1* Socket::AI_PASSIVE)
=> [["AF_INET6", 0, "localhost", "::1", 30, 1, 6], ["AF_INET6", 0, "localhost", "fe80::1%lo0", 30, 1, 6], ["AF_INET", 0, "localhost", "", 2, 1, 6]]
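Distilled into a standalone Ruby snippet (not drb.rb itself, just the shape of the problem), the mismatch is easy to reproduce:

```ruby
# Simulated getaddrinfo results on a dual-stack host (fields trimmed for brevity)
infos = [["AF_INET6", 0, "localhost", "::1"],
         ["AF_INET",  0, "localhost", ""]]

family = infos.collect { |af, *_| af }.uniq
# family is ["AF_INET6", "AF_INET"] -- two elements, so neither branch can match

result = case family
         when ['AF_INET']  then :ipv4
         when ['AF_INET6'] then :ipv6
         else :fall_through
         end
result # => :fall_through
```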

When you take this as your input you'll see that we don't end up matching either ['AF_INET'] or ['AF_INET6'] and we fall through to TCPServer.open(port), because the case block expects a match against an array with exactly one element.

OS X Weirdness
I have used DRb on both Linux and Windblowns in the past without a hitch so I was rather surprised to run into something like this which is a show stopper on OS X. I thought I'd see if I was having the same problems on Linux to have something to compare with:

$ irb
irb(main):001:0> require "socket"
=> true
irb(main):002:0> port = 0
=> 0
irb(main):003:0> TCPServer.open(port)
=> #<TCPServer:0x...>

Works a treat! Let's try that on OS X:

$ irb
irb(main):001:0> require "socket"
=> true
irb(main):002:0> port = 0
=> 0
irb(main):003:0> TCPServer.open(port)
SocketError: getaddrinfo: nodename nor servname provided, or not known
from (irb):3:in `initialize'
from (irb):3:in `open'
from (irb):3
from :0


DRb Quilt
The first issue is rather trivial to fix:

def self.open_server_inaddr_any(host, port)
  infos = Socket::getaddrinfo(host, nil,
                              Socket::AF_UNSPEC,
                              Socket::SOCK_STREAM, 0,
                              Socket::AI_PASSIVE)
  families = Hash[*infos.collect { |af, *_| af }.uniq.zip([]).flatten]
  return TCPServer.open('0.0.0.0', port) if families.has_key?('AF_INET')
  return TCPServer.open('::', port) if families.has_key?('AF_INET6')
  return TCPServer.open(port)
end

The code now assumes we're dealing with an array of one or more network address families, tries the IPv4 and IPv6 families first, and then falls through to TCPServer.open(port).

I have opened a TT on RubyForge for this that contains a patch from me to fix the first issue.

What is required to fix the second issue? Dunno just yet, I'll keep looking and see if anything interesting pops up in the TT.
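In the meantime, a workable dodge is to bind DRb to an explicit address, since (in the drb.rb quoted above) open_server_inaddr_any appears to be consulted only when the URI doesn't name a host. A minimal sketch; the Queueish class is a made-up stand-in, not code from this post:

```ruby
require 'drb/drb'

# Made-up stand-in for the real front object
class Queueish
  def ping
    :pong
  end
end

# Naming an explicit host in the URI bypasses open_server_inaddr_any;
# port 0 asks the OS for an ephemeral port and DRb.uri reports the real one.
DRb.start_service('druby://127.0.0.1:0', Queueish.new)

client = DRbObject.new_with_uri(DRb.uri)
client.ping # => :pong

DRb.stop_service
```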

Wednesday, November 26, 2008

Massaging Rails Models (with a happy ending)

How do you alter data in a model so that the data which is stored to and gathered from the database is first filtered/transformed?

Two ways come to mind:

  • Insert the required behavior into your model's before_save, before_create and after_initialize callbacks.
  • Manually modify your attribute accessors for the attributes in question to do the magic for you.

We'll use the following contrived Model as our example:

class Gogga < ActiveRecord::Base
end

CREATE TABLE `foo`.`goggas` (
  `id` int(11) NOT NULL auto_increment,
  `secret` varchar(255) default NULL,
  PRIMARY KEY (`id`)
);

A Gogga has an id and a secret field. Let's pretend we need to keep Gogga.secret encrypted (using the super secret ROT13 algorithm) in our db and we would like the fact that it is encrypted to be transparent to our rails app. We need to therefore handle en/decryption of the secret data transparently from the rest of the app within the model.

The before_save, before_create and after_initialize callbacks are well documented in the Callbacks API documentation.

The strategy behind using the callbacks is to simply insert the behavior we want at the relevant stage of the object's life cycle. Here's one way to accomplish this using the mentioned callbacks:

class Gogga < ActiveRecord::Base
  def before_save
    self.secret = rot13(self.secret)
  end

  def before_create
    self.secret = rot13(self.secret)
  end

  def after_initialize
    self.secret = rot13(self.secret)
  end

  def rot13(corpus)
    corpus.tr("A-Za-z", "N-ZA-Mn-za-m")
  end
end
If everything works as advertised our secret attribute should now be encoded when you call create or save on the model and decoded when you call new on the model. The major drawback of this strategy is that if you are manipulating a large list of Goggas you will be post/pre-processing each of those instances.

The Lazy Way
An alternative would be to override the default behavior of the model to auto-generate attribute accessors via method_missing in the mystical black guts of ActiveRecord::Base. There's some info on this in the API docs as well in the Overwriting default accessors section.

This would look something like this:

class Gogga < ActiveRecord::Base
  def secret
    rot13(read_attribute("secret"))
  end

  def secret=(value)
    write_attribute("secret", rot13(value))
  end

  def rot13(corpus)
    corpus.tr("A-Za-z", "N-ZA-Mn-za-m")
  end
end
This technique has the advantage that you never really mess with the internals of the model (as the view you have of it from outside is tinted by the accessor transformations) and of course work only gets done when you need to read/write the specific attribute.

Now, when you have one or two attributes that need to be protected, writing out two accessors for each is not the end of the world. However, when you have several, things become messy, tedious and downright boring.

Maybe we can look at some meta-magic to DRY things up a bit in another article.

Thursday, August 21, 2008

Oh ExtJS TreePanel Click, wherefore art thou?

ExtJS is a high quality JS UI library that really eases the cross-platform blues when it comes to designing online apps. Unfortunately I find the documentation can be a little obtuse from time to time.

Even the trusty Google librarian cannot answer some of the questions that pop up right off the bat and a fair amount of searching is required to get some info.

TreePanel Click
I have a TreePanel that I populate through lazy loading to save the amount of data I need to send off to the client in any one request. Unfortunately for the life of me I could not work out how to determine which node in the tree was clicked (so that I could fire off an Ajax request to populate another panel with data related to the clicked node).

Attaching Events
The first thing to do would obviously be to attach a click event listener somewhere so that we can pick up when we're being prodded. If you're not familiar with JavaScript, and ExtJS specifically, you'd be inclined to attach an event to each of the nodes.

This is wasteful in the browser context as you should be taking advantage of event bubbling. So we simply attach a listener to the whole tree component:
// Create initial root node
root = new Ext.tree.AsyncTreeNode({
    text: 'Invisible Root'

// Create the tree
tree = new Ext.tree.TreePanel({
    loader: new Ext.tree.TreeLoader({ /* ... */ }),
    root: root

// Expand invisible root node to trigger load of the first level of actual data

// Listen for mouse clicks
Ext.get('company-tree').on("click", function(){
    // Code to determine which node was clicked ...

Where did that click go?
Never having used the TreePanel control before I had no real inkling how I was going to reap the relevant id from the node that was clicked. I tried several approaches but nothing was bearing any fruit till I found this article on the ExtJS forums discussing something similar.

Armed with a new weapon I was able to refactor the listener:
// Listen for mouse clicks
Ext.get('company-tree').on("click", function(){
    node = tree.getSelectionModel().getSelectedNode();
    console.log("You clicked on",;

That'll do.

Tuesday, June 17, 2008

Installing PECL/PEAR PHP modules on a RHEL box

The default RHEL 5.2 installation does not come with xdebug as part of any of the php RPMs. A quick look around the Net also provided no real RPM candidates that I could use on this system so I had to fall back to using the package management tools (pecl and pear) provided by php.

PECL's Odyssey
RHEL is a general PITA for me already so I just sigh and get on with it:
$ sudo pecl install xdebug
downloading xdebug-2.0.3.tgz ...
Starting to download xdebug-2.0.3.tgz (286,325 bytes)
...........................................................done: 286,325 bytes
66 source files, building
running: phpize
Configuring for:
PHP Api Version: 20041225
Zend Module Api No: 20060613
Zend Extension Api No: 220060519
/usr/bin/phpize: /tmp/pear/download/xdebug-2.0.3/build/shtool: /bin/sh: bad interpreter: Permission denied
Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF
environment variable is set correctly and then rerun this script.

ERROR: `phpize' failed
Yep, loving it already!

Why on earth am I not able to invoke /bin/sh (as can be seen by the '/bin/sh: bad interpreter: Permission denied' error above)? Let's see if root can actually run the shell interpreter:
$ sudo /bin/sh
sh-3.2# exit
OK, everything looks good. Why is it breaking when we're trying to run the interpreter from /tmp/pear/download/xdebug-2.0.3/build/shtool?

Back to basics
Perhaps this has something to do with where we're trying to run it from: the user we're doing the installation as (root) seems to be capable of running the interpreter, just not from the shtool script for some reason.
$ ls -ld /tmp/
drwxrwxrwt 17 root root 4096 Jun 18 07:41 /tmp/
Obviously _not_ a permissions issue.
$ grep tmp /etc/fstab
/dev/sda2 /tmp ext3 defaults,nosuid,nodev,noexec 1 2
Ah, there you are! /tmp is mounted with a 'noexec' flag so that's what's causing the execution to fail when we try to install xdebug via pecl. No problem, I'll just set pecl to use /var/tmp instead ... oh, wait, on RHEL systems /var/tmp is just a symlink to /tmp.


Hand me half a brick
Time to work around the issue. Let's go find those directories pear expects to be somehow related to /tmp or /var/tmp:
$ pear config-show | grep tmp
PEAR Installer download download_dir /tmp/pear/download
PEAR Installer temp directory temp_dir /var/tmp
I updated these to temporarily point elsewhere:
$ pecl config-show | grep tmp
PEAR Installer download download_dir /root/tmp/pear/download
PEAR Installer temp directory temp_dir /root/tmp
$ sudo mkdir -p /root/tmp/pear/download
Second verse, same as the first
Let's give that installation another whirl and see where we are:
$ sudo pecl install xdebug
downloading xdebug-2.0.3.tgz ...
Starting to download xdebug-2.0.3.tgz (286,325 bytes)
.....................................done: 286,325 bytes
66 source files, building
running: phpize
Configuring for:
PHP Api Version: 20041225
Zend Module Api No: 20060613
Zend Extension Api No: 220060519
building in /var/tmp/pear-build-root/xdebug-2.0.3
running: /root/tmp/pear/download/xdebug-2.0.3/configure
checking for egrep... grep -E
checking for a sed that does not truncate output... /bin/sed
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.
ERROR: `/root/tmp/pear/download/xdebug-2.0.3/configure' failed
A quick look at /root/tmp/pear/download/xdebug-2.0.3/configure shows no obvious reasons why we're failing so out comes the cluebat :
$ sudo strace pecl install xdebug 2>&1
access("/etc/", R_OK) = -1 ENOENT (No such file or directory)
execve("/usr/bin/pecl", ["pecl", "install", "xdebug"], [/* 16 vars */]) = 0
brk(0) = 0x8c3d000

[... truncated ...]

flock(3, LOCK_UN) = 0
close(3) = 0
write(1, "ERROR: `/root/tmp/pear/download/"..., 63ERROR: `/root/tmp/pear/download/xdebug-2.0.3/configure' failed
) = 63
stat64("/var/tmp/pear-build-root/xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/var/tmp/pear-build-root/xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/var/tmp/pear-build-root/xdebug-2.0.3", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 3
fstat64(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
time(NULL) = 1213740767
lstat64("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/var/tmp", {st_mode=S_IFLNK|0777, st_size=4, ...}) = 0
readlink("/var/tmp", "/tmp", 4096) = 4
lstat64("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0
lstat64("/tmp/pear-build-root", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/tmp/pear-build-root/xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents(3, /* 4 entries */, 4096) = 80
getdents(3, /* 0 entries */, 4096) = 0
close(3) = 0
stat64("/tmp/pear-build-root/xdebug-2.0.3/config.log", {st_mode=S_IFREG|0644, st_size=4948, ...}) = 0
stat64("/tmp/pear-build-root/xdebug-2.0.3/config.nice", {st_mode=S_IFREG|0755, st_size=93, ...}) = 0
unlink("/tmp/pear-build-root/xdebug-2.0.3/config.log") = 0
unlink("/tmp/pear-build-root/xdebug-2.0.3/config.nice") = 0
rmdir("/tmp/pear-build-root/xdebug-2.0.3") = 0
stat64("/var/tmp/pear-build-root/install-xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/var/tmp/pear-build-root/install-xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/var/tmp/pear-build-root/install-xdebug-2.0.3", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 3
fstat64(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
time(NULL) = 1213740767
lstat64("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/var/tmp", {st_mode=S_IFLNK|0777, st_size=4, ...}) = 0
readlink("/var/tmp", "/tmp", 4096) = 4
lstat64("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0
lstat64("/tmp/pear-build-root", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat64("/tmp/pear-build-root/install-xdebug-2.0.3", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents(3, /* 2 entries */, 4096) = 32
getdents(3, /* 0 entries */, 4096) = 0
close(3) = 0
rmdir("/tmp/pear-build-root/install-xdebug-2.0.3") = 0
stat64("/tmp/glibctestUP8dWe", 0xbfa97ca0) = -1 ENOENT (No such file or directory)
unlink("/tmp/glibctestUP8dWe") = -1 ENOENT (No such file or directory)
umask(022) = 022
close(2) = 0
close(1) = 0
close(0) = 0
munmap(0xb7fd6000, 4096) = 0
setitimer(ITIMER_PROF, {it_interval={0, 0}, it_value={0, 0}}, NULL) = 0
brk(0xa3ce000) = 0xa3ce000
setitimer(ITIMER_PROF, {it_interval={0, 0}, it_value={0, 0}}, NULL) = 0
brk(0xa18a000) = 0xa18a000
After all that pecl (aka pear) still tried to muck with /tmp by trying to run things in /var/tmp/pear-build-root/xdebug-2.0.3.

Fine. Be that way.
$ sudo rm -fr /tmp/pear /tmp/pear-build-root
$ sudo ln -s /root/tmp/pear-build-root /tmp/pear-build-root
One more time please Sam:
$ sudo pecl install xdebug
downloading xdebug-2.0.3.tgz ...
Starting to download xdebug-2.0.3.tgz (286,325 bytes)
.....................................................done: 286,325 bytes
66 source files, building
running: phpize
Configuring for:
PHP Api Version: 20041225
Zend Module Api No: 20060613
Zend Extension Api No: 220060519
building in /var/tmp/pear-build-root/xdebug-2.0.3
running: /root/tmp/pear/download/xdebug-2.0.3/configure
checking for egrep... grep -E
checking for a sed that does not truncate output... /bin/sed
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes

[... truncated ...]

Build complete.
Don't forget to run 'make test'.

running: make INSTALL_ROOT="/var/tmp/pear-build-root/install-xdebug-2.0.3" install
Installing shared extensions: /var/tmp/pear-build-root/install-xdebug-2.0.3/usr/lib/php/modules/
running: find "/var/tmp/pear-build-root/install-xdebug-2.0.3" -ls
36962402 4 drwxr-xr-x 3 root root 4096 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3
36962465 4 drwxr-xr-x 3 root root 4096 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3/usr
36962466 4 drwxr-xr-x 3 root root 4096 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3/usr/lib
36962467 4 drwxr-xr-x 3 root root 4096 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3/usr/lib/php
36962468 4 drwxr-xr-x 2 root root 4096 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3/usr/lib/php/modules
36962464 608 -rwxr-xr-x 1 root root 618210 Jun 18 08:20 /var/tmp/pear-build-root/install-xdebug-2.0.3/usr/lib/php/modules/

Build process completed successfully
Installing '/usr/lib/php/modules/'
install ok: channel://
configuration option "php_ini" is not set to php.ini location
You should add "" to php.ini
Smell that? It's sweet success!

No more crawling, it's time to walk!
Let's finish the install off for brevity's sake:
$ echo 'zend_extension="/usr/lib/php/modules/"' | sudo tee /etc/php.d/xdebug.ini
$ php -i | grep xdebug
xdebug support => enabled
xdebug.auto_trace => Off => Off
xdebug.collect_includes => On => On
xdebug.collect_params => 0 => 0
xdebug.collect_return => Off => Off
xdebug.collect_vars => Off => Off
xdebug.default_enable => On => On
xdebug.dump.COOKIE => no value => no value
xdebug.dump.ENV => no value => no value
xdebug.dump.FILES => no value => no value
xdebug.dump.GET => no value => no value
xdebug.dump.POST => no value => no value
xdebug.dump.REQUEST => no value => no value
xdebug.dump.SERVER => no value => no value
xdebug.dump.SESSION => no value => no value
xdebug.dump_globals => On => On
xdebug.dump_once => On => On
xdebug.dump_undefined => Off => Off
xdebug.extended_info => On => On
xdebug.idekey => root => no value
xdebug.manual_url => =>
xdebug.max_nesting_level => 100 => 100
xdebug.profiler_aggregate => Off => Off
xdebug.profiler_append => Off => Off
xdebug.profiler_enable => Off => Off
xdebug.profiler_enable_trigger => Off => Off
xdebug.profiler_output_dir => /tmp => /tmp
xdebug.profiler_output_name => cachegrind.out.%p => cachegrind.out.%p
xdebug.remote_autostart => Off => Off
xdebug.remote_enable => Off => Off
xdebug.remote_handler => dbgp => dbgp
xdebug.remote_host => localhost => localhost
xdebug.remote_log => no value => no value
xdebug.remote_mode => req => req
xdebug.remote_port => 9000 => 9000
xdebug.show_exception_trace => Off => Off
xdebug.show_local_vars => Off => Off
xdebug.show_mem_delta => Off => Off
xdebug.trace_format => 0 => 0
xdebug.trace_options => 0 => 0
xdebug.trace_output_dir => /tmp => /tmp
xdebug.trace_output_name => trace.%c => trace.%c
xdebug.var_display_max_children => 128 => 128
xdebug.var_display_max_data => 512 => 512
xdebug.var_display_max_depth => 3 => 3
I can now delete those temporary directories I created in /root/tmp and get back to twitching in the corner, or ...

Easy alternative - just add remount
Instead of the hoops I jumped through to get the pecl bits to stop addressing /tmp I could simply have remounted the /tmp filesystem to allow execution:
$ sudo mount -o remount,exec /tmp
$ sudo pecl install xdebug
$ sudo mount -o remount,defaults,nosuid,nodev,noexec /tmp
This however would have compromised the security that was put in place to stop the possible malicious execution of bits (especially root kits) in /tmp (or /var/tmp).

You have the power (and the knowledge now) so wield it to your benefit.

Friday, May 30, 2008

warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory

There's nothing quite like being in a complete coding frenzy, communicating with your customers to get feedback on critical bugs, and your mail server going to SMTP heaven on you.

The Setup
My MTA (postfix) is set up to do secure SMTP-AUTH and TLS via Cyrus SASL library (a.k.a. saslauthd via the sasl2-bin package) on an Ubuntu box.

Postfix SASL support (RFC 4954, formerly RFC 2554) is used to authenticate remote SMTP clients to the MTA and the Postfix SMTP client to a remote SMTP server.

The Error
I originally set things up via the Postfix-SMTP-AUTH-TLS-Howto and everything was working fine until earlier today when I started seeing the following log entries when trying to send mail via the MTA:
May 30 03:03:36 pyxidis postfix/smtpd[2840]: connect from unknown[x.x.x.x]
May 30 03:03:37 pyxidis postfix/smtpd[2840]: setting up TLS connection from unknown[x.x.x.x]
May 30 03:03:40 pyxidis postfix/smtpd[2840]: Anonymous TLS connection established from unknown[x.x.x.x]: TLSv1 with cipher AES128-SHA (128/128 bits)
May 30 03:03:40 pyxidis postfix/smtpd[2840]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
May 30 03:03:40 pyxidis postfix/smtpd[2840]: warning: SASL authentication failure: Password verification failed
May 30 03:03:40 pyxidis postfix/smtpd[2840]: warning: unknown[x.x.x.x]: SASL PLAIN authentication failed: generic failure
May 30 03:03:46 pyxidis postfix/smtpd[2840]: lost connection after AUTH from unknown[x.x.x.x]
May 30 03:03:46 pyxidis postfix/smtpd[2840]: disconnect from unknown[x.x.x.x]
The Solution
I checked and the saslauthd process was happily running. Next up I had a peek in /var/spool/postfix/var/run/saslauthd/ (which I had previously created as per the HOWTO above) but there were no *mux* files to be seen as there should have been.

It then dawned on me that postfix runs in a chrooted jail and that saslauthd for some reason had stopped writing the required info to the chrooted jail where postfix was running. A quick look at the saslauthd rc script and its defaults file showed that it no longer had the required config to do this properly.

Why? Dunno. I'll have to go do some snooping a little later.

For now though the fix was as simple as modifying the OPTIONS variable in the /etc/default/saslauthd config file to be something like this:
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
Restart saslauthd and things start appearing where they should and mail is back in business.

Cacti Segfaults

I recently did a cacti installation for a customer and ended up skipping step #3 from the Install and Configure Cacti guide.

Step in question:
shell> mysql cacti < cacti.sql
Why would I have skipped this very crucial step you might say?

Well, the installation was done in two sessions with enough time having elapsed between the initial and final sessions that I had gotten hazy on what was and wasn't done. I had created the database but just never took the next step.

After completing the rest of the configuration I fired Cacti up via my browser and php promptly did a segfault and lay there on the ground haemorrhaging. From my recent experience php has a propensity to do this in two specific cases:
  • Something went awry with a database connection or using a database resource
  • You're pushing the php boundaries with recursive regexps in a preg_match*() function
Once I sat down and went through each of the steps required to set the beastie up I realized I simply needed to import the db schema and initial data to get things going.


Saturday, May 24, 2008

Tales from the (PHP and Perl) Crypt - AES Encryption in MySQL

I was looking for a way to share encrypted information between two systems where a table in MySQL was the integration point.

The one system is based on php while the other component is a perl daemon.

Let's get cryptic
My first stab at this was a perl based solution using the Crypt::CBC and Crypt::Blowfish libraries plus a shared secret/key. This meant I had to develop a perl script which I called from php to do the encryption which is a rather inelegant solution.

At first I could not find the right libs in php to get this done but later stumbled upon the Mcrypt suite of php functions and the Mcrypt perl bindings, which allow you to do compatible encryption between the two different subsystems.

Unfortunately this means you have double the amount of hassle when it comes to updates and ensuring things Just Work™.

Move it back to the source
Some more checking brought me to the AES encryption functions that are built into MySQL. They provide the best cryptographic algorithms MySQL currently has to offer and are pretty respectable from an academic encryption perspective.

This means en/decryption is dealt with at one integration point across all languages involved which is much more elegant.

Tales from the Crypt
The MySQL AES encryption functions allow you to en/decrypt data quite easily. To encrypt a string you simply issue the following, assuming your shared secret is lesser-spotted-mountain-squid:
mysql> INSERT INTO test_table (test_column) VALUES(AES_ENCRYPT('this is a super-secret message', 'lesser-spotted-mountain-squid'));
Query OK, 1 row affected (0.09 sec)

mysql> SELECT * FROM test_table;
| test_column |
| Aÿ„1
ý#ôärO™é=:Žï ¼Ñ†kWA |
1 row in set (0.00 sec)
Et voila!

One thing you need to keep in mind is that the field you want to store your encrypted data in must be a MySQL BLOB data type.

Sucking our super secret string back out into a usable form is as simple as:
mysql> SELECT AES_DECRYPT(test_column, 'lesser-spotted-mountain-squid') AS top_secret FROM test_table;
| top_secret |
| this is a super-secret message |
1 row in set (0.00 sec)
The security lesson
This is rather obvious but your security is only as strong as the weakest link in the chain. In this specific case I did not want to have clear text data in the db and achieved that admirably.

Because my secret is in clear text in two different systems I am rather exposed if those systems are not as secure as they could be. Lucky for me they are pretty much locked away from daylight so I'm not too concerned.

Tuesday, May 6, 2008

Ubuntu 8.04 and PAM SMB Password

For those of you who have taken the plunge to Ubuntu v8.04 (Hardy Heron) you may have noticed that your auth.log is being filled with the following:
May  4 03:17:01 example CRON[10796]: PAM adding faulty module: /lib/security/
May 4 04:17:01 example CRON[10799]: PAM unable to dlopen(/lib/security/
May 4 04:17:01 example CRON[10799]: PAM [error: /lib/security/ cannot open shared object file: No such file or directory]
What's up?
For some reason the Ubuntu gods have decided by default to include PAM configuration for the PAM SMB password module without actually installing the PAM SMB password module.

Hence the complaints in your logs.

Make it go away!
Sure, simply install the libpam-smbpass package or edit two config files on your system like this:

$ sudo perl -p -i -e 's/(password\s+optional\s+pam_smbpass\.so\s+nullok use_authtok use_first_pass)/#$1/' /etc/pam.d/common-password
$ sudo perl -p -i -e 's/(auth\s+optional\s+pam_smbpass\.so\s+migrate)/#$1/' /etc/pam.d/common-auth

You can find some more info on this boog here.

Saturday, April 19, 2008

Baking CakePHP with nginx

I've switched all my web server related infrastructure to nginx, a high performance HTTP server (amongst other things) which blows apache out of the water (IMNSHO).

If you're a CakePHP user you'll know that it has some special rewriting requirements which can be found in $ROOT/.htaccess:

$ cat .htaccess

RewriteEngine on
RewriteRule ^$ app/webroot/ [L]
RewriteRule (.*) app/webroot/$1 [L]

Unfortunately this .htaccess file is an apache mechanism to achieve the URL rewriting that is required to get CakePHP to play nice. The apache rewrite rules do not translate directly to something nginx can use though so some work is required to get things going.

Being lazy (and believing that no problem I discover is unique to me) I had a look around and found this article by Chris Hartjes which needed some mods for it to work for my setup:

# CakePHP rewrite rules
location / {
    root /opt/local/html/live_site;
    index index.php;

    # Serve static page immediately
    if (-f $request_filename) {
        break;
    }

    # Rewrite all other URLs
    if (!-f $request_filename) {
        rewrite ^/(.+)$ /index.php?url=$1 last;
    }
}

Here's my complete nginx.conf to give you an idea of how everything fits together:

user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/;

events {
    worker_connections 1024;
}

http {
    include etc/nginx/mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;

    sendfile on;
    keepalive_timeout 20;
    tcp_nodelay on;

    server {
        listen 80;
        server_name localhost;
        rewrite_log on;

        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root share/nginx/html;
        }

        # Serve static content directly with some caching goodness
        location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
            root /opt/local/html/live_site/app/webroot;
            access_log off;
            expires 1d;
        }

        # CakePHP rewrite rules
        location / {
            root /opt/local/html/live_site;
            index index.php;

            # Serve static pages immediately
            if (-f $request_filename) {
                break;
            }

            # Rewrite all other URLs
            if (!-f $request_filename) {
                rewrite ^/(.+)$ /index.php?url=$1 last;
            }
        }

        # Pass PHP scripts to FastCGI server listening on
        location ~ \.php$ {
            root /opt/local/html/live_site;
            fastcgi_pass;
            fastcgi_index index.php;
            include /opt/local/etc/nginx/fastcgi_params;
        }
    }
}

Friday, April 18, 2008

PHP5 SimpleXML and CDATA

I am in the process of porting a rather large PHP4 application to PHP5 (just in time for PHP6, yes, yes, I know) for one of my customers. Most of the application is pure imperative programming so the switch has been rather painless.

Unfortunately (for me) they have made rather judicious use of the PHP4 domxml extension libraries which no longer exist in PHP5 (where they have been replaced with the dom extension).

Moving from domxml to dom was straight forward (looping over tags, attributes and text inside tags are addressed differently) so I chose to simply re-write the code to use the new dom extension instead of opting for a translation library like the one provided by Alexandre Alapetite.

One exception to this was the use of <![CDATA[...]]> blocks which SimpleXML simply seemed to discard when creating a new object.

A quick look around Google (here, here and here) and I found what needed to be done to address SimpleXML's ignorant behaviour.

The SimpleXML constructor allows you to pass in extra libxml2 parameters which let you get further functionality out of the library. The one I was interested in was of course LIBXML_NOCDATA:

Merge CDATA as text nodes

So, simply changing my constructor from:
$xml = new SimpleXMLElement($text)
to:
$xml = new SimpleXMLElement($text, LIBXML_NOCDATA)
was all that was required for me to gain access to those <![CDATA[...]]> structures.

Tuesday, April 1, 2008

Aspell, PSPELL, PHP and OS X

Until quite recently there was no way to use Aspell (via the PSPELL PHP libs) on an OS X host that was using the MacPorts system for package management.

This was simply because of the lack of a pspell variant for the php4/php5 packages.

Port of call
The ports system has the notion of a variant for packages that are conditional modifications of port installation behaviour.

There are two types of variants: user-selected variants and platform variants.

User-selected variants are options selected by a user when a port is installed while platform variants are selected automatically by MacPorts' base according to the OS or hardware platform (Darwin, FreeBSD, Linux, i386, PPC, etc.).

More stuff and less fluff
There are several ways of working around this. The first would be to log a ticket at the MacPorts trac with a request to add the variant.

Depending on the package maintainer's load you may get a response pretty quickly. I've generally gotten something back within days of logging the ticket.

In the meantime your universe cannot come to a halt waiting for someone else to add the next greatest thing as a variant to your favourite package. So here are some manual steps to get things up and running:

  • Install Aspell
  • Recompile and install PHP with the required PSPELL support
  • Check php to ensure the new PSPELL libs are active
  • Do a simple test to see if everything is working as it should

Install Aspell
I'll assume you're using MacPorts for your package management on your OS X host.

Grab the aspell application, libs and whichever dictionaries catch your fancy:

$ sudo port install aspell aspell-dict-en

Be sure to install at least one dictionary or your spell checking days will be rather deflated.

Recompile and install PHP with the required PSPELL support
To manually add a compilation flag you need to edit the Portfile that comes with your installed PHP version. Mine is located at:


Edit the Portfile and add "--with-pspell=${prefix}" to configure.args. You can then re-install PHP and it should use the modified Portfile when compiling.
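A sketch of what the change might look like (the exact layout of the configure arguments differs between Portfile versions; configure.args-append is the standard MacPorts way to add a flag without disturbing the existing list):

```
# Portfile excerpt -- append the pspell flag to the configure arguments
configure.args-append   --with-pspell=${prefix}
```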

Check php to ensure the new PSPELL libs are active
Use the following script from the command line to determine if your PHP was re-installed with the required PSPELL bits enabled:

$ php -r 'phpinfo();' | grep PSpell
PSpell Support => enabled

Do a simple test to see if everything is working as it should
You should be able to just use the example from the PHP documentation page to ensure everything is fine:

$ cat /tmp/t_pspell.php

$ php -f /tmp/t_pspell.php
Sorry, wrong spelling
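The contents of /tmp/t_pspell.php didn't survive above; reconstructed from the pspell example on the PHP documentation page, it would look roughly like this (a sketch; it needs the pspell extension and an installed English dictionary):

```php
<?php
// Guard so the script fails gracefully when pspell isn't compiled in.
if (!extension_loaded('pspell')) {
    die("pspell extension not available\n");
}

$pspell_link = pspell_new("en");

// "testt" is deliberately misspelled, so this prints the failure branch.
if (pspell_check($pspell_link, "testt")) {
    echo "Right spelling\n";
} else {
    echo "Sorry, wrong spelling\n";
}
```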

The packaging gods exist!
A few days after creating this ticket on the MacPorts trac I got a response from the package maintainer informing me that the variant had been added to all the relevant PHP packages.


If your ports distribution files are up-to-date you should now be able to do the following to see which variants are available for your PHP version of choice:

$ port info php5
php5 5.2.5, Revision 2, www/php5 (Variants: universal, darwin_6, darwin_7, macosx, apache, apache2, fastcgi, gmp, imap, pspell, tidy, mssql, snmp, macports_snmp, mysql3, mysql4, mysql5, oracle, postgresql, sqlite, ipc, pcntl, pear, readline, sockets)

PHP is a widely-used general-purpose scripting language that is especially suited for developing web sites, but can also be used for command-line scripting.

Library Dependencies: libxml2, libxslt, openssl, zlib, bzip2, libiconv, expat, gettext, tiff, mhash, libmcrypt, curl, pcre, jpeg, libpng, freetype
Platforms: darwin freebsd

Now that pspell is listed as a variant you can install PHP with it enabled by doing the following (after removing the previous PHP installation):

$ sudo port install php5 +pspell

Friday, February 22, 2008

Pre-queue content-filter connection overload

In the last while I've been seeing the following error pop up in my logwatch report for postfix:

*Warning: Pre-queue content-filter connection overload

At first I was concerned that the pre-queue content-filtering subsystem of postfix was somehow being overwhelmed and that I was possibly losing mail. Digging around a bit more led to these types of log entries, which seemed to be the subject of the logwatch report:

Feb 21 13:01:45 pyxidis postfix/smtpd[5994]: connect from unknown[unknown]
Feb 21 13:01:45 pyxidis postfix/smtpd[5994]: lost connection after CONNECT from unknown[unknown]
Feb 21 13:01:45 pyxidis postfix/smtpd[5994]: disconnect from unknown[unknown]

From what I can glean, the log entries above indicate that an SMTP connection was established with the kernel but the connecting host hot-potatoed it before postfix was able to process the connection.

When postfix tries to process the connection there's nobody home because the kernel had already removed the connection and it dumps something like the lines above to the mail log.

Here's the scoop from the logwatch docs:

This sometimes occurs in reaction to a portscan or broken bots, or when postfix is overloaded, due to excessive header_checks / body_checks content filtering, or even too few smtpd processes to service the demand. One could reduce the number of header_checks and body_checks, and possibly set smtpd_timeout to 60 (seconds). The key is that existing clients are overloading the number of smtpd daemons. The postfix-logwatch section configuration variable is postfix_ConnectionLostOverload, and the command line option is --connectionlostoverload. If you consider this sub-section to be meaningless, set the level limiter value to 0 and the sub-section will be suppressed.
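For completeness, if you did consider the sub-section meaningless, the suppression described above would be a one-line change in your postfix-logwatch service configuration (the variable name comes from the docs quoted above; the file location depends on your logwatch install):

```
# set the level limiter to 0 to suppress the
# "connection lost due to overload" sub-section
$postfix_ConnectionLostOverload = 0
```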

I was not going to change my header or body checks (because they keep the unwashed spammers at bay) so I opted for tuning my smtpd_timeout down to 30 seconds instead.

Because this looks like a possible resource issue you could also increase the number of smtpd processes allowed to service the pre-queue content-filtering subsystem.
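Assuming a stock postfix layout, both tweaks live in main.cf; the parameter names below are standard postconf(5) parameters, and the values are simply the ones discussed above:

```
# /etc/postfix/main.cf
# drop idle/abandoned SMTP sessions sooner (the default is 300s)
smtpd_timeout = 30s

# optionally allow more smtpd processes (the default limit is 100)
default_process_limit = 150
```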

Wednesday, February 20, 2008

Firefox 3 (beta x) and Firebug

18 February 2008


My favourite extension, by far, for Firefox is Firebug. It is a combination of strace and tcpdump for web applications. It allows you to drill down into all aspects of the request-response loop between your browser and a web app.

If you're doing any JavaScript/AJAX development this tool will be invaluable to you!

Living on the edge though, as you do, I upgraded the version of Firefox I was running to v3.0b3 and to my annoyance Firebug no longer worked.


Lucky for me the crew at Fireclipse have taken Joe Hewitt's Firebug 1.05 and added their own enhancements and bug fixes. I simply grabbed the XPI from here and installed it from within Firefox.

Et Voilà!

Another Reality Dysfunction is averted and life continues unimpeded.

Dovecot time Machine

05 January 2008


I have a XEN virtual machine which was complaining about time. Dovecot seemed to be the most vocal with errors like this in the logs:

dovecot: IMAP(charl): Time just moved backwards by 1 seconds. I'll sleep now until we're back in present.

I run openntpd (the OpenBSD NTP daemon) on the box but that was seemingly not keeping the date, well, up to date. Manually running ntpdate was also not providing the sync I sought, and because this is a XEN box I have no access to hwclock.

Travelling back in time
A bit of digging on my provider's forums showed that they were controlling the time syncing and had forgotten to turn it back on after some troubleshooting they were doing.

That's all good and well but my box was still not syncing.

The Dovecot article linked in the log error message simply states:

With Xen you should run ntpd only in dom0. Other domains should synchronize time automatically (see this Xen FAQ).

XEN wisdom
The XEN FAQ has the following relevant things to say about time on a XEN box:

Q: My xen machines doesn't accept setting its time. Basically, it's stuck to RTC time. ntpdate, date, hwclock all "seem" to work, but they don't actually change the system time. The only way I have to change it right now is to change it in the BIOS.
A: Only affects 1.0. Fixed in newer versions.

Q: Where does a domain get its time from?
A: Briefly, Xen reads the RTC at start of day and by default will track that with the precision of the periodic timer crystal. Xen's estimate of the wall-clock time can only be updated by domain 0. If domain 0 runs ntpdate, ntpd, etc. then the synchronised time will automatically be pushed down to Xen every minute (and written to the RTC every 11 minutes, just as normal x86 Linux does). All other domains always track Xen's wall-clock time: setting the date, or running ntpd, on these domains will not affect their wall-clock time. Note that the wall-clock time exported by Xen is UTC --- all domains must have appropriate timezone handling (i.e. a correct /etc/localtime file).

Q: Is there is some cross-domain time synchronization : are they always in perfect sync, or should I run some kind of ntp in each subdomain ? Or only domain 0 would be enough ?
A: If you want each domain to keep its own time, there are two ways to cause a domain to run its wallclock independently from Xen:
1. Specify 'independent_wallclock' on the command line.
2. 'echo 1 >/proc/sys/xen/independent_wallclock'

To reenable tracking of Xen wallclock:
1. 'echo 0 >/proc/sys/xen/independent_wallclock'

"Shut it down, Shut it down forever!"
Following the FAQ I modified /proc/sys/xen/independent_wallclock and added it to /etc/sysctl.conf ("xen.independent_wallclock = 1") so that the change would survive a reboot.
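Put together, that amounts to the following two changes (the sysctl key is the one named in the Xen FAQ above):

```
# apply immediately on the running domU
echo 1 > /proc/sys/xen/independent_wallclock

# /etc/sysctl.conf -- make the setting survive a reboot
xen.independent_wallclock = 1
```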

I also changed my timezone to match the locale where I am currently working by doing the following:

$ sudo cp /usr/share/zoneinfo/Australia/Sydney /etc/localtime

Time marches on
openntpd now keeps time synced and Dovecot no longer complains about running in the future.

About Me

I love solving real-world problems with code and systems (web apps, distributed systems and all the bits and pieces in-between).