Monthly Archives: September 2008

GSM / SMS Pager?

The signal to noise ratio on my cellphone / pda / smartphone is too low. I want a pager just for nagios notifications so I can leave my phone on vibrate. Apparently pagers have disappeared. I toiled on AT&T/Cingular’s site for a while, and found a few articles from years past about how pagers were going away.

So the logical conclusion? Grab another SIM card and get another device to use as a pager. I can’t find one though. I can’t believe that.

  • Rugged – Mil-spec whatever. No worries about dropping it.
  • Long battery life – Weeks to months
  • Small Form Factor – Think of the old Motorola Advisors; it must disappear onto a belt clip.
  • Simple UI – Reading a text message must take a single button press.
  • Sound/Vibrate – Should have a switch to go from vibrate to audible pretty easily.

I don’t really care if it has a qwerty keyboard and can do two-way communication; I’d be okay with that. Really, I just need the above features. It can have more if it so desires.

Any ideas?

munin plugins for jboss monitoring

I grabbed the tomcat plugins from Ticket #74 for munin, specifically tomcat-plugins.tar.2.gz. I then made small changes to the URL and xml lines to work with our jboss install.

-my $URL      = exists $ENV{'url'}      ? $ENV{'url'}      : "http://%s:%s\@127.0.0.1:%d/manager/status?XML=true";
+my $URL      = exists $ENV{'url'}      ? $ENV{'url'}      : "http://%s:%s\@127.0.0.1:%d/status?XML=true";
-if($xml->{'connector'}->{'http'.$PORT}->{'requestInfo'}->{'bytesSent'}) {
-    print "volume.value " . $xml->{'connector'}->{'http'.$PORT}->{'requestInfo'}->{'bytesSent'} . "\n";
+if($xml->{'connector'}->{'http-0.0.0.0-'.$PORT}->{'requestInfo'}->{'bytesSent'}) {
+    print "volume.value " . $xml->{'connector'}->{'http-0.0.0.0-'.$PORT}->{'requestInfo'}->{'bytesSent'} . "\n";

Do this for each xml entry and you’ll be all set:

$ for file in `ls` ; do ./$file ; done
accesses.value 550
free.value 201360024
used.value 317947240
max.value 1037959168
busy.value 4
idle.value 5
volume.value 4574821
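If the graphs come up empty, the connector name probably differs from what the plugin expects. A quick sanity check is to grep the connector names out of the status XML. The sample document below only mimics the shape of the servlet output; in practice fetch the real thing with curl against your own /status?XML=true URL (host, port, and credentials are assumptions):

```shell
# Sample of the status servlet output shape (assumed); in practice pipe in
# the output of: curl -s 'http://user:pass@127.0.0.1:8080/status?XML=true'
xml='<status><connector name="http-0.0.0.0-8080"><requestInfo bytesSent="4574821"/></connector></status>'
echo "$xml" | grep -o 'connector name="[^"]*"'
# prints: connector name="http-0.0.0.0-8080"
```

Whatever shows up between the quotes is what belongs in the plugin’s hash lookup.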

how big is puppet’s envelope?

More and more I run into problems with puppet’s DSL. Today a coworker came to me with problems with a munin plugin definition we have. Normally, if you want to add a munin plugin that isn’t in the standard base, you use our munin_plugin_file definition, which calls the remotefile definition (which simplifies copying files via puppet) and also calls the munin_plugin definition (which essentially makes the symlink to enable the plugin).

Today we wanted to do this with wildcard plugins, but a second call to munin_plugin_file would fail, because the remotefile would get defined multiple times and puppet can’t handle that.

err: Could not retrieve catalog: Puppet::Parser::AST::Resource failed with error ArgumentError: Duplicate definition: Remotefile[munin-plugin-slapd_] is already defined in file /etc/puppet/site-modules/munin/manifests/definitions/munin_plugin_file.pp at line 10; cannot redefine at /etc/puppet/site-modules/munin/manifests/definitions/munin_plugin_file.pp:10 on node

The solution is to use puppet’s immature conditionals to test whether the resource was already defined, and not redefine it.

define munin_plugin_file($plugin_config = "/etc/munin/plugins", $plugin_dir = "/usr/share/munin/plugins", $plugin) {
    if defined(Remotefile["munin-plugin-$plugin"]) {
        debug("munin-plugin-$plugin already defined")
    } else {
        remotefile { "munin-plugin-$plugin":
            path    => "$plugin_dir/$plugin",
            module  => "munin",
            source  => "plugins/$plugin",
            owner   => root,
            group   => root,
            mode    => 755,
            require => Package["munin-node"]
        }
    }
    munin_plugin { $name:
        plugin_config => $plugin_config,
        plugin_dir    => $plugin_dir,
        plugin        => $plugin,
        require       => Remotefile["munin-plugin-$plugin"]
    }
}

Note that the debug line is there because puppet conditionals can’t have empty blocks, see bug #1109 (tracker is down now, I’m guessing at that link).
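For illustration, the collision came from two wildcard instances sharing one plugin file. The instance names below are hypothetical, but with the defined() guard in place a manifest like this compiles:

```puppet
# Two wildcard slapd_ plugin instances that share the same plugin file.
# Without the defined() check, the second call tried to declare
# Remotefile["munin-plugin-slapd_"] a second time and compilation failed.
munin_plugin_file { "slapd_maindb":
    plugin => "slapd_",
}
munin_plugin_file { "slapd_replicadb":
    plugin => "slapd_",
}
```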

I’m really wondering about this because I’ve had these sorts of problems twice today; normally it’s only every once in a while. In shorter form:

Bryan Mclellan [10:59 AM]:
production-sites includes apache::module::php4, which includes the package, and runs apache_module. i wanted the php4-ldap package, which the php4 class installs. so I added an include for php4 in the production-sites.
but php4 also installs the apache2 php4 module, so there was a naming conflict.
so I removed the package from apache::module::php4 and added an include to php4 there, but it simply wouldn’t do the include. perhaps too many levels deep.

You have to put a lot of thought into your design if it’s going to scale. Especially when you put everything in puppet like we do. Someone told me recently that our puppet code base was much larger than most.

~/puppet$ find site-modules/ -name '*.pp' -exec cat '{}' \; | wc
4166   10820  101647
~/puppet$ find site-modules/ -name '*.erb' -exec cat '{}' \; | wc
3565   12773  112231
$ grep -R class site-modules/ | wc
152     578   12264

modules and site-modules have a lot of overlap. As others are picking up puppet, I wonder how long it takes them until they start running into this. Of course, if you avoid nesting definitions, and keep all of your classes separate, you won’t see this. But you’re doing a lot of work too.

ldap auth for request-tracker3.6 on ubuntu hardy

A while back I posted about ‘ldap auth for request-tracker3.6 on debian etch‘. I recently upgraded the old server from debian etch to ubuntu hardy; here is an update.

I’ve recently rebuilt request-tracker and there is a newer method for handling LDAP using ExternalAuth. You can find it on CPAN. I created a deb for it:

# get librt-extension-commandbymail for some dependencies
wget http://mjj29.matthew.ath.cx/debian-upload/librt-extension-commandbymail-perl/librt-extension-commandbymail-perl_0.05-1.dsc
wget http://mjj29.matthew.ath.cx/debian-upload/librt-extension-commandbymail-perl/librt-extension-commandbymail-perl_0.05.orig.tar.gz
wget http://mjj29.matthew.ath.cx/debian-upload/librt-extension-commandbymail-perl/librt-extension-commandbymail-perl_0.05-1.diff.gz
dpkg-source -x librt-extension-commandbymail-perl_0.05-1.dsc
wget http://www.cpan.org/authors/id/Z/ZO/ZORDRAK/RT-Authen-ExternalAuth-0.05.tar.gz
tar -xvzf RT-Authen-ExternalAuth-0.05.tar.gz
dh-make-perl RT-Authen-ExternalAuth-0.05
cp librt-extension-commandbymail-perl-0.05/debian/RT.pm RT-Authen-ExternalAuth-0.05/debian/
# add -Idebian to RT-Authen-ExternalAuth-0.05/debian/rules
# $(PERL) -Idebian Makefile.PL INSTALLDIRS=vendor \
cd RT-Authen-ExternalAuth-0.05/
dpkg-buildpackage -rfakeroot

Then take the examples (RT-Authen-ExternalAuth-0.05/etc/RT_SiteConfig.pm) and add them to your RT_SiteConfig.pm like:

Set($ExternalAuthPriority, ['My_LDAP']);
Set($ExternalInfoPriority, ['My_LDAP']);
Set($ExternalServiceUsesSSLorTLS, 0);
Set($AutoCreateNonExternalUsers, 0);
Set($ExternalSettings, {     # AN EXAMPLE DB SERVICE
    'My_LDAP' => {           ## GENERIC SECTION
        # GREAT BIG SNIP HERE
    }
});

munin-cgi-graph with fastcgi on debian etch

We use munin a lot. Consequently, munin-graph takes more than 5 minutes every run, breaking munin-cron and losing data. We graph a lot more data than we normally look at, since most of it only matters when we’re planning something or when something breaks, so we don’t need new graphs every five minutes. I switched munin-graph to use munin-cgi-graph. The basic instructions are in the munin CgiHowto. It’s pretty easy.

But each node page has a lot of graphs, so it’s annoying to wait for them all to get created. FastCGI helps, so I went about setting that up too.

First, the package ‘libapache2-mod-fastcgi’ is in non-free, so you may not find it. I started using ‘libapache2-mod-fcgid’ for a bit, but since I was having trouble, I downloaded the fastcgi package from non-free and added it to the local repository. munin-cgi-graph is in /usr/lib/cgi-bin on debian, so I added this to my apache config:

ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory /usr/lib/cgi-bin/>
    AllowOverride None
    SetHandler fastcgi-script
    Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
</Directory>
<Location /cgi-bin/munin-cgi-graph>
    SetHandler fastcgi-script
</Location>

I had munin-cgi-graph working if I removed the SetHandler line, but when I left it in, I’d get no graphs and a lot of errors like:

FastCGI: incomplete headers (0 bytes) received from server

[error] [client 10.0.0.60] Premature end of script headers: munin-cgi-graph

Warning: Request for graph without specifying domain. Bailing out.

I had to apply a diff to munin-cgi-graph for fastcgi support. For whatever reason I had skipped this, perhaps assuming it had gotten into the deb already. Download the diff to your home directory, then apply it:

cd /usr/lib/cgi-bin/
cp munin-cgi-graph munin-cgi-graph.in
patch -p0 munin-cgi-graph.in ~/munin-cgi-graph_fastcgi.diff
mv munin-cgi-graph munin-cgi-graph.orig
cp munin-cgi-graph.in munin-cgi-graph

The action specified by the URL is not recognized by the wiki

A while back I set up mediawiki on debian etch. Recently I upgraded to MediaWiki 1.11.2-2 (ubuntu) from 1.7 (debian) and started having problems logging in. Clicking on “login / create account” would return to an ‘index.php’ wiki page. Going to Special:Userlogin would give me the login page, but after submitting it would say “No such action”, “The action specified by the URL is not recognized by the wiki”.

A few people have seen this in different circumstances. The most related thread I found to my problem mentions wgUsePathInfo.

I kept getting an error about a ‘redirect loop’ though, and finally realized my Apache rewrite configuration was using an absolute file path. I settled on this apache configuration combined with setting wgUsePathInfo to false in my LocalSettings.php:

RewriteEngine On
RewriteCond %{REQUEST_URI} !^/(index.php|skins|images|icons|opensearch_desc.php|api.php|~.*)
RewriteRule ^/(.*)$ /index.php?title=$1 [L]
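As a rough sanity check on the exclusion pattern, the same regex can be exercised with grep -E (mod_rewrite uses PCRE, but this particular pattern behaves the same in ERE; the sample URIs are made up):

```shell
# URIs that do NOT match the exclusion pattern are the ones the
# RewriteRule would send to index.php?title=...
pattern='^/(index.php|skins|images|icons|opensearch_desc.php|api.php|~.*)'
printf '%s\n' /index.php /Main_Page /skins/common.css | grep -Ev "$pattern"
# prints: /Main_Page
```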

Wrangling RT CommandByMail Error Messages

Request-tracker is more useful with the CommandByMail extension because you can interact with tickets in fewer steps. It likes to parse anything of the form ‘word: word’ as a command though, so ‘http://somewhere’ causes an error email to be generated.

People emailing RT for support are usually a fairly non-tech-savvy crowd who can be confused by the error message. I modified TakeAction.pm as such:

diff TakeAction.pm.orig TakeAction.pm
663c663,668
<     my $ErrorsTo = RT::Interface::Email::ParseErrorsToAddressFromHead( $args{'Message'}->head );
---
>     my $ErrorsTo = '';
>     if ( defined( $RT::ErrorEmailAddress ) ) {
>       $ErrorsTo = $RT::ErrorEmailAddress;
>     } else {
>       $ErrorsTo = RT::Interface::Email::ParseErrorsToAddressFromHead( $args{'Message'}->head );
>     }

Then add the line below to your RT_SiteConfig.pm and restart:

Set($ErrorEmailAddress, 'noc@example.com');

converting a vmware image to kvm

First I converted the disk, which was a flat file:

qemu-img convert webapp02-flat.vmdk -O qcow2 webapp02.qcow2

Then I grabbed vmware2libvirt. Initially I got an error:

./vmware2libvirt -f webapp02.vmx -b eth0
Traceback (most recent call last):
File "./vmware2libvirt", line 255, in <module>
</disk>''' + get_network(vmx,  bridge, netmodel) + '''
File "./vmware2libvirt", line 70, in get_vmx_value
raise V2LError("Bad value for '" + key + "'")
__main__.V2LError: "Bad value for 'displayName'"

The -b was because I use bridging instead of kvm’s ‘default’ NAT networking. Removing all the whitespace from the vmx file fixed the error. I ran ‘virsh’ and used define to import the configuration. I had to go back and edit the config in /etc/libvirt/qemu to change the target dev to ‘sda’ and the bus to ‘scsi’, as well as modify the source file to match my path. I also had to change ‘eth0’ to ‘br0’; I wasn’t sure initially how smart the python script was. Then I used define again in virsh to load the changes, and everything was pretty happy.
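The resulting edits to the domain XML in /etc/libvirt/qemu looked roughly like the fragment below (the image path is just an example from my setup, and attribute details may vary by libvirt version):

```xml
<!-- disk: target changed to sda on the scsi bus, source pointed at the converted image -->
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/webapp02.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
<!-- network: vmware's eth0 replaced with the br0 bridge -->
<interface type='bridge'>
  <source bridge='br0'/>
</interface>
```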

Later I started getting

sd 0:0:0:0: ABORT operation started.

sd 0:0:0:0: ABORT operation timed-out.

Eventually the SCSI bus would reset and things would work, but it was annoying as hell. I changed the fstab and grub menu.lst entries from sda to hda, then went back into the xml file and switched back to IDE and hda. I saw this thread about issues with the scsi driver and 2.6.21. It may not have been related; IDE worked fine. I was running debian etch with 2.6.18-6-686 but am in the process of migrating to ubuntu hardy.

bicycle commuting

I hate politics. Bike Portland has an article about the Pro Walk/Pro Bike conference that’s here in Seattle this year. I didn’t know this was coming, so it’s pretty awesome that we got our green strips on 2nd and 4th avenue just before it. Of course, just about every time I’m in a green strip, especially riding home down 2nd avenue, cars still turn into me and I have to evade or push off them. The most ironic part was that the last time I had to push off a car was the first day I rode the green strips on 2nd avenue.

I like the Seattle bicycle master plan, but I often question how long it’s going to take to get there. From the article:

On transportation, Nickels declared that in Seattle, “We recognize that the age of the automobile has passed,” and he said they’re working toward a balanced transportation system. A major part of that balance is the 25 mile Burke-Gilman Trail that was opened in 1978.

Man, weren’t things progressive back in… whoa, 1978? What’s our big accomplishment now? Well, props to the Chief Sealth Trail. While the Interurban is nice everywhere but downtown (north, south), shame on the downtown portion of the trail being in a dirty, dumpster-filled alley where the homeless pass out and the road surface is uneven brick.

In August I learned about the trail along the SODO light rail that goes from the end of the bus tunnel to Forest. The master plan wants to extend it, but good luck. And of course there’s the missing link from 2nd ave to the light rail.

Looks like the Burke-Gilman trail will get some work done on its missing link at least:

Backing up those words, Nickels announced that he’ll include $8.6 million in his upcoming budget to complete a major missing link of the trail.