svn / ldap / apache / active directory

We do the WebDAV SVN/Apache bit around these parts. In the apache config there’s this line:

AuthLDAPURL ldap://dc.example.com/CN=Users,DC=example,DC=com?sAMAccountName?one?(objectClass=user)

This works with a flat tree, but I recently moved things around and needed the tree to be searched so we moved to:

AuthLDAPURL ldap://dc.example.com/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)

Note that we’re not looking in the Users folder anymore, and ‘one’ is now ‘sub’.

Unfortunately, everything broke when the change was made, so I played around with it for a bit on another box and found that the ldap client was getting confused by the referrals the ldap server (active directory) was returning.

Notes in bug #26538 point to using the global catalog instead on port 3268. There was work on building an option to ignore referrals but it looks like it didn’t get made.

Instead, I put “REFERRALS off” in /etc/openldap/ldap.conf. Note that I tried /etc/ldap.conf and it didn’t work, and I didn’t bother researching the difference.
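For reference, the combination that worked looks like this (the hostname and base DN are carried over from the examples above; adjust for your domain):

```
# /etc/openldap/ldap.conf: keep the client from chasing AD referrals
REFERRALS off

# or, per the notes in bug #26538, query the global catalog instead:
# AuthLDAPURL ldap://dc.example.com:3268/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)
```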

It may be worth noting that I saw some references to the DNS Zone application partitions when I used wireshark to monitor the ldap requests and that led me down this road.

Linux Professional Institute – LPI

I finished LPI-201 and LPI-202 today. These are the two tests for the Linux Professional Institute Level 2 certification. If you don’t partake in the certification treadmill for fun and profit, you may want to check out LPI‘s website for more information. Basically there are three levels, each more difficult than the last. Each level has a pair of tests, except the third, which has a Core exam plus a set of electives of which you take one; as of now, there’s only one elective. It may be worth noting to the Ubuntu fans out there that the Ubuntu Certified Professional / UCP is the LPIC-1 (LPI-101 + LPI-102) + LPI-199, a Canonical sponsored exam.

As with all of the proctored CBT tests, you agree not to talk about said tests. Since a number of people wrote an entire book about it (which I studied with) I think I’m safe to rant a bit.

After studying last night, I was lying around thinking about rdev/rootflags, trying to remember if I had ever used such arcane beasts. LPI wants you to know how to compile and patch kernels. I never do this on a regular basis anymore, except maybe on gentoo when I’m bored. It’s worth knowing for sure, but kernel patching seems so 90s to me, as long as you’re not a developer. And if you are, why aren’t you using git?

I realized I’ve been using Linux for over 10 years now. I hate saying something like that on principle, but it was a strange thought. It brought me back to installing slackware from floppy disks on Jason’s box (whose site appears down right now) because he didn’t have a cdrom. And hand soldering a PLIP cable because we couldn’t afford network cards. The first kernel I compiled, must have been 1.2.13 or so, compiled for three days straight before it failed. The next attempt succeeded, so I have no idea what went wrong. We always compiled on my box because it had the fast 486DX in it, a present from my parents.

Anyways, I was disappointed by the LPI format. I would have preferred simulations like Cisco or Microsoft have pulled off so well. LPI is mostly (totally) multiple choice and fill in the blanks. The latter amounts to filling in pieces of semi-obscure bind configs (or worse, innd), or typing out the full command with options for some disk function with fsck. Some of these I happen to know. Some I’ll go along with as reasonable questions, like tar options or maybe even cpio or the like. But start asking me about innd flags, and you’re getting ‘man innd’ as my FITB answer. I realize Linux administration is a broad category, and there needs to be coverage, but I’d prefer the testing be done on comprehension, rather than on how many flags I can memorize.

I’m not going to bother with the new LPIC-3 for a while. There are no study materials out there yet AFAIK, as it’s still so new, and you’re always playing roulette with tests you’re not prepared for. Like taking the MS SQL 2000 Administration exam only to find no material on installation. That was a shock, especially with so much of the Microsoft Press book dedicated to the subject. Back to the Cisco treadmill it looks like.

Creating DEBs from scratch

If you’ve ever made a deb, you’ve likely noticed the confusing pile of helper apps and scripts. I initially fell back on just using dpkg-deb. For a current project though, I needed to make the deb completely from scratch.

I’m attempting to make a deb for Oracle Database. The touted “Oracle Universal Installer” is a huge pile of shit, that is, an X-based java program. Even when I run it in silent mode with a scripted response file, it still tends to spawn itself off as a new parent so I can’t keep my scripts wrapped around it. My solution has become to perform a scripted install on a box, then package the completed install (the whole whopping 1.5GB of it) into a deb. I don’t want to move the 1.5GB binary tree into my deb build folder, so I decided to create the deb by hand. This is simple up to a point.

Of all the discussion out there about the deb format, the best reference is simply the deb man page. I couldn’t find much in the Debian Policy Manual or New Maintainers Guide.

A deb is an ‘ar’ archive containing debian-binary, control.tar.gz and data.tar.gz.

These files should be in this order. debian-binary should contain a single line with the text “2.0” to specify the new deb version. control.tar.gz should be a tar file, gzipped, containing the control file and other scripts as specified in the aforementioned guides. data.tar.gz should contain the files you want the package to install.

So:

echo "2.0" > debian-binary
ar r newpackage.deb debian-binary control.tar.gz data.tar.gz

Control.tar.gz should be created from within your standard DEBIAN directory, ie:

cd DEBIAN
tar -cvzf ../control.tar.gz .

Data.tar.gz should be created at the root of a file system in the same manner, obviously only including the paths you want to be included.

I’m still unable to please all the deb tools with this format. There was discussion years back amongst debian developers about sticking to the “bsd” ar format rather than the “gnu” format, which terminates each member filename with a slash (thereby supporting spaces in filenames). Best I can tell, OpenBSD and FreeBSD have since switched to using gnu binutils as well, so I can’t even find source for a reasonably modern non-gnu ar to compile.

Apt has its own ar code in its source tree that does things its own way.

All of this was done to make debian packaging portable. Granted, the idea was more that you could access debian packages anywhere, rather than create them. Other than using sed or such to go in and modify the deb afterwards, I’m out of ideas.

Update: My sed-fu is poor, but I tried to get sed to match the old bsd style for me with: sed -i 's/^\([A-Za-z.-]*\)\//\1 /' file.deb, searching for text (a filename followed by a slash) at the beginning of a line and replacing the slash with a space. This worked as far as apt-extracttemplates & ar were concerned, but as was to be expected, the data.tar.gz ended up corrupted somewhere as a result. I’m sure a better regex would work.
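To show what that substitution is doing, here it is against a fabricated gnu-style ar header line (timestamp and sizes are made up):

```shell
# gnu ar terminates the member name with a slash; bsd style pads with a space
printf 'control.tar.gz/ 1183768315  0     0     100644  1910      `\n' |
  sed 's/^\([A-Za-z.-]*\)\//\1 /'
```

The slash after the member name becomes a space. The catch, presumably, is that the same pattern can also match bytes inside the compressed data.tar.gz member, which would explain the corruption.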

Instead of spending more time on this, I went back to the dpkg world. After the oracle software is installed, I tar the folders I want and pipe that back into tar in the build directory:

cd / ; tar -c /apps/stuff /opt/stuff /etc/configstuff | tar -xC /home/build/build
cd /home/build ; dpkg-deb -b build .

Looks like it works okay. The real tests begin as soon as the developers start using the package.

ubuntu feisty netboot / alternatives install with lvm bug 105623

Ubuntu LP Bug 105623 has to do with the lvm device nodes taking a while to show up when doing a network / alternatives install with ubuntu. It appears that lvm waits to sync with udev and udev doesn’t do anything. Eventually lvm times out and makes the device nodes, but it’s something like three minutes later. This happens for every logical volume. If you’re dealing with many logical volumes, this is annoying. If you deal with many logical volumes every day, this is impossible.

Ubuntu splits devmapper into two packages: dmsetup and libdevmapper. The installer (debian-installer) uses udebs and anna instead of debs and apt (because they’re more lightweight). If you check out the difference between dmsetup-udeb_1.02.08-1ubuntu10_amd64.udeb and dmsetup-udeb_1.02.18-1ubuntu6_amd64.udeb, in which this bug is supposed to be fixed, a udev rule has been added (/etc/udev/rules.d/65-dmsetup.rules). I tried backporting these packages to my edgy install (this fix is only in gutsy as of this writing) as I didn’t think anyone else was going to. On the plus side I’ve learned a bit about d-i, but it’s taken quite a bit of time as there doesn’t appear to be much in the way of official documentation.

I ended up taking this file and building it into the feisty netboot initrd. However, it didn’t appear to fix anything. Upon closer examination it runs “dmsetup export”, which isn’t in my feisty documentation, so I think it’s something new. I couldn’t find a sane way to backport all of libdevmapper without redoing the repository, which would mean resigning the release files and adding keys to the keyring in the initrd, or removing the keyring from the initrd. I wanted to avoid mangling my mirror as much as possible. However /etc/udev/rules.d/25-dmsetup.rules on a functional feisty box appears to do something, so I built that into the initrd, and the problems were fixed (LVM creation is once again immediate).

Note that initially I was using preseed/run to download/run a script (before the udebs are unpacked) to install this file but I didn’t feel like udev was reading it as I didn’t have udevcontrol to send udev the read_rules command. While playing around and running udevd with --verbose, it looked like it would periodically recheck for rules, but I’m not going to take the time to test this. All I’m saying is that wgetting 25-dmsetup.rules to /etc/udev/rules.d with -P would probably work and be easier than recreating the initrd.

Of course, this “works for me”, YMMV. For the trusting, my patched feisty initrd is here.

multicast bridging on openbsd to monitor ospf

I’ve been working on getting ospf setup between a Cisco PIX 515E and a Netgear 7324 (Which I despise by the way). It just wasn’t working so I stopped working on it last night, with intentions to setup a sniffing bridge today.

For whatever reason, www.openbsd.org is giving 403s right now. It turns out openbsd.org works, but regardless I grabbed openbsd 4.1 from a mirror and threw it on the pxe server. Network installs are getting old hat at this point, so I figured it’d be good to have around. The key here is to take the pxeboot file, rename it to pxeboot.0 (or openbsd.0), and choose this in the KERNEL line in the pxelinux.cfg/default file. This will try to boot /bsd.rd from the tftp server. It’s worth noting that I fell back on the i386 files over the amd64, as I was getting an error from pxelinux regarding the amd64 boot image.
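For what it’s worth, the relevant chunk of pxelinux.cfg/default ends up looking something like this (the label name is arbitrary):

```
LABEL openbsd
  KERNEL openbsd.0
```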

Anyways, with openbsd 4.1 in hand I did the usual bridging configuration. I used one interface for management. I had sshd running on it and it had an IP, all configured during the install. The other two interfaces re0 and re1 I left alone during the install.

ifconfig bridge0 create
ifconfig re0 up
ifconfig re1 up
brconfig bridge0 add re0 add re1 up
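Those commands are one-shot; to have the same bridge come up at boot on openbsd of this vintage, my understanding is the equivalent config files would be roughly the following (bridgename.if was merged into hostname.if in later releases), though I only needed the one-off above:

```
# /etc/hostname.re0 and /etc/hostname.re1 each contain:
up

# /etc/bridgename.bridge0 contains:
add re0
add re1
up
```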

I saw a ton of vlan traffic and wandered through the netgear gsm7324 config for a bit to clean things up. Once I was reacquainted with their weird vlan configuration, progress stopped. There was no ospf traffic going across the link (I had since connected re0 and re1 between the two devices). I could believe that the PIX might be filtering the ospf traffic, and I could believe I had misconfigured ospf on the gsm7324, so I spent a bunch of time tweaking these. Eventually I was out of ideas though, and I hadn’t seen any ospf traffic at all.

I decided to give the interfaces IP addresses and run tcpdump against them instead of against the bridge to look for the multicast ospf traffic, and I immediately started seeing ospf traffic across the bridge.

I rebooted the openbsd box and reconfigured the bridge. No ospf traffic. I checked net.inet.ip.forwarding and net.inet.ip.mforwarding, which I was pretty sure had to do with routing and not bridging, and verified their settings didn’t affect anything. I had spent a bit of time staring at the ifconfig output looking for any variance, and this time noticed that there was an inet6 line but not an inet line. "ifconfig interface inet up" did nothing, so I ran "ifconfig interface inet6 ipv6address delete" and I started seeing the ospf multicast traffic.

Make whatever assumptions you want from that. Annoying, but ospf is up now, and I’m moving on.

front panel audio on asus a8n-e and a8n-sli (AC’97)

I messed around a bit getting audio working on a couple desktops here in the office. Most of the engineering workstations are Asus A8N-E or A8N-SLI boards. There’s a jumper block labeled FP_AUDIO on board for the front panel audio. You may have noticed in the past that some computers will disable the rear speaker output when you connect something to the front output, such as to automatically turn off the speakers when you plug in headphones. I find this nice. The catch is that this is usually done by routing the audio to the front panel and then back, using a mechanical headphone switch that allows the electrical path to continue back to the motherboard when there is no connector in the front panel, but opens the connection when you plug in.

There’s some decent specs here showing the connections. Basically, if you don’t have the right type of front panel audio connected, then your rear audio connector is disconnected. Alternatively you can jumper pins 9/10 and 5/6 to force audio back to the rear connector when not using a front panel. I have yet to see this done on any of my boards, but I get the impression that this is default.

cisco console cables for synaccess nc08

I inherited a Synaccess NC-08 serial console switch. It’s rack mountable, although only front mount. I did take the time a while back to drill new mounts so it would be rear mount.

It came, I suppose, with a number of RJ-45 to DE-9 (DB9) Female adapters (see npman.pdf page 47). This is convenient as I plug this adapter into a serial console connector on a switch, and can use the existing patch cabling to color code and work the connection back to the rack the console switch is in. Unfortunately, Cisco had the same bright idea. I tried connecting their two RJ45-DE9 adapters together with a male-male gender changer but that didn’t work. I then tried putting a null modem cable in there as well, but that didn’t work either.

Giving up, I emailed synaccess support hoping they’d have an easy answer so I wouldn’t have to think about it. They called me back -right- away. I was shocked. I spent a lot of time trying to explain what I was trying to do though. The idea that I’d hook a switch to a serial device seemed to confuse them. I wonder what their normal customers use these for. They’re cheap I suppose, but I don’t figure they make a good scrambled egg or anything. They had no answer (about the cable, it’s possible it makes a scrambled egg still).

So I stared at the two pinouts for a while and drew up my own cable. Cisco seemed to ignore DCD in their console cable, so I did here too, hoping it would work. In the past I’ve seen DCD tied to DSR and I didn’t want to have to be splicing wires. I also dropped the second ground on the cisco side, figuring the two would be tied together in the cisco connector and that I didn’t have to worry too much about electrical physics in this small little cable. And aha! it works. Now if I could just buy six of these instead of having to make them.

Cisco to Synaccess rj45 adapter cable pinouts (‘:’ is connected, ‘X’ is not connected; for the search engines)
1 CTS : RTS 2
2 DTR : DSR 1
3 TXD : RXD 5
4 GND : GND 3
5 GND X DCD 6
6 RXD : TXD 4
7 DSR : DTR 8
8 RTS : CTS 7

hosting multiple domains with exchange

This was tough to find a concrete answer for. I don’t know why I didn’t just try it, although I was getting there. Once DNS (MX records) was all set up, and my smtp gateway was configured to forward a new domain, I was having trouble convincing exchange to use the new domain. There were plenty of examples where you add the domain to the default recipient policy in ESM. In the course of doing this though, it was made clear to me by a popup that exchange wanted to give every user an email at this new domain if I checked the check box to enable it. I left the box empty, manually added an email at the new domain in ADUC, but got relay errors. I figured I could use a new recipient policy and a group with an ldap search filter to apply the new domains to those users who worked with that project, but I really only wanted one or two emails and they’d be special aliases anyways, so I didn’t want to have another group kicking around.

I started to try this yesterday, but got confirmation this morning when I did some more reading here: you can add recipient policies and apply no filter. This appears to have the correct effect of allowing me to use the domain, without the recipient update service trying to do anything automatically on me.

It’s funny that I keep posting about exchange. I’ve been working with a lot of other cool software, but most of that makes sense as I learn it. It’s only been exchange where I’ve said “what the fuck?” and felt the frustration that leads to a post, hoping to shorten someone else’s googling.

GFI MailEssentials and NDR messages

I inherited GFI MailEssentials and MailSecurity recently.

I was troubleshooting a problem today where an SMTP sender was getting an NDR when emailing one of my users, but the exchange message tracking center claimed the message was delivered to the store.

Enter GFI MailEssentials, which optionally sends an NDR when it thinks something is spam. Here’s the fun catch though: it sends a 5.1.1 “email account does not exist”. In hopes of convincing the spammer the account doesn’t exist anymore? As if bulk mailers use legitimate return addresses.

It’s certainly not to inform the legitimate user their mail was rejected, as the NDR is a farce. It’s not signaling exchange to send an NDR, but rather taking these actions itself, so make sure logging is on. Fortunately there’s a template file in MailEssentials\templates called ndr.xml. Open it up in notepad, change the 5.1.1 references to 5.5.0 and put in your own custom anti-spam message instead of “this user does not exist”.
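The same edit can be scripted; here is the idea demonstrated on a fabricated one-line sample, since I’m not reproducing GFI’s actual template structure here:

```shell
# stand-in for MailEssentials\templates\ndr.xml (the real structure differs)
printf '<status>5.1.1</status>\n' > ndr-sample.xml
# rewrite the status code, keeping a backup of the original
sed -i.bak 's/5\.1\.1/5.5.0/g' ndr-sample.xml
cat ndr-sample.xml
```

On the actual Windows box notepad does the job; this is just the equivalent transformation.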

Not that this software should be sending NDRs. I’m sure I’m flooding the net with NDRs, but it looks like it’s hooking after the smtp service, not into or before. I’ll replace it with SA eventually.

Update 07/2007:
The NDR template just wasn’t working and GFI never replied the last time I sent them the requested tech support logs. I ran into an issue a couple of weeks ago where messages would go to GFI (sent to advanced queuing in Exchange System Manager) and never come back. Stopping GFI would get the messages back. I just deinstalled GFI and I’m replacing it with a traditional SpamAssassin installation.

forwarding email with microsoft exchange contacts

Also known as, where is /etc/aliases in exchange and why again is point and click “easier”…

I’ve seen a ton of howto’s on how to do this, and I wouldn’t say anything about this if they weren’t all so damn round about or not to the point.

This article has you creating a contact with the external email address, then using another user account to forward mail to that contact. They mentioned exchange 2000 and not exchange 2003, and I don’t want a user account kicking around as well, I just want an internal email address to forward to an external one.

Microsoft recommends something similar in kb 281926.

But for the sake of our sanity, let’s try something simpler.
1) In ADUC with ESM, right click, new, contact
2) Enter the names, next
3) set alias to the internal name. click modify, choose smtp, type the external address. next.
4) finish.
5) right click the contact. properties.
6) Exchange Advanced tab, click “hide from exchange address lists”
7) email address tab. the external address should be bold (primary). check that you have an email alias for all the domains you want (if your ad domain is not your email domain, add another smtp address)
8) done.

y2dst and exchange

Computers and servers seemed to update okay. I’m still tracking down a few boxes as I realized kerb isn’t working on them, but for the most part everything took its updates, and everything else required the normal flip of the dst hour by hand (such as pbx’s).

OTOH, there’s this little tool we sometimes use in companies called exchange. This tool contains calendars, with appointments, which have times. These times, are affected by DST. So be it.

Fortunately for us, microsoft has PILES of documentation. DST Home base, KB 931836, 926666, 930879, 931667. They also revealed late in the already late process (many tools didn’t come out until Feb 2007) after some people had already prepared for this that it totally screws up resource mailboxes, which require another process.

Fine. The server patch and exchange patch went okay, but the calendar update tool? Take a close look at this kb article. How long does it take you to figure out the order of steps you should follow? The TOC is pretty useless and misleading.

I eventually figured out that “How to manually configure and run Msextmz.exe” is the “how to be an exchange hacker because we didn’t see this issue ever coming up”, also known as: “How to use some scripts to make tab delimited text files of all your DNs, matching up a load of timezones in a crapshoot fashion and hoping it all works out in the end.”

I instead used “How to run Msextmzcfg.exe” which is this little vb looking app that does some of the above work for you, dumping out a bunch of text files everywhere (mostly in a hostname folder, btw, use netbios names). I checked the “extract recurring meeting information” box even though it warns of the increased overhead. We have < 100 users. Be aware of the serious list of “things this shit does not do right” in this article:

“A time zone may be ambiguous”

Our tool often doesn’t do shit in PST

“There is a limit on the number of mailboxes that can be processed per server”

This can only do 65,535 (obviously a 16-bit limit), but if you’re parsing tens of thousands of DNs from a text file at this point, you’re already screwed.

“There may be conflicts with conference room assignments”

this shit totally screws up resource rooms, use a bunch of other utilities to fix this. i really only have one room that matters, so I just opened it up in outlook myself.

Unclear caveats:

1) you can’t install these tools on an exchange server, or even on a machine that has the exchange management tools installed, which it considers an exchange server.
2) the tools tie into outlook, have outlook installed.
3) tzmove.exe which is needed, isn’t really referenced. I believe this is what actually ties into outlook and you download this separately. If when you run the batch script, which you’ve pointed to tzmove, you get an error 0x80004005, it’s because tzmove is an installer, not the real tzmove. Run the installer, cancel the program when it’s ready to do something, and then point the config file back at: “C:\Program Files\Microsoft Office\Office12\Office Outlook Time Zone Data Update Tool”.
4) the grant permissions script at the end of the file wasn’t interested in working for me. it just kept spewing out its usage until I realized it wanted an input file. check this out instead.
5) in case you didn’t notice, this doesn’t scale at all. Microsoft’s idea of scaling this small utility appears to be splitting up the work on a bunch of VMs on whatever hardware you have kicking around. VMs provided here. Note I thought this was a great laugh, and didn’t download it. Maybe you’re supposed to download it, and it isn’t just a joke.

Hopefully you’ve already lived through this, but if not, good luck. I’m still waiting for emails this morning asking what the hell I did to everyone’s calendars over the weekend.

seamless rdp on ubuntu edgy eft (outlook)

I got edgy installed on my work desktop recently. I got beryl working on the regular x server with the nvidia binary drivers. I hear that feisty fawn is going to have the binary drivers in the default install to better support this sort of thing, but it was pretty easy. I used directions here that look like they also appear here but more cleaned up. I’m unsure of the performance impact of this route, but so far the only slowdown has been when running glxgears on the edge of a cube while keeping the cube rotated. I’m also running with twinview support, which I configured using the -twinview option for nvidia-xconfig, but I had to manually change the modes to get the resolution I wanted.

I wanted seamless rdp support and rdesktop 1.5 is in the feisty repository, but has not been backported to edgy.

I added the following to /etc/apt/sources.list:

deb-src http://us.archive.ubuntu.com/ubuntu/ feisty main restricted multiverse universe

Then:

sudo apt-get update
sudo apt-get source rdesktop
cd rdesktop-1.5.0
dpkg-buildpackage -rfakeroot

If you get an error about fakeroot, then you need to install that (sudo apt-get install fakeroot). There are possibly a number of build dependencies that you’ll get errors for, mostly development stuff. I simply installed the recommended packages using apt-get.

There’ll be a .deb file now one level up the tree.

sudo dpkg -i ../rdesktop_1.5.0-1_i386.deb

fr recommended using prevu to modify the package version so that my install wouldn’t conflict with a future install. I skipped this step, as I’m generally a reckless individual.

I built a standard 2k3 install on a vm, turned on remote desktop, installed office, then unzipped the seamless rdp package from cendio. Back on my workstation I ran:

rdesktop -A -s "c:\seamlessrdp\seamlessrdpshell.exe C:\Program Files\Microsoft Office\OFFICE11\outlook.exe" servername &

Outlook popped up and happily allowed me to setup my account. It doesn’t wobble well at all in beryl, but I just put it full screen on one desktop and never move it anyways and it’s moving okay. There’s some more notes available here on the process. I do worry that every application takes a ts session. This seems like some overhead. There’s a similar project here for windows that looks like it might handle this better, maybe something will show up in the future.

screen shot available here.

edit:
To make the beryl+rdesktop collaboration a little less annoying, I’ve wrapped rdesktop in Xnest based on the ideas here. I can now move the window around without the weird half-wobble and without every rollover causing a popup and subsequent burn of said popup.

#!/bin/bash
Xnest -ac -terminate -geometry 1280x1024+0+0 :4 &
DISPLAY=:4 rdesktop -u user -d domain -A -s "c:\seamlessrdp\seamlessrdpshell.exe C:\Program Files\Microsoft Office\OFFICE11\outlook.exe" host &

Note that -ac on Xnest may have security implications. I haven’t researched it as of this writing. I also pulled the IE and Outlook icons out of their .exes and dropped them into a pixmap folder, creating shortcuts on the gnome applet bar that connect to the wrapper scripts. This is pretty satisfying at this point. The Xnest window is the same size as my desktop, so the beryl seams make it a little larger. I moved it to the desktop it’s going to live on and maximize, which clears the excess seams.

swnhacknight

hacknight
hacknight was off to a slow start this week due to an excursion to thai go on broadway. once we got going there was a wide range of discussions, such as:

the trusty old horse thinkpad
MitM attacks against video surveillance systems with wrt’s
opening convenience stores
minipci options for soekris boxes, such as vga video
more eye-fi demonstrations
that we can use galan’s laptop camera to spy on eric
building cameras out of scanners

the meeting was adjourned with a discussion about creating a new front page on the swn website. there’s some chatter that perhaps the website is less than inviting. I realized after we left that we’re probably just scary looking, and nobody dares to disturb us. except maybe saucer dude.

electrical knowledge for data center geeks?

I’m in the process of purchasing a data center UPS at work. Looking at an APC SmartUPS VT currently. I was looking at something larger from Liebert, but the vendor wanted $18k for the install, more than the cost of the hardware itself, and I have a hard time justifying $18k for what should be a couple days worth of work. In the process of all this has been a lot of attention to power. I’m at a junction right now: I basically don’t have enough power for the UPS I’m looking at, but the UPS is larger than I need, as we plan on building a new data center in the not too distant future. This has led me to an electrical code question. In the end, I’m probably going to have our electrical contractor do the work over a vendor, because despite not having the confidence of the vendor’s experience with their own equipment, electrical contractors generally have names, like John, or Bob, and I can chat with them for five minutes and respect their work from the conversation. That and they do a good job without charging $4k/hr or whatever the vendor’s project costs come out to be. But yeah. I like small shops. If I can’t find someone with a first name to talk to who can spend ten minutes explaining the engineering of the situation to me, I’m not going to trust their judgement and I’m going to find someone else. Of course, I’m certainly not going to try to do it myself. I’ll worry about vendor inter-operable LACP, they can worry about harmonics. It’s what we both get paid for.

But still, I’ve been communicating with our electrical contractor and a couple vendors all along, but I’m not really satisfied until I understand the mechanics, or perhaps the electrics, of the situation. Tonight I posted my question on an electricians forum. It’s currently up for debate as to whether I’m allowed to ask questions there, as they have a policy against answering “how-to” questions to avoid laymen killing themselves, doing illegal electrical work, etc. Hopefully they side with me. As I got thinking about their choice though, I realized how much I think about electricity. Sure, I’ve got all these outlets in the ceiling of my data center, all I really have to do is plug my PDU’s from my racks in and not worry about it, right? I’m IT, that’s facilities. Well, there’s no such thing as facilities in my company, and I previously came from even smaller companies where the concept of departments didn’t even exist, so I might be a semi-rare case here. But I think about electricity a lot. I wonder what the current and peak current of my racks and PDUs are, ensuring I’m not overloading a breaker and am evenly balanced across phases. Then when the UPS comes into the picture, I further get to worry about the load on the UPS, run times, etc. All this leads to spending a lot of time figuring out how 120V single-phase power relates to 208V three-phase power, the difference between kVA and kW for UPS sizing, and why the hell my datacenter was built with NEMA 5-20 plugs instead of something rugged and locking like an L5-30.
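As a back-of-the-envelope example of the kVA/kW sizing arithmetic (the voltage, current, and power factor here are made-up illustrative numbers, not my actual load):

```shell
# three-phase apparent power: kVA = sqrt(3) * line-to-line volts * amps / 1000
# real power: kW = kVA * power factor (0.9 is a commonly assumed figure)
awk 'BEGIN {
  v = 208; i = 30; pf = 0.9
  kva = sqrt(3) * v * i / 1000
  kw  = kva * pf
  printf "%.1f kVA, %.1f kW\n", kva, kw
}'
```

which prints 10.8 kVA, 9.7 kW for that hypothetical 30A three-phase feed. It’s exactly this kind of arithmetic, times every rack and phase, that keeps me up at night.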

Maybe that’s why there are specialized vendors out there getting $18k for an install. But people I work for seem to want me to know whats going on, and more importantly, I don’t sleep at night if I don’t get it anyways. So, other admin folk, how does power affect your daily life (besides windpocalypse 2k6 and the fact that casey lives in the sticks)?

stupid sql 2005 notes

I went to move a db from a sql 2000 to a sql 2005 developer edition database yesterday. I detached the database from enterprise manager then attached it using the new do it all app whose name I forget right now. Next time I opened the configuration app I got an XML related error with the message “Object reference not set to an instance of an object”. Some searching on the net only found solutions related to visual studio. I noticed some recommendations to run "aspnet_regiis.exe -i" in the %windir%\Microsoft.NET\Framework\v2.0.50727 folder, but that didn’t do much. There were a ton of .NET results though. I checked windows update and saw that automatic updates hadn’t been installing because windows installer 3.1 wasn’t installed (this is a bad, bad thing), ran through updates which included a .net update, a reboot, and everything was fine again.