Microsoft Azure on Ubuntu 12.10

I had to upgrade to Ubuntu 12.10 (Quantal Quetzal) to get a recent enough set of nodejs packages for this to work.

# First install nodejs and the nodejs package manager
sudo apt-get install nodejs npm
# The Amateur Radio 'node' package installs /usr/sbin/node, so on Debian/Ubuntu Node.js installs as /usr/bin/nodejs. Node scripts expect 'node'.
sudo ln -s /usr/bin/nodejs /usr/bin/node
# Now install the azure tools using npm
sudo npm install azure-cli -g

# Get your azure credentials with the link provided
azure account download
# Import these credentials
azure account import foo-credentials.publishsettings
# Get a list of VM images
azure vm image list
# Get the list of hosting locations
azure vm location list
# Create an instance. Change UNIQUE_SERVERNAME and USERNAME as appropriate.
azure vm create UNIQUE_SERVERNAME CANONICAL__Canonical-Ubuntu-12.04-amd64-server-20120924-en-us-30GB.vhd USERNAME --location "East US" --ssh
azure vm start UNIQUE_SERVERNAME
ssh USERNAME@UNIQUE_SERVERNAME.cloudapp.net

I found this document most useful once I got the tools working.

Stubbing class constants with rspec and Ruby

I had some Ruby code that used File::SEPARATOR and File::PATH_SEPARATOR to run on both Unix and Windows, so I wanted to stub these values to test for both platforms. There are a couple of examples out there that build on each other: one adds a feature that saves and restores the former value, and another builds on that to support class constants. Both expect ActiveSupport, so there's a little code here to work around that. I'm ripping this directly from my spec_helper.rb before I throw it away, because it feels over-engineered and complicated.

def with_warnings(flag)
  old_verbose, $VERBOSE = $VERBOSE, flag
  yield
ensure
  $VERBOSE = old_verbose
end

# http://missingbit.blogspot.com/2011/07/stubbing-constants-in-rspec_20.html
def parse_constant(constant)
  source, _, constant_name = constant.to_s.rpartition('::')

  [constantize(source), constant_name]
end

def with_constants(constants, &block)
  saved_constants = {}
  constants.each do |constant, val|
    source_object, const_name = parse_constant(constant)

    saved_constants[constant] = source_object.const_get(const_name)
    with_warnings(nil) { source_object.const_set(const_name, val) }
  end

  begin
    block.call
  ensure
    constants.each do |constant, val|
      source_object, const_name = parse_constant(constant)

      with_warnings(nil) { source_object.const_set(const_name, saved_constants[constant]) }
    end
  end
end
####################

# File activesupport/lib/active_support/inflector/methods.rb, line 209
def constantize(camel_cased_word)
  names = camel_cased_word.split('::')
  names.shift if names.empty? || names.first.empty?

  constant = Object
  names.each do |name|
    constant = constant.const_defined?(name) ? constant.const_get(name) : constant.const_missing(name)
  end
  constant
end

Then you can perform:

  it "does something when running on Windows" do
    with_constants "::File::PATH_SEPARATOR" => ";" do
      # code
    end
  end

Downloading All The GitHub Repositories

I had a need to grab all of the GitHub repositories for Cookbooks, a GitHub user maintained by the Chef community for collecting many cookbooks in one place for development. All of these cookbooks should be on the Opscode Community site, which is where you should go if you're browsing for cookbooks to use yourself. But I needed to grep through a large number of cookbooks to develop statistics on Chef cookbook usage patterns, so I needed All The Things.

#!/usr/bin/env ruby
# 2012-01-11 Bryan McLellan <btm@loftninjas.org>
# Fetch the list of repositories from a Github user and 'git clone' them all

require 'rubygems'
require 'json'
require 'net/http'

url = "http://github.com/api/v2/json/repos/show/cookbooks"
dir = "cookbooks"

if File.basename(Dir.getwd) != dir
  if File.exist?(dir)
    puts "Target directory of '#{dir}' already exists."
    exit 1
  end

  Dir.mkdir(dir)
  Dir.chdir(dir)
end

resp = Net::HTTP.get_response(URI.parse(url))
data = resp.body

result = JSON.parse(data)

result['repositories'].each { |repo|
 puts "Fetching #{repo['url']}"
 system "git clone #{repo['url']}"
}
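
Once everything is cloned, the grepping can be as crude as you like. As a rough sketch (the resource names here are just illustrative), something like this counts how often a few common Chef resources appear across all the recipes:

# Count occurrences of a few common Chef resources across every recipe
grep -hoE '\b(package|service|template|execute)\b' cookbooks/*/recipes/*.rb \
  | sort | uniq -c | sort -rn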

Generating entropy in the cloud

Virtual machines don’t produce a lot of entropy on their own. Typically the need for additional entropy triggers talk of hardware-based entropy generators or network-based entropy distribution protocols. Sometimes you just need a little bit of entropy for a moment.

$ sbuild-update --keygen
Generating archive key.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 279 more bytes)

Disk tends to be one of the few remaining sources of entropy on virtual systems. I usually do something like this:

$ while true ; do cat /proc/sys/kernel/random/entropy_avail  ; \
    sudo find / > /tmp/find.log ; sync ; done

The numbers printed should go up and down as your application consumes the entropy. Hit CTRL+C when you’ve got enough. This is probably a bad source of entropy, but the world is inherently dangerous.
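
If you want to watch the pool from a second terminal while something else consumes it, a minimal alternative (watch comes with the procps package):

# Print the available entropy once a second
watch -n 1 cat /proc/sys/kernel/random/entropy_avail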

Disabling Firefox shortcuts on OS X

I joined a startup, and they gave me a MacBook Pro. It was bound to happen eventually; all the cool kids use MBPs and startups are cool, right?

The great period of adaptation began, as I learned I couldn’t have simple technology like sloppy focus. One of the greatest inconveniences is the keyboard. I have a hard time using the keyboard on the laptop because special keys are in different places than I’m used to. Even with a Unicomp Spacesaver M (for those of us attached to the Model M), some change is apparent, like Apple using “delete” when they mean “backspace” (the Unicomp uses “delete ->” when it means “delete”).

Most frustrating of this set of issues is that in Firefox the home and end keys go to the top and bottom of the page, whereas you have to use cmd+left and cmd+right to go to the beginning and end of a line in a textbox. However, sometimes these keys represent page forward and page back, and sometimes they don’t (usually in a Flash app, I believe). The solution is to install the keyconfig extension. After you restart Firefox, you will find it in the Tools menu, where you can disable “GoBackKb” and “GoForwardKb”. Then these keys work as expected in a text box, and you no longer find yourself going back a page unintentionally, possibly losing a textbox full of input along the way.

require-rubygems.overrides and gem2deb 0.2.2

For those working on moving Debian Ruby library packaging to gem2deb, you can exempt specific hits from the slick built-in ‘require rubygems’ test by adding the path to debian/require-rubygems.overrides.

For instance, to exempt this:

debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/version'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/dependency'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/spec_fetcher'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/platform'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/format'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/dependency_installer'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/uninstaller'
debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb: require 'rubygems/specification'
debian/chef/usr/lib/ruby/vendor_ruby/chef/providers.rb: require 'chef/provider/package/rubygems'
Found some 'require rubygems' without overrides (see above).
ERROR: Test "require-rubygems" failed. Exiting.
dh_auto_install: dh_ruby --install /«BUILDDIR»/chef-0.10.0/debian/chef returned exit code 1
make: *** [binary] Error 1
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2

debian/require-rubygems.overrides should contain:

debian/chef/usr/lib/ruby/vendor_ruby/chef/provider/package/rubygems.rb
debian/chef/usr/lib/ruby/vendor_ruby/chef/providers.rb

locale errors on debian

I received the following error while working on a Debian sid box:

$ schroot -l
terminate called after throwing an instance of 'std::runtime_error'
  what():  locale::facet::_S_create_c_locale name not valid
Aborted

With debconf and locales already installed, I ran ‘export | grep LANG’ and discovered that my locale was set to ‘en_US.UTF-8’. Then I ran ‘dpkg-reconfigure locales’, checked that locale, and set it as the default.
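
For reference, the whole sequence looks something like this, assuming your shell wants en_US.UTF-8:

# See which locale the environment expects
export | grep LANG
# Regenerate locales, enabling en_US.UTF-8 and setting it as the default
sudo dpkg-reconfigure locales
# Verify the locale was generated
locale -a | grep -i 'en_US'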

Creating a Debian sid emi for Eucalyptus

For the most part, this is the same as any post about creating an image for Eucalyptus, but I had a hard time figuring out exactly how to put it all together. You need an up-to-date Debian sid system nearby to take the kernel and ramdisk from. I found having a sid VM easier than discovering the commands to build a sid initrd on my Ubuntu workstation.

# First, the prerequisites. You need debootstrap and the euca2ools package installed.
sudo apt-get install debootstrap euca2ools

# Export your eucalyptus variables to use the tools.
source ~/.euca/eucarc

# Create an empty disk image. You can adjust the count to change the root disk size. 1000 is about a GB.
dd if=/dev/zero of=image count=1000 bs=1M

# Put a filesystem on the new disk image
mkfs.ext3 -F image

# Mount the filesystem
mkdir chroot
sudo mount -o loop image chroot

# Install debian sid to the chroot. Notice that the ssh server and curl are included here
sudo debootstrap --include=openssh-server,curl,vim --arch amd64 sid chroot/ http://ftp.debian.org/debian

# chroot into the image
sudo chroot chroot

# Setup basic networking and disk configurations
echo -e 'auto lo\niface lo inet loopback\nauto eth0\niface eth0 inet dhcp' >> /etc/network/interfaces
echo -e '/dev/sda1 / ext3 defaults 0 1\n/dev/sda2 swap swap defaults 0 0' > /etc/fstab

# Set a default root password if you want
# passwd

# Set up the image to automatically install ssh keys
mkdir /root/.ssh
cat <<EOS > /etc/rc.local
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"
exit 0
EOS

# Leave the image
exit

# Unmount the image
sudo umount chroot

# After you've copied the latest /boot/vmlinuz* and /boot/initrd* from your sid system, upload the kernel + ramdisk
euca-bundle-image --image vmlinuz-2.6.38-2-amd64 --kernel true
euca-upload-bundle --bucket sid --manifest vmlinuz-2.6.38-2-amd64.manifest.xml
euca-register sid/vmlinuz-2.6.38-2-amd64.manifest.xml
euca-bundle-image --image initrd.img-2.6.38-2-amd64 --ramdisk true
euca-upload-bundle --bucket sid --manifest initrd.img-2.6.38-2-amd64.manifest.xml 
euca-register sid/initrd.img-2.6.38-2-amd64.manifest.xml

# Prepare the image for upload, using the kernel (eki) and ramdisk (eri) IDs returned by euca-register above
euca-bundle-image -i image --kernel eki-XXXXXXXX --ramdisk eri-XXXXXXXX

# Rename the manifest to something descriptive and upload it
mv image.manifest.xml `date +%Y%m%d`.sid.manifest.xml
euca-upload-bundle -b sid -m `date +%Y%m%d`.sid.manifest.xml

# Register the image to get an EMI
euca-register sid/`date +%Y%m%d`.sid.manifest.xml

You should be able to use euca-run-instances on the EMI that is returned by the last command. Remember to pass an ssh key (that eucalyptus knows about) using -k. If there are any issues, use euca-get-console-output to monitor the instance startup and tail the eucalyptus/nc.log file on the node controller for any errors. Building the initrd this way is a little hackish, because it is actually generated for your sid system, not for the one running in eucalyptus. Chicken, or the egg?
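
A launch then looks something like this; the EMI and instance IDs are placeholders for the values your cloud returns:

# Run an instance from the new EMI with an ssh key eucalyptus knows about
euca-run-instances emi-XXXXXXXX -k mykey
# Watch the instance come up, then check the console if it misbehaves
euca-describe-instances
euca-get-console-output i-XXXXXXXX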

LVM errors with sbuild

Here is a strange one that I fixed, though I’m not sure why the fix works. Roughly following the SBuildLVM Howto and the Chef sbuild cookbook, I have an sbuild server. It was working alright for me, but another user was seeing this:

schroot -c lucid
E: 05lvm: File descriptor 3 (socket:[460392]) leaked on lvcreate invocation.
E: lucid-40c0e109-2d5d-4103-bf92-a44288595dcc: Chroot setup failed: stage=setup-start

When he ran in verbose mode, this line was particularly interesting:

E: 05lvm:

When I su’d to his user, it worked fine for me without verbose but failed similarly with the verbose flag.

In the course of debugging, I started trying to redirect output and I found that these changes to /etc/schroot/setup.d/05lvm fixed the problem. Unfortunately I’m running behind on work so I can’t track down the root cause right now.

--- 05lvm.orig	2011-03-10 19:28:17.000000000 +0000
+++ 05lvm	2011-03-10 19:37:54.000000000 +0000
@@ -36,10 +36,10 @@
 
 	if [ "$AUTH_VERBOSITY" = "verbose" ]; then
 	    lvcreate $VERBOSE --snapshot --name "$CHROOT_LVM_SNAPSHOT_NAME" \
-		"$CHROOT_DEVICE" $CHROOT_LVM_SNAPSHOT_OPTIONS
+		"$CHROOT_DEVICE" $CHROOT_LVM_SNAPSHOT_OPTIONS 2>&1 
 	else
 	    lvcreate $VERBOSE --snapshot --name "$CHROOT_LVM_SNAPSHOT_NAME" \
-		"$CHROOT_DEVICE" $CHROOT_LVM_SNAPSHOT_OPTIONS > /dev/null
+		"$CHROOT_DEVICE" $CHROOT_LVM_SNAPSHOT_OPTIONS 2>&1 > /dev/null
 	fi
 
     elif [ $1 = "setup-stop" ]; then
@@ -57,9 +57,9 @@
 		--pid=$PID || true
 
 	    if [ "$AUTH_VERBOSITY" = "verbose" ]; then
-		lvremove $VERBOSE -f "$CHROOT_LVM_SNAPSHOT_DEVICE" || true
+		lvremove $VERBOSE -f "$CHROOT_LVM_SNAPSHOT_DEVICE" 2>&1 || true
 	    else
-		lvremove $VERBOSE -f "$CHROOT_LVM_SNAPSHOT_DEVICE" > /dev/null || true
+		lvremove $VERBOSE -f "$CHROOT_LVM_SNAPSHOT_DEVICE" 2>&1 > /dev/null || true
 	    fi
 	else
 	    # The block device no longer exists, or was never created,

munin-cgi-graph with fcgid on ubuntu lucid

Two and a half years have passed since I wrote about running Munin with fastcgi-triggered graphs on Debian etch. Unfortunately, not a lot has changed since then. A revolution in trending would have been nice. When I started here, Munin was triggering graph generation using CGI and was painfully slow to use. I switched over to cron-triggered graph generation and was happy. After a data center migration, drawing the munin graphs for that cluster from cron was taking about 130 seconds. As a precaution I wanted to get this down a bit.

Someone asked me why munin-graph would have caused data loss, since munin-update collects the data, and I couldn’t remember. I had problems with both munin-update and munin-graph taking over five minutes in certain circumstances back then. The latter was primarily from the slow response time of the SNMP queries I was doing against MSSQL servers. That was back during Munin 1.2, and a few things have changed since then; most relevant is that you no longer have to patch Munin for fastcgi support.

This time around I used fcgid instead of fastcgi. There are fewer licensing hurdles for fcgid, which was written to be compatible with fastcgi. Provided you already have munin running, install the prerequisites first.

sudo apt-get install libcgi-fast-perl libdate-manip-perl libapache2-mod-fcgid

The packaging should restart Apache as required to load the new module we just installed, but we need to configure our Munin site a bit to link our CGI script to fcgid. Add the following to, or update, the VirtualHost block in your Apache configuration, then reload Apache.

  ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/

  <Directory /usr/lib/cgi-bin/>
    AllowOverride None
    Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
  </Directory>

  <Location /cgi-bin/munin-fastcgi-graph>
    SetHandler  fastcgi-script
  </Location>

Add the following lines to your munin.conf. This causes the munin-graph run from cron to become a no-op, and munin-html will update the img src links to use the CGI script to generate the graphs rather than linking directly to files. You’ll need to wait for the cron job to run once, or run munin-html yourself, to trigger this.

graph_strategy cgi
cgiurl_graph /cgi-bin/munin-fastcgi-graph

Triggering munin-html manually:

sudo -s
sudo -u munin /usr/share/munin/munin-html --debug

Remember that Apache needs to be able to write the graphs out. You will get no graphs, and HTTP 500 errors in your Apache logs, if the munin-cgi-graph script cannot write them. My Munin data directory, /var/www/munin/, is owned by ‘munin’ while Apache runs as ‘www-data’. The following commands fix this, but by default Apache will change the user ownership to ‘www-data’ when it saves a file, so if you try to switch back to munin-graph via cron, you’ll need to fix permissions again.

sudo chgrp -R www-data /var/www/munin
sudo chmod -R g+w /var/www/munin
sudo chgrp www-data /var/log/munin /var/log/munin/munin-graph.log
sudo chmod g+w /var/log/munin /var/log/munin/munin-graph.log

After the switch to fcgid-generated munin graphs, generating all the graphs for a single node would take minutes and was quite painful. I gave the node more CPU resources, but it still took two minutes to draw a page of graphs. I ended up switching back to cron-based graph generation. The additional CPU resources cut about forty seconds off the munin-graph time from cron, which is progress. Having the graphs immediately available when you need them is worth the CPU resources that demand-based graph generation via CGI would otherwise let you share. For the time being I intend to keep giving Munin more CPU until I settle on a better way to do trending.

Monitoring Unicorn connections with munin

Unicorn doesn’t have any monitoring hooks. Typically folks either put nginx in front and monitor response time, do some backlog magic and track errors, or make guesses based on other available information. I’ve been using a modified version of the unicorn_status munin plugin for a while. It tracks CPU time for each worker and considers that worker idle if its CPU time hasn’t changed after sleeping for a second. This doesn’t pan out under load. Still, here it is.

#!/usr/bin/env ruby
#
# unicorn_status - A munin plugin for Linux to monitor unicorn processes
#
#  Copyright (C) 2010 Shinji Furuya - shinji.furuya@gmail.com
#  Copyright (C) 2010 Opscode, Inc. - Bryan McLellan <btm@loftninjas.org>
#    - Specify pid file via environment variable
#    - Do not assume process names
#  Licensed under the MIT license:
#  http://www.opensource.org/licenses/mit-license.php
#

module Munin
  class UnicornStatus

    def initialize
      @pid_file = ENV['UNICORN_PID']
    end

    def master_pid
      File.read(@pid_file).to_i
    end

    def worker_pids
      result = []
      ps_output = `ps w --ppid #{master_pid}`
      ps_output.each_line do |line|
        chunks = line.strip.split(/\s+/, 5)
        pid = chunks[0]
        result << pid.to_i if pid =~ /\A\d+\z/
      end
      result
    end

    def worker_count
      worker_pids.size
    end

    def idle_worker_count
      result = 0
      before_cpu = {}
      worker_pids.each do |pid|
        before_cpu[pid] = cpu_time(pid)
      end
      sleep 1
      after_cpu = {}
      worker_pids.each do |pid|
        after_cpu[pid] = cpu_time(pid)
      end
      worker_pids.each do |pid|
        result += 1 if after_cpu[pid] - before_cpu[pid] == 0
      end
      result
    end

    def cpu_time(pid)
      usr, sys = `cat /proc/#{pid}/stat | awk '{print $14,$15 }'`.strip.split(/\s+/).collect { |i| i.to_i }
      usr + sys
    end
  end
end

case ARGV[0]
when "autoconf"
  puts "yes"
when "config"
  puts "graph_title Unicorn - Status"
  puts "graph_args -l 0"
  puts "graph_vlabel number of workers"
  puts "graph_category Unicorn"
  puts "total_worker.label total_workers"
  puts "idle_worker.label idle_workers"
else
  m = Munin::UnicornStatus.new
  puts "total_worker.value #{m.worker_count}"
  puts "idle_worker.value #{m.idle_worker_count}"
end

And the configuration file:

$ sudo cat /etc/munin/plugin-conf.d/unicorn
      [unicorn_*]
      user root
      env.UNICORN_PID /etc/sv/opscode-chef/supervise/pid

I wrote another plugin today that uses raindrops to collect information about the active and queued connections. It is interesting how much the number of active connections fluctuates. As a result, active connections don’t produce a stable munin graph, but having the queue depth recorded is pretty useful for tracking down latency issues.

#!/usr/bin/env ruby
#  Copyright: 2011 Opscode, Inc.
#  Author: Bryan McLellan <btm@loftninjas.org>
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

require 'rubygems'
require 'raindrops'

def collect(port)
  # raindrops requires an array of strings, even if it denies this 
  addr = [ "0.0.0.0:#{port}" ]
  stats = Raindrops::Linux.tcp_listener_stats(addr)

  puts "active.value #{stats[addr[0]].active}"
  puts "queued.value #{stats[addr[0]].queued}"
end

if ARGV[0] == "config"
  puts "graph_title Unicorn - connections"
  puts "graph_args -l 0"
  puts "graph_printf %6.0lf"
  puts "graph_vlabel connections"
  puts "graph_category Unicorn"
  puts "active.label active"
  puts "queued.label queued"
  exit 0
end

if $0 =~ /.*_(\d+)/
  # the munin wildcard format of plugin_value
  port = $1
elsif ARGV.size > 0
  port = ARGV[0]
else
  usage = "Usage: #$0 port or #{$0}_port"
  abort usage
end

collect(port)

Usage is the same as any wildcard munin plugin.

  1. Install the raindrops gem
  2. Drop the above code in “/usr/share/munin/plugins/unicorn_connections_”
  3. Create a link from “/etc/munin/plugins/unicorn_connections_UNICORNPORT” to the above script
  4. killall -HUP munin-node

Graphs should start showing up in five or ten minutes. You can always test like so:

$ nc localhost 4949
# munin node at unicorn.example.org
fetch unicorn_connections_6880
active.value 5
queued.value 0
.
quit

Of course, I use Chef and the munin cookbook’s munin_plugin definition, so my application’s cookbook has this additional code:

# required for unicorn_connections_ munin plugin
gem_package "raindrops"

munin_plugin "unicorn_connections_" do
  plugin "unicorn_connections_6880"
  create_file true
end

Wrangling 32bit debs on a 64bit system

Typically, directions for downloading an i386 version of a library for an x86_64 system link to a specific deb package and tell you to download it with wget. A new release of that package often breaks the link, so I wanted to document how to do this using apt. Unfortunately, it looks like apt won’t download a single deb if it can’t resolve dependencies, but aptitude will, so we use them together.

I use a separate sources.list here just to speed up the process, since we need to reset apt’s lists when we’re finished.

# Download 32bit list files from the mirror specified in /tmp/sources.list
apt-get -o=APT::Architecture="i386" -o=Dir::Etc::sourcelist="/tmp/sources.list" -o=Dir::Etc::sourceparts="/dev/null" update
# Download the single library. Set libstdc++5 to whatever library you want
aptitude -o Apt::Architecture=i386 download libstdc++5
# Return apt's lists to their preconfigured state
apt-get update
# Optionally, install the package
dpkg --force-architecture -i libstdc++5_1%3a3.3.6-20~lucid1_i386.deb

Note that if you install the package, it will overwrite the 64bit version of the library if it is installed. 32bit packages meant for 64bit systems, like the ia32-libs package, install to /lib32 and /usr/lib32 to avoid this. You could also extract the package with ‘dpkg -x libstdc++5_1%3a3.3.6-20~lucid1_i386.deb somedir’ and copy the libraries to where you like, then run ‘ldconfig’. The getlibs tool will try to repack debs more appropriately for you, if you like.
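
As a sketch of the extract-and-copy approach (the target directories here are just examples):

# Unpack the deb into a scratch directory instead of installing it
dpkg -x libstdc++5_1%3a3.3.6-20~lucid1_i386.deb extracted
# Copy the 32bit libraries somewhere that won't collide with the 64bit ones
sudo mkdir -p /usr/local/lib32
sudo cp -a extracted/usr/lib/libstdc++.so.5* /usr/local/lib32/
# Tell the dynamic linker about the new path
echo /usr/local/lib32 | sudo tee /etc/ld.so.conf.d/local-lib32.conf
sudo ldconfig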

libvirtError: monitor socket did not show up

Sometimes errors don’t float to the top of stacks well.

Our virtualization stack is pretty automated: we have a custom script that uses vmbuilder to create the guest, register it with libvirt, create first-boot scripts that will have it register with a Chef server, and start the VM. We saw this error today: ‘libvirtError: monitor socket did not show up.: Connection refused’. I commented that my memory contains a lot of libvirt/kvm errors, and many resolutions, but the two don’t always stay connected. I checked the libvirt logs in /var/log/libvirt and even ran libvirt with ‘LIBVIRT_DEBUG=1 libvirtd -v’. When I tried running kvm by hand using the syntax in the logs, but with the -net options removed from the command line, kvm just printed ‘Aborted’. After staring at it for a bit, I noticed that instead of ‘-m 1024’, kvm was trying to run with ‘-m 1073741824’ (1024^3). This was due to a small conversion bug in our custom script.
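
One shortcut that would have saved some staring: libvirt records the full kvm command line for each guest, so you can pull the memory argument out directly (GUESTNAME is a placeholder):

# -m takes megabytes, so -m 1073741824 is a sign of a bytes-vs-megabytes mixup
grep -o -- '-m [0-9]*' /var/log/libvirt/qemu/GUESTNAME.log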

Amazon EC2 Network Subnets

For a project that exists both in Amazon Web Services EC2 US-EAST-1b and another cloud, I wanted to block network traffic between the two to ensure they didn’t affect each other. I started by doing a whois lookup via ARIN for all of the IP addresses we are currently assigned in EC2, and I ultimately got the same list that I found registered to the AMAZO-4 contact with ARIN, with the exception of AMAZON-AES, which I presume is for Amazon Enterprise Solutions. I couldn’t tell you offhand if the same IP blocks are used in other AWS zones.

Network CIDR Netmask ARIN Name
72.44.32.0 /19 255.255.224.0 AMAZON-EC2-2
67.202.0.0 /18 255.255.192.0 AMAZON-EC2-3
75.101.128.0 /17 255.255.128.0 AMAZON-EC2-4
174.129.0.0 /16 255.255.0.0 AMAZON-EC2-5
204.236.128.0 /17 255.255.128.0 AMAZON-EC2-6
184.72.0.0 /15 255.254.0.0 AMAZON-EC2-7
50.16.0.0 /14 255.252.0.0 AMAZON-EC2-8

Here are the IOS commands:

name 72.44.32.0 EC2-2 description AMAZON-EC2-2
name 67.202.0.0 EC2-3 description AMAZON-EC2-3
name 75.101.128.0 EC2-4 description AMAZON-EC2-4
name 174.129.0.0 EC2-5 description AMAZON-EC2-5
name 204.236.128.0 EC2-6 description AMAZON-EC2-6
name 184.72.0.0 EC2-7 description AMAZON-EC2-7
name 50.16.0.0 EC2-8 description AMAZON-EC2-8
object-group network ec2-us-east
   network-object 174.129.0.0 255.255.0.0
   network-object 184.72.0.0 255.254.0.0
   network-object 204.236.128.0 255.255.128.0
   network-object 50.16.0.0 255.252.0.0
   network-object 67.202.0.0 255.255.192.0
   network-object 72.44.32.0 255.255.224.0
   network-object 75.101.128.0 255.255.128.0
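
To double-check the list later, whois can confirm each block is still registered to Amazon (the exact output fields vary by whois server):

# Print the name and CIDR for each EC2 network block
for net in 72.44.32.0 67.202.0.0 75.101.128.0 174.129.0.0 \
    204.236.128.0 184.72.0.0 50.16.0.0 ; do
  whois $net | grep -E 'NetName|CIDR'
done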