Tag Archives: chef

knife, or, my tool is actually a library

The Chef site starts out with, “Chef is a systems integration framework, built to bring the benefits of configuration management to your entire infrastructure.” There is an important point hidden in that description: Chef is not just a CM tool. Of course it can be used as one, and many do, but from its beginning it has been leveraged by others such as Engine Yard inside of their own infrastructure. You can safely bet it will be an integral part of the Opscode platform when released as well.

While I was off dealing with my startup’s acquisition this fall, a chef user wrote knife, a command line client for interacting with a chef server: an impressively simple prototype, made possible and powered by the chef libraries and API. This has happened before with chef; for instance, back in the 0.6 era, an OS X GUI client called Casserole was written by another chef user with an itch to scratch. However, something has happened with knife that is interesting enough that I’d like to point it out specifically: it got mainlined and heavily expanded.

This happened for a handful of reasons. For one, it was simply a great idea. The kind of user who would be attracted to chef as a tool is very likely to be a power user who would rather not spend their time clicking around a graphical interface. It’s much easier to script a command line tool where needed, passing data in and out for quick hacks to your infrastructure. The folks at Opscode saw this, agreed, and set out to add full functionality to it for the upcoming 0.8 release.

What I think is most important is the planning of a full API from the start. From hearing Adam talk about other tools being “first-class citizens” in chef-land, and knowing his experience writing iClassify as an early open source external node tool for puppet, I know this design was intentional. Using iClassify to tell puppet about your nodes was great, but puppet assumed that you only wanted this tool to classify nodes in the way puppet thought about nodes. Consequently, when you wanted to use data in iClassify about your nodes to make decisions about your infrastructure on the fly, you were forced to do it in templates. This created the expected repetition of loading the iClassify library and accessing iClassify in many templates, but it also required you at times to do some fancy footwork to get data between templates when you really wanted puppet to know about the data itself.

Reductive Labs recently announced a dashboard for puppet. I was hoping this meant those barriers had been removed. It certainly creates really nice graphs from your puppet report data. However from the README it looks like you’re still pushing limited data into puppet using the external node interface. Reductive is going to have to expand this interface greatly if dashboard is to have any meaningful node integration benefits that we didn’t already have two years ago with iClassify.

Just as you can see some concepts from other configuration management tools in chef, you can see parts of iClassify. It was a great start and it was disappointing that the puppet community didn’t engage it further. Perhaps it was simply before its time, but I believe it was that there were too few doors into puppet-land to let you really take advantage of and grow external tools.

I think this was the lesson that Opscode learned, and consequently chef was born with an API. With it we can accomplish nearly anything we dream up. What is most exciting about this is that we can do whatever everyone else dreams up. I can’t wait to see what that is.

Installing Chef 0.8 alpha on Ubuntu Karmic

There’s a push to get Chef 0.8 out the door because we’re all anxious for its awesome list of features and fixes, so we’re all hunkering down on fixing bugs. Scott Likens has similar notes, and there’s more to be found in Dan Deleo’s 08boot bootstrap recipe. This should help get you going.

On a fresh Ubuntu Karmic install (a VM makes this easy of course):
# Add the Canonical Ubuntu ‘multiverse’ repository for Java.
sudo vi /etc/apt/sources.list # add multiverse to your ‘deb’ lines if it is not there
sudo apt-get update

# Start Chef Gem bootstrap with some notes
# Note that I don't like to install rubygems from source, so I use the packages instead; this adds a step or two.
sudo apt-get install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb build-essential wget ssl-cert rubygems git-core -y
sudo gem sources -a http://gems.opscode.com
sudo gem sources -a http://gemcutter.org # for nanite
sudo gem install ohai chef json --no-ri --no-rdoc

We now have enough chef to bootstrap ourselves
# Create ~/chef.json:

{
  "bootstrap": {
    "chef": {
      "url_type": "http",
      "init_style": "runit",
      "path": "/srv/chef",
      "serve_path": "/srv/chef",
      "server_fqdn": "localhost"
    }
  },
  "recipes": "bootstrap::server"
}
# End of file

# Create ~/solo.rb:

file_cache_path "/tmp/chef-solo"
cookbook_path "/tmp/chef-solo/cookbooks"
# End of file

mkdir /tmp/chef-solo
cd /tmp/chef-solo
# Get kallistec’s 08boot bootstrap cookbook
git clone git://github.com/danielsdeleo/cookbooks.git
cd cookbooks
git checkout 08boot
# Bootstrap chef
sudo /var/lib/gems/1.8/bin/chef-solo -j ~/chef.json -c ~/solo.rb
# If the bootstrap hangs for more than a minute after "Installing package[couchdb] version 0.10.0-0ubuntu3", hit ctrl+c and run again

Now prepare to install the development versions
# install some development tools
sudo apt-get install rake librspec-ruby -y
sudo gem install cucumber merb-core nanite jeweler uuidtools
# install missing dependencies
sudo apt-get install libxml-ruby thin -y
# get chef from the repository
mkdir ~/src
cd ~/src
git clone git://github.com/opscode/chef.git
cd chef
rake install
# remove the old version of chef
sudo gem uninstall chef -v0.7.14
# patch up some runit paths
sudo sed -i s_chef-_/var/lib/gems/1.8/gems/chef-solr-0.8.0/bin/chef-_ /etc/sv/chef-solr*/run
# allow access to futon for development purposes (http://IPADDRESS:5984/_utils)
sudo sed -i 's/;bind_address = 127.0.0.1/bind_address = 0.0.0.0/' /etc/couchdb/local.ini
sudo apt-get install psmisc # for killall
sudo /etc/init.d/couchdb stop
sudo killall -15 couchdb # stubborn
sudo killall -15 beam.smp # yup
# shut it all down
sudo /etc/init.d/chef-solr stop
sudo /etc/init.d/chef-solr-indexer stop
sudo /etc/init.d/chef-solr-client stop
sudo /etc/init.d/chef-client stop
sudo /etc/init.d/chef-server stop
sudo killall -15 chef-server

Build some data and start up Chef
# start up the integration environment
cd ~/src/chef
sudo rake dev:features
# this will create a database
# now hit ctrl+c
sudo mv /var/lib/couchdb/0.10.0/chef_integration.couch /var/lib/couchdb/0.10.0/chef.couch
sudo chown couchdb:couchdb /var/lib/couchdb/0.10.0/chef.couch
# start it all up
sudo /etc/init.d/couchdb start
sudo /etc/init.d/rabbitmq-server start
sudo /etc/init.d/chef-solr start
sudo /etc/init.d/chef-solr-indexer start
sudo /etc/init.d/chef-server start

Start the web server
# the web server is now a separate application and uses the API to reach the server
sudo cp /tmp/chef_integration/webui.pem /etc/chef
cd ~/src/chef/chef-server-webui
sudo /var/lib/gems/1.8/bin/slice -p 4002

Using knife
From the web interface you can create a client keypair to use with knife. I recommend using ‘view source’ to copy the private key; remember to save it without any leading whitespace, and run knife like so:

OPSCODE_USER='btm' OPSCODE_KEY='/home/btm/btm.key' /var/lib/gems/1.8/bin/knife

If you can’t get it to work, you can always use the webui’s key:

sudo OPSCODE_USER='chef-webui' OPSCODE_KEY='/etc/chef/webui.pem' /var/lib/gems/1.8/bin/knife

Hopefully that is enough to get you going. Jump on #chef on irc.freenode.net or join the chef list if you have any problems. Tickets/bugs/features are tracked in JIRA, and all sorts of other useful information is in the wiki.

Using for loops in Chef

One of the great features of Chef is that you write configurations in Ruby. When I wanted to push a number of configuration files out for Nagios, I initially turned to the Remote Directory resource. However, it could interfere with configuration files created and owned by the Debian package, so I needed to be more specific. In the past with Puppet I had a remote file definition (that uses the file type) for each configuration file. This works fine, but gets repetitive when it doesn’t need to be. With Chef, you can combine a little Ruby with the Remote File resource like so:

for config in [ "contacts.cfg", "contactgroups.cfg" ] do
  remote_file "/etc/nagios3/#{config}" do
    source "#{config}"
    owner "root"
    group "root"
    mode 0644
    notifies :restart, resources(:service => "nagios"), :delayed
  end
end

The benefit of this approach is that it makes your configuration management cleaner and more DRY. This comes at the cost of a little complexity, albeit to a degree that I think is easily understood by reading the code.
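If different files need different attributes, the same pattern extends by driving the loop from a hash instead of an array. Here is a hypothetical sketch in the same style (the second file’s mode is made up for illustration):

```ruby
# Iterate over a hash so each file's mode lives alongside its name.
{ "contacts.cfg" => "0644", "contactgroups.cfg" => "0640" }.each do |config, file_mode|
  remote_file "/etc/nagios3/#{config}" do
    source config
    owner "root"
    group "root"
    mode file_mode
    notifies :restart, resources(:service => "nagios"), :delayed
  end
end
```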

Chef 0.5.2 and Ohai 0.1.4 released

We contributed a lot of work to the latest Chef release, which made it out over the weekend. Most notably we got a lot of FreeBSD support in, and it looks like a few people are going to give that a shot. The release notes are the best source of information about what was added. As we move puppet recipes over to chef, we stumble across pieces of configuration that we’d rather have be a resource, and try to add that support. We’re really excited about what we’re getting into Ohai. I tested support that Ben Black is working on for reading libvirt data through its ruby API, and it’s just going to be awesome. With puppet+iclassify I had some convoluted ways of getting guest information, but this implementation is going to be first class enterprise stuff.

Building scalable operations infrastructure with OSS

I’m the lead systems administrator at Widemile and run operations here. Sometimes I do other things, but operations is most interesting. My linkedin profile may give you an idea of where I’m coming from, but it misses all those late nights working with OSS because I enjoy it. I usually blog on my own site, but that often serves more as a technical journal than a record of what we are up to at Widemile, which will be the differentiator here. As a rule, I’m not a developer. Certain facts may convince you otherwise, but I try to stay out of our product. You’ll start to hear from Peter Crossley, our lead software architect, soon enough. Perhaps some other developers after that. I’ll christen this blog with a round-up of how we’re building our infrastructure at Widemile.

Most recently I’ve been in heavy development on Chef and Ohai. We’ve been using puppet for about a year and a half now. Check out my Infrastructure Engineering slide deck for where we were with that a few months ago. I’ve been happy with it except for a few issues, which ended up requiring mostly major architectural changes to fix. Adam at Opscode has a good post entitled 9 things to like about Chef that outlines some of these differences. There’s a lot of e-drama around Opscode’s decision to write a new product rather than usher changes into puppet. I won’t touch that other than to say that we had problems with puppet that chef fixes.

Almost all of our servers are in configuration management, which means that no one-off work is done on the production servers and all changes are self-documenting. Granted, we’re a small shop and sometimes I’ll do minor debugging on a production server, but any changes do end up in CM.

Our servers are almost all kvm guests under libvirt running on Dell blades. There’s some information about how we got here in a slide deck I made for GSLUG entitled Virtual Infrastructure. Apparently using kvm in production isn’t all that common yet, but since we’re a small shop we’re able to adopt new technology quickly and make the best of it. With the use of vmbuilder, libvirt, kvm and capistrano, we build out new nodes in a matter of minutes. More importantly, it’s just a couple of commands.

Once Chef is integrated with the libvirt API we expect to be able to further simplify our deployment. The idea behind it is that it will be a ghetto version of Solo, which EY built using Chef. Eventually we’ll pull out capistrano. While it’s nice for interacting with multiple machines at once, it really was written for a different purpose than what we use it for. There will be replacement functionality in Chef shortly.

Learning to cook

The chef satire will never die. Adam posted 9 things to like about chef today, which is an expanded and much better version of my original blog post on chef. AJ had an intermediate post that tried to summarize a lot of the controversy and drama. Hopefully that silliness is settling down.

I’ve been coding a lot lately, contributing to both chef and ohai. We’ve been talking about trying to use chef in the NOC at Shmoocon so that next year we can reuse the recipes rather than build the servers again by hand. Most everything runs on borrowed hardware at Shmoocon, so you’re not guaranteed everything is the way you left it a year later. We use FreeBSD for some monitoring at Shmoocon, so I’ve been spending a lot of time getting chef/ohai ready for FreeBSD.

I don’t think I’ve ever contributed to a project to this degree before. Ohloh doesn’t think so either. The last time I can recall really adding code to a project that was more than a couple of files was at an ISP in Maine back in the early ’00s. It was called Panax, and there’s the usual pile of silly ISP shop history. It’s funny that while it’s been sucked into an ISP conglomerate, the old color scheme has been maintained. We had an in-house system for user/account management, written in Perl. It had a web front end so none of the tech support folks had to log in to any of the systems to add, remove or manage users. Usually I’m just writing glue scripts, like a good SA. Regardless, it’s been fun and I’ve been learning a lot about Ruby and rspec.

An SE at my last job (who subscribes to python and whom I still haven’t convinced that CM will change his life) said going into development would be a natural move as I got bored of SA work. Is it that, or is this a shift in what being an SA will mean? Configuration Management is still young, despite cfengine being out for some time now and puppet getting a good following. It may take time for the old SAs to retire and the new deal to take hold. I think as more and more people work in shops with CM implemented, they’ll start to find how hard it is to live without it once you’ve had it. I noticed recently that Slashdot lacks any coverage of Configuration Management in the last few years, but I realize Slashdot is mostly fluffy news these days. While Slashdot is still talking about SCO every day, there is of course talk of new technologies in the new mediums.

The next few months will be exciting as people pick up chef. There are a few very helpful individuals in #chef on freenode who want to see it used and are perfectly willing to fix any bugs you find. So give it a shot and let me know what you think.

configuration management with chef announced

Chef has been announced. Listen to this podcast at Cloud Cafe. There’s no way around comparing puppet and chef. Sure, they’re both configuration management tools. It’s simplest to put it this way:

We’re replacing puppet with chef.

And why? A little while ago I wrote about problems I’ve been having scaling puppet. Off the top of my head, the biggest issues for me working with puppet have been:

  1. Dependency graphs
  2. Limited capabilities of the language (DSL)
  3. Templates are evaluated on the server

Dependency Graphs

There’s talk about vertically scaling puppet, but not a lot about scaling it horizontally. I tend to run everything under puppet. People argue that it’s too much work to put single servers in puppet, and that you should only use it for systems you intend to clone. I disagree. Puppet recipes are self-documenting. The same people who don’t want to take the time to write puppet recipes for the single services are the people you have to beat with a sucker rod to get to document anything. Sometimes if I don’t have the time to put into fully testing a puppet recipe for a new machine, I’ll at least write the recipe as I’m working, to serve both as documentation and as a starting point for if/when I come back to it.

The point is that as I scale out in this fashion, more often puppet will fail with a dependency problem on one run, and be fine on the next.  I asked Luke about this at a BoF at OSCON 2008, and he basically told me that he really only focuses on the problems his paid customers have and was anxious to leave and get a beer. That’s fine, I understand it, but since it does nothing to fix my problem it drove me away from the puppet community.

While in theory having puppet do all this work to resolve dependency issues seems fine, it is more complexity and trouble than it is worth. As a systems administrator I know what the dependencies are. As you build a system, you simply write your recipe in the same order as the steps you’re taking.

Chef takes this idea and runs with it. Recipes are parsed top to bottom. If a package needs to be installed before a service is started, you simply put the package in the recipe first. This not only makes a lot of sense, it makes the dependencies in a complex recipe visually understandable. With puppet you can end up with spaghetti code reminiscent of “goto”, jumping around a number of recipes in an order that’s difficult to understand.
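As a concrete sketch (the package, file, and service names are illustrative, not from a real cookbook), a recipe reads in exactly the order it runs:

```ruby
# Parsed top to bottom: install the package, drop the config, start the service.
package "nagios3"

remote_file "/etc/nagios3/nagios.cfg" do
  source "nagios.cfg"
end

service "nagios3" do
  action [:enable, :start]
end
```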

Language

Before the recent 0.24.6, you could not even do:

if $ram > 1024 {
    $maxclient = 500
}

The support for conditionals was rudimentary. I run into a lot of languages, and the biggest problem I have is remembering how to do the same thing in each one. The puppet language does not do what a lot of other languages do. I didn’t need another language to learn, let alone one written from scratch. It was just silly doing something like:

  # iclassify script adds vmware-guest tag based on facter facts
  $is_vmware = tagged('vmware-guest')
  if $is_vmware {
    include vmware
  }

Chef uses ruby for its recipes. This makes the above stupidly simple with something like:

include_recipe "vmware" if node[:manufacturer] =~ /VMware/

Templates
Puppet evaluates recipes and templates on the server. I ended up with this block of code once when I needed to specify the client node’s IP address in a configuration file:

require '/srv/icagent/lib/iclassify'
ic = IClassify::Client.new("https://iclassify", iclassify_user, iclassify_password)
query = [ "hostname:", hostname].to_s
mip = nil
nodes = ic.search(query)
nodes.each do |node|
  # node.attribs is an array of hashes. keys is 'name' value is 'values'
  node.attribs.each do |attrib|
    if attrib[:name].match(/ipaddress/)
      ip = attrib[:values].to_s
      if ip.match(/10.0.0./)
        mip = ip
        break
      elsif ip.match(/10.0.1./)
        mip = ip
        break
      end
    end
  end
end

This was so much work. Of course with chef you can easily get this information in the recipe, because it’s parsed on the node, let alone the ease of doing it in the template if that’s more appropriate. Since the template is parsed on the client, you grab the information out of variables or directly from the system.
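To make the contrast concrete, here is a plain-Ruby sketch using ERB, the library Chef templates are built on; the node hash and attribute name are stand-ins for illustration, not Chef’s real schema:

```ruby
require 'erb'

# Stand-in for the attribute data a client-side run already has on hand.
node = { :ipaddress => "10.0.0.5" }

# Evaluated on the client, the template reads the node data directly --
# no server-side search client, no fancy footwork between templates.
template = ERB.new("bind_address = <%= node[:ipaddress] %>")
result = template.result(binding)
puts result
# prints "bind_address = 10.0.0.5"
```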

As time goes on I’ll surely write more about using chef. We’re using it in production now, and are happy with it. In the interim, come to #chef on freenode if you have any questions.