mikebabineau engineer

Repo Relocation

Since it looks like EA2D's GitHub account has been deleted (the group doesn't exist anymore), I've moved the (kinda) maintained versions of several open source projects to my personal account.

This includes:

- pingdom-python - Python library for Pingdom's REST API
- loggly-python - Python library for Loggly's REST API
- loggly-cookbook - Chef cookbook for Loggly

Chef Cookbook for Loggly

(originally written for EA2D's engineering blog)

As mentioned in a previous post, we aggregate and store our logs using a service called Loggly.

Since we wrote a library for programmatically managing Loggly inputs and devices, it was only natural for us to integrate it with our Chef deployment.

We've written and open sourced a Chef cookbook for Loggly.

From the README:

Installs the loggly-python library and provides a definition for the configuration of Loggly logging.

More specifically, the logglyconf definition will configure rsyslog to watch a log file and send its lines to a Loggly input. When first run, logglyconf will create the input and authorize the server to publish events to that input.

Developed for and tested on Ubuntu 10.10
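To make the rsyslog side concrete, here's a rough Python sketch that renders the kind of imfile block such a definition sets up: watch one log file and forward its lines to a Loggly TCP input. The tag, state-file name, port, and endpoint are illustrative assumptions, not what the cookbook actually emits.

```python
def rsyslog_snippet(logfile, tag, port, host="logs.loggly.com"):
    """Build an rsyslog imfile block that ships one log file to a TCP input."""
    return "\n".join([
        "$ModLoad imfile",
        "$InputFileName %s" % logfile,
        "$InputFileTag %s:" % tag,
        "$InputFileStateFile state-%s" % tag,
        "$InputRunFileMonitor",
        # '@@' is rsyslog's shorthand for forwarding over TCP ('@' is UDP)
        "if $programname == '%s' then @@%s:%d" % (tag, host, port),
    ])

print(rsyslog_snippet("/var/log/myapp.log", "myapp", 12345))
```

The port number is what ties the file to a particular Loggly input, which is why the definition needs to create the input before it can render the config.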

You can grab it from the Opscode site:

$ knife cookbook site vendor loggly

Or from our GitHub repository:


Continuous Deployment of Ops Configs

A core tenet of DevOps is the notion of "Infrastructure as Code." Provisioning and deployment should be done programmatically, typically with a configuration management (CM) system such as Chef or Puppet.

Jesse Robbins describes the goal well:

“Enable the reconstruction of the business from nothing but a source code repository, an application data backup, and bare metal resources”

While CM tools facilitate this reconstruction, there are some tricks for getting the most out of your implementation. Below are the key points from a lightning talk I gave on this at the last ArchCamp.

The wrong way

Treating the configuration management server as the system of record. A Chef example is creating and modifying roles directly on the Chef server:

$ knife role create myrole
$ knife role edit myrole

This decreases visibility and introduces unnecessary risk. Losing the Chef server is now a major event. Yes, you can mitigate this risk with regular backups, but you still lack visibility into changes.

A better approach

Place your CM data into source control and treat that as your system of record. Changes are committed to source control, then deployed to the CM server. Under Jesse's description, this buckets CM configs as source code rather than application data.

Going back to the Chef example, your workflow would instead look like:

$ git commit -am "added newfeature to myrole"
$ knife role from file roles/myrole.json

This gives you an auditable history of every operational config change.

Deploying via git

But what if instead of deploying Chef changes via knife, you did so using git? Changes to multiple roles or cookbooks could be pushed simultaneously, and with one command.

In other words, this:

$ git commit -am "added newfeature to mycookbook, integrated it with myrole, and added supporting data to mydatabag"
$ knife cookbook upload mycookbook
$ knife role from file roles/myrole.json
$ knife data bag from file mydatabag newitem

Would become:

$ git commit -am "added newfeature to mycookbook, integrated it with myrole, and added supporting data to mydatabag"
$ git push origin master

This type of deployment can be implemented via a process that monitors the git repo and deploys any changes to the Chef server.

A simplified version of the upload code is:

# git_diff is a helper that lists entries changed under the given path
for cookbook in $(git_diff cookbooks); do
    knife cookbook upload "$cookbook"
done

for role in $(git_diff roles); do
    knife role from file "$role"
done

for bag in $(git_diff databags); do
    for item in $(git_diff items "$bag"); do
        knife data bag from file "$bag" "$item"
    done
done
By putting this on a continuous integration server (Jenkins, BuildBot, etc.) and detecting repository changes through polling or post-commit hooks, you implement continuous deployment of your operational configs.
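For illustration, the same dispatch logic can be sketched in Python: map the paths changed in a push to the knife commands that deploy them. The repository layout (cookbooks/&lt;name&gt;/..., roles/&lt;file&gt;.json, databags/&lt;bag&gt;/&lt;item&gt;.json) and the function name are assumptions for the sketch, not part of the original talk.

```python
def knife_commands(changed_paths):
    """Map changed repo paths to the knife commands that deploy them."""
    commands = []
    for path in changed_paths:
        parts = path.split("/")
        if parts[0] == "cookbooks" and len(parts) > 1:
            cmd = "knife cookbook upload %s" % parts[1]
        elif parts[0] == "roles":
            cmd = "knife role from file %s" % path
        elif parts[0] == "databags" and len(parts) == 3:
            bag, item = parts[1], parts[2].rsplit(".", 1)[0]
            cmd = "knife data bag from file %s %s" % (bag, item)
        else:
            continue  # unrelated file; nothing to deploy
        if cmd not in commands:  # dedupe: a cookbook may have many changed files
            commands.append(cmd)
    return commands
```

Feed it the output of `git diff --name-only` between the last deployed commit and HEAD, then run each command in order.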

That wasn't so bad, was it?

Don't forget to check out the slides.


Pingdom is one of several monitoring tools we use at EA2D. Besides alerting us when things go down, we query Pingdom's API to include check status in our dashboards.

The old Pingdom SOAP API was unwieldy and slow. Fortunately, Pingdom released a new, JSON-ified REST API that remedied the problems of its predecessor.

I've written a Python library for this new API and released it as open source. For now, it supports only a subset of available resources, but the framework is there for others to be added easily.


pingdom-python in action

Set up a Pingdom connection:

>>> import pingdom
>>> c = pingdom.PingdomConnection(PINGDOM_USERNAME, PINGDOM_PASSWORD)  # Same credentials you use for the Pingdom website

Create a new Pingdom check:

>>> c.create_check('EA2D Website', 'ea2d.com', 'http')
Check:EA2D Website

Get basic information about a Pingdom check:

>>> check = c.get_all_checks(['EA2D Website'])[0]   # Expects a list, returns a list
>>> check.id
>>> check.status

Get more detailed information about a Pingdom check:

>>> check = c.get_check(210702)  # Look up by check ID
>>> check.lasterrortime

Delete a Pingdom check:

>>> c.delete_check(302632)
{u'message': u'Deletion of check was successful!'}
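Since we feed these statuses into dashboards, the checks end up grouped by state. Here's a pure-Python sketch of that roll-up; the (name, status) pairs stand in for pingdom-python check objects, and the status strings follow Pingdom's "up"/"down" convention:

```python
def dashboard_summary(checks):
    """Group check names by status for display; unknown statuses go to 'other'."""
    summary = {"up": [], "down": [], "other": []}
    for name, status in checks:
        summary.get(status, summary["other"]).append(name)
    return summary
```

In practice you'd build the pairs from `get_all_checks()` and render each bucket in its own dashboard panel.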

Go check it out on GitHub, or install it directly from PyPI:

sudo easy_install pingdom

If you have any questions, drop me a line: michael.babineau@gmail.com.


(originally written for EA2D's engineering blog)

For centralized logging, we use a service called Loggly. We forward our logs to Loggly and aggregate them by application and environment. This gives us a handy web interface for viewing logs across all servers within an application group, and provides some great tools for search, comparison, and alerting.

We send events to Loggly using syslog over TCP, and these events are bucketed based on destination port. For security, Loggly locks down each port to a list of authorized IP addresses.
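For reference, the events themselves are plain syslog lines. A minimal sketch of framing one in the classic BSD-syslog (RFC 3164) style follows; the PRI value, hostname, and tag are illustrative, and in our setup rsyslog does this framing for you:

```python
import time

def syslog_line(message, hostname="app01", tag="myapp"):
    """Frame a message as a BSD-syslog line. PRI 134 = local0.info (16 * 8 + 6)."""
    timestamp = time.strftime("%b %d %H:%M:%S")
    return "<134>%s %s %s: %s" % (timestamp, hostname, tag, message)
```

One line like this per event, written to a TCP socket on the input's port, is all Loggly needs.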

Since we make heavy use of EC2 Auto Scaling Groups, we simply cannot maintain this authorized IP list manually. Additionally, we're constantly launching new applications and environments, so our list of buckets (Loggly "inputs") is in constant flux.

Fortunately, Loggly has exposed a set of administration APIs for managing inputs and authorized devices. Since no library was available, we ended up writing one ourselves (in Python) and releasing it as open source.

Getting the library

You can find it on GitHub:


Or install it from PyPI:

sudo easy_install loggly


This package includes scripts for managing inputs and devices. To use them, simply set up your credentials:

export LOGGLY_USERNAME='someuser'
export LOGGLY_PASSWORD='somepassword'
export LOGGLY_DOMAIN='somesubdomain.loggly.com'

Create an input:

$ loggly-create-input -i testinput -s syslogtcp
Creating input "testinput" of type "syslogtcp"

Add a device to an input:

$ loggly-add-device -i testinput -d <ip-address>
Adding device "<ip-address>" to input "testinput"

Delete a device:

$ loggly-remove-device -d <ip-address>
Removing device "<ip-address>" from all inputs

Delete an input:

$ loggly-delete-input -i testinput
Deleting input testinput