mikebabineau engineer

List Params and Nested Stacks on CloudFormation

While creating a nested stack in CloudFormation, you may see a failure with this cryptic message:

Value of property Parameters must be an object

The full event will look something like this:

        {
            "StackId": "arn:aws:cloudformation:us-west-2:760937633930:stack/my_nested_stack1/32499563-d49c-21e3-a914-302bfc8340a6", 
            "EventId": "MyNestedStack-CREATE_FAILED-1399325327000", 
            "ResourceStatus": "CREATE_FAILED", 
            "ResourceType": "AWS::CloudFormation::Stack", 
            "Timestamp": "2014-05-05T21:28:47Z", 
            "ResourceStatusReason": "Value of property Parameters must be an object", 
            "StackName": "my_nested_stack1", 
            "PhysicalResourceId": null, 
            "LogicalResourceId": "MyNestedStack"
        }

Not very helpful. And a Google search turned up nothing.

Chances are, you're trying to pass a list as a parameter for a child stack. Parameter objects can only accept Strings and Numbers, not lists.

You may be passing a list inadvertently with a CommaDelimitedList parameter in the parent:

{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : "Launches a nested stack",

  "Parameters" : {
    "TemplateUrl" : {
      "Description" : "URL for S3-hosted CloudFormation template",
      "Type" : "String"
    },
    "FooList" : {
      "Description" : "List of Foos (subnets, availability zones, whatever)",
      "Type" : "CommaDelimitedList"
    }
  },

  "Resources" : {
    "MyNestedStack" : {
      "Type" : "AWS::CloudFormation::Stack",
      "Properties" : {
        "TemplateURL" : { "Ref" : "TemplateUrl" },
        "Parameters" : {
          "FooList" : { "Ref" : "FooList" }
        }
      }
    }
  }

}

The problem here is that CloudFormation has already converted FooList from a string to a list.

In this case, you can simply defer parsing by treating FooList as a String in the parent:

{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : "Launches a nested stack",

  "Parameters" : {
    "TemplateUrl" : {
      "Description" : "URL for S3-hosted CloudFormation template",
      "Type" : "String"
    },
    "FooList" : {
      "Description" : "List of Foos (subnets, availability zones, whatever)",
      "Type" : "String"
    }
  },

  "Resources" : {
    "MyNestedStack" : {
      "Type" : "AWS::CloudFormation::Stack",
      "Properties" : {
        "TemplateURL" : { "Ref" : "TemplateUrl" },
        "Parameters" : {
          "FooList" : { "Ref" : "FooList" }
        }
      }
    }
  }

}

A more general solution is to assemble the list into a string with Fn::Join and pass that through instead:

{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : "Launches a nested stack",

  "Parameters" : {
    "TemplateUrl" : {
      "Description" : "URL for S3-hosted CloudFormation template",
      "Type" : "String"
    },
    "FooList" : {
      "Description" : "List of Foos (subnets, availability zones, whatever)",
      "Type" : "CommaDelimitedList"
    }
  },

  "Resources" : {
    "MyNestedStack" : {
      "Type" : "AWS::CloudFormation::Stack",
      "Properties" : {
        "TemplateURL" : { "Ref" : "TemplateUrl" },
        "Parameters" : {
          "FooList" : {"Fn::Join" : [ ",", { "Ref" : "FooList" }] }
        }
      }
    }
  }

}
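
On the child side, declaring FooList as a CommaDelimitedList makes CloudFormation split the joined string back into a list. Here's a minimal sketch of a hypothetical child template (the queue resource and output are just illustrations):

{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : "Child stack (hypothetical sketch)",

  "Parameters" : {
    "FooList" : {
      "Description" : "List of Foos, received as a comma-delimited string",
      "Type" : "CommaDelimitedList"
    }
  },

  "Resources" : {
    "MyQueue" : {
      "Type" : "AWS::SQS::Queue"
    }
  },

  "Outputs" : {
    "FirstFoo" : {
      "Description" : "The first element of FooList",
      "Value" : { "Fn::Select" : [ "0", { "Ref" : "FooList" } ] }
    }
  }
}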

Hope this helps!

Building Docker-Capable Machine Images

Docker allows you to create lightweight and portable containers that encapsulate any application. Your app and its runtime environment are packaged together. Starting your app requires only Docker and your container.

Docker installation is simple, but takes a non-trivial amount of time to complete. Baking Docker into your machine image has the desired effect of minimizing provisioning time, but image creation is typically a hassle.

Enter Packer. Packer simplifies the creation of machine images for EC2, DigitalOcean, Vagrant, and many other virtual environments. With a basic Packer template, you can create Docker-capable images with a single command.

Here is our Packer template:

{
  "variables": {
    "docker_version": "0.9.1"
  },

  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-c8bed2f8",
      "instance_type": "m1.small",
      "ssh_username": "ubuntu",
      "ami_name": "ubuntu-12.04-docker-{{isotime | clean_ami_name}}",
      "tags": {
        "Release": "12.04 LTS"
      }
    }
  ],

  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "# Source: http://docs.docker.io/en/latest/installation/ubuntulinux/#ubuntu-precise",
        "sudo apt-get update",
        "sudo apt-get install -y linux-image-generic-lts-raring linux-headers-generic-lts-raring",
        "sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9",
        "sudo sh -c 'echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list'",
        "sudo apt-get update",
        "sudo apt-get -y install lxc-docker={{user `docker_version`}}"
      ]
    }
  ]
}

We take a base Ubuntu 12.04 LTS image and install Docker on it (per the official guide) using the script defined in provisioners. Other provisioners are supported: you could swap the shell script out for, or supplement it with, Chef, Ansible, or another supported provisioner.
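
For instance, here's a minimal sketch swapping the shell script for Packer's ansible-local provisioner (playbook.yml is a placeholder):

"provisioners": [
  {
    "type": "ansible-local",
    "playbook_file": "playbook.yml"
  }
]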

This template will create an EC2 AMI. To build images for other platforms, simply replace the builder with (or add) one for another provider.

Note we specify the Docker version in a variable. Variables can be used throughout the template with {{user `var_name`}}.
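
You can also override a variable at build time without editing the template:

$ packer build -var 'docker_version=0.9.1' docker.json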

Before we build our image on EC2, we'll need to export our AWS credentials as environment variables:

$ export AWS_ACCESS_KEY_ID="your_access_key"
$ export AWS_SECRET_ACCESS_KEY="your_secret_key"

To kick off the build, we invoke packer build:

$ packer build docker.json
amazon-ebs output will be in this color.

==> amazon-ebs: Creating temporary keypair: packer 5234c1f7-b8df-871e-9617-61982c79fe01
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
    amazon-ebs: Instance ID: i-bef803c2
==> amazon-ebs: Waiting for instance (i-bef803c2) to become ready...
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with shell script: /var/folders/pk/v470lvv12bl_7f3r041ccbc40000gn/T/packer-shell287742457
    amazon-ebs: Get:1 http://security.ubuntu.com precise-security Release.gpg [198 B]
    amazon-ebs: Hit http://archive.ubuntu.com precise Release.gpg
    amazon-ebs: Get:2 http://security.ubuntu.com precise-security Release [49.6 kB]
[... CUT ...]
    amazon-ebs: docker start/running, process 11064
    amazon-ebs: Setting up lxc-docker (0.9.1) ...
    amazon-ebs: Processing triggers for libc-bin ...
    amazon-ebs: ldconfig deferred processing now taking place
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: ubuntu-12.04-docker-2014-03-26T00-27-35Z
    amazon-ebs: AMI: ami-db237fec
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Adding tags to AMI (ami-db237fec)...
    amazon-ebs: Adding tag: "Release": "12.04 LTS"
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:

us-west-2: ami-db237fec

For EC2, you can copy the AMI to other regions using the AWS Console or awscli:

$ aws ec2 copy-image --source-image-id ami-db237fec --source-region us-west-2 --region us-east-1
{
    "ImageId": "ami-ca232495"
}

Typical build times are ~5-15 minutes, but this could be improved by using a newer release of Ubuntu. (Docker requires a newer kernel than ships with Ubuntu 12.04.) Cross-region copy times are quick, typically under a minute.

With your new AMI, you should now be able to provision Docker hosts in just a minute or two.
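
For example, with awscli (the key pair and security group names are placeholders):

$ aws ec2 run-instances --region us-west-2 --image-id ami-db237fec --instance-type m1.small --key-name my-key --security-groups my-sg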

Multi-Region Gotcha on Elastic Beanstalk

It's not in the current Elastic Beanstalk documentation, but you can't create a new application version from an S3 file hosted in a different region. Attempts to do so will return this error:

$ aws elasticbeanstalk create-application-version --region ap-southeast-1 --application-name myapp --version-label myversion --source-bundle '{"S3Bucket":"mybuilds", "S3Key":"myapp-myversion.war"}'
{
    "Errors": [
        {
            "Message": "Unable to download from S3 location (Bucket: mybuilds  Key: myapp-myversion.war). Reason: Moved Permanently", 
            "Code": "InvalidParameterCombination", 
            "Type": "Sender"
        }
    ], 
    "ApplicationVersion": {}, 
    "ResponseMetadata": {
        "RequestId": "bfaf70b6-0aae-11e3-ae62-0d8638135266"
    }
}

If you want to create an application version in multiple regions, you'll need a location-constrained bucket for each region. It's a good pattern to include the region in the bucket name:

$ aws s3 create-bucket --bucket mybuilds-ap-southeast-1 --create-bucket-configuration '{"LocationConstraint":"ap-southeast-1"}'
$ aws s3 create-bucket --bucket mybuilds-eu-west-1 --create-bucket-configuration '{"LocationConstraint":"eu-west-1"}'
$ aws s3 create-bucket --bucket mybuilds-sa-east-1 --create-bucket-configuration '{"LocationConstraint":"sa-east-1"}'
...
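
Then upload each build to every regional bucket, e.g.:

$ aws s3 cp myapp-myversion.war s3://mybuilds-ap-southeast-1/myapp-myversion.war
$ aws s3 cp myapp-myversion.war s3://mybuilds-eu-west-1/myapp-myversion.war
...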

Now, just use the regional bucket for each create-application-version call:

$ aws elasticbeanstalk create-application-version --region ap-southeast-1 --application-name myapp --version-label myversion --source-bundle '{"S3Bucket":"mybuilds-ap-southeast-1", "S3Key":"myapp-myversion.war"}'
{
    "ApplicationVersion": {
        "ApplicationName": "myapp", 
        "VersionLabel": "myversion", 
        "SourceBundle": {
            "S3Bucket": "mybuilds-ap-southeast-1", 
            "S3Key": "myapp-myversion.war"
        }, 
        "DateUpdated": "2013-08-21T22:12:32.738Z", 
        "DateCreated": "2013-08-21T22:12:32.738Z"
    }
}

Icinga-PagerDuty Integration via Chef

I wrote a Chef recipe for enabling PagerDuty support in Icinga. With luck, it'll be merged into Marius Ducea's icinga cookbook. The code is here.

Usage

Configure PagerDuty

Add a service for Icinga:

  1. Go to https://your-domain.pagerduty.com/services/new
  2. Set the service type to "Nagios"
  3. Add the service

Configure your monitoring node

  1. Add the icinga::pagerduty recipe to your role

    name "monitoring"
    description "Monitoring server"
    run_list(
      "recipe[icinga]",
      "recipe[icinga::pagerduty]"
    )    
    
  2. Get the new PagerDuty service's API key

  3. Copy it into your node attributes:

    default_attributes({
      :icinga => {
        :pagerduty => {
          :service_key => "318e318e318e318e318e318e318ead29cf"
        }
      }
    })
    
  4. Run chef-client

Watch alerts show up in PagerDuty

You should now see the alerts appear in PagerDuty.

PagerDuty will automatically resolve these incidents as Icinga sends recovery notifications. More details here.

Delay Queues in Redis (with Grails example)

I needed a way to handle delayed processing of messages in a distributed system. Since I already had Redis running (but no message queue other than Kafka), I used a sorted set as a simple delay queue.

The basic approach is to insert each message into the sorted set with a score equal to the Unix time the message should become available for processing ("ready").

redis> ZADD delayqueue <future_timestamp> "message"

Get all "ready" messages with a range query from (time) zero to now, then delete the messages. To avoid multiple processing and lost messages, run this in a transaction:

redis> MULTI
redis> ZRANGEBYSCORE delayqueue 0 <current_timestamp>
redis> ZREMRANGEBYSCORE delayqueue 0 <current_timestamp>
redis> EXEC
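
For example, queuing a hypothetical message "job:42" to become ready at Unix time 1400000000, then draining the queue at some later time 1400000300:

redis> ZADD delayqueue 1400000000 "job:42"
(integer) 1
redis> MULTI
OK
redis> ZRANGEBYSCORE delayqueue 0 1400000300
QUEUED
redis> ZREMRANGEBYSCORE delayqueue 0 1400000300
QUEUED
redis> EXEC
1) 1) "job:42"
2) (integer) 1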

Note that this implementation requires messages to be unique per queue: sorted set members are unique, so re-adding an existing message simply updates its score (i.e., its delay).

Here's a quick implementation as a Grails service:

import redis.clients.jedis.Jedis

/**
 * Handles the delaying of queued messages for later retrieval.
 */
class DelayQueueService {
    def redisService
    
    /**
     * Queue a message for later retrieval. Messages are unique per queue and 
     * are deleted upon retrieval. If a given message already exists, it is 
     * updated with the new delay.
     *
     * @param queue Queue name
     * @param message
     * @param delay Time in seconds the message should be delayed
     */
    def queueMessage(String queue, String message, Integer delay) {
        def time = System.currentTimeMillis()/1000 + delay

        redisService.withRedis { Jedis redis ->
            redis.zadd(queue, time, message)
        }
    }

    /**
     * Retrieve messages that are no longer delayed. Deletes messages on read.
     *
     * @param queue Queue name
     */
    def getMessages(String queue) {
        def startTime = 0
        def endTime = System.currentTimeMillis() / 1000

        redisService.withRedis { Jedis redis ->
            def t = redis.multi()
            def response = t.zrangeByScore(queue, startTime, endTime)
            t.zremrangeByScore(queue, startTime, endTime)
            t.exec()
            response.get()
        }
    }
}
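
Usage from another Grails artifact might look like this (the queue name and message are illustrative):

// Delay a message for five minutes
delayQueueService.queueMessage("emails", "send:welcome:42", 300)

// Later (e.g. from a scheduled job), fetch any messages that are now ready
delayQueueService.getMessages("emails").each { message ->
    // process message
}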