
Storing AWS Credentials on an EBS Snapshot Securely

Thanks to reader Ewout and his comment on my article How to Keep Your AWS Credentials on an EC2 Instance Securely for suggesting an additional method of transferring credentials: via a snapshot. It’s similar to burning credentials into an AMI, but easier to do and less prone to accidental inclusion in the application’s AMI.

Read on for a discussion of how to implement this technique.

How to Store AWS Credentials on an EBS Snapshot

This is how to store a secret on an EBS snapshot. You do this only once, or whenever you need to change the secret.

We’re going to automate as much as possible to make it easy to do. Here’s the command that launches an instance with a newly created 1GB EBS volume; its user-data script formats the volume, mounts it, and sets up the root user to be accessible via ssh and scp. The new EBS volume will not be deleted when the instance is terminated.

$ ec2-run-instances -b /dev/sdf=:1:false -t m1.small -k \
my-keypair -g default ami-6743ae0e -d '#! /bin/bash
yes | mkfs.ext3 /dev/sdf
mkdir -m 000 /secretVol
mount -t ext3 -o noatime /dev/sdf /secretVol
cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/'
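
The commands later in this article refer to the shell variables $instance and $secretVolume, which are not set automatically. Here is one way you might capture them, a sketch assuming the default text output of the EC2 API tools (the instance ID is the second field of the INSTANCE line; the attached volume ID is the third field of the BLOCKDEVICE line, as in the ec2-describe-instances output shown later in this article):

# the instance ID appears in the INSTANCE line of the ec2-run-instances output
$ instance=i-xxxxxxxx
# while the instance is still running, grab the ID of the 1GB volume attached as /dev/sdf
$ secretVolume=$(ec2-describe-instances $instance | \
awk '$1 == "BLOCKDEVICE" && $2 == "/dev/sdf" { print $3 }')
$ echo "instance=$instance secretVolume=$secretVolume"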

We set up the root user to be accessible via ssh and scp so that we can copy the secrets directly onto the EBS volume as root. Here’s how:

$ ls -l
total 24
-r--r--r-- 1 shlomo  shlomo  916 Jun 20  2010 cert-NT63JNE4VSDEMH6VHLHBGHWV3DRFDECP.pem
-r--------  1 shlomo  shlomo   90 Jun  1  2010 creds
-r-------- 1 shlomo  shlomo  926 Jun 20  2010 pk-NT63JNE4VSDEMH6VHLHBGHWV3DRFDECP.pem
$ scp -i /path/to/id_rsa-my-keypair * root@174.129.83.237:/secretVol/

Our secret is now on the EBS volume, visible only to the root user.

We’re almost done. Of course you want to test that your application can access the secret appropriately. Once you’ve done that you can terminate the instance – don’t worry, the volume will not be deleted due to the “:false” specification in our launch command.

$ ec2-terminate-instances $instance
$ ec2-describe-volumes
VOLUME	vol-7ce48a15	1		us-east-1b	available	2010-07-18T17:34:01+0000
VOLUME	vol-7ee48a17	15	snap-5e4bec36	us-east-1b	deleting	2010-07-18T17:34:02+0000

Note that the root EBS volume is being deleted but the new 1GB volume we created and stored the secret on is intact.

Now we’re ready for the final two steps:
Snapshot the volume with the secret:

$ ec2-create-snapshot $secretVolume
SNAPSHOT	snap-2ec73045	vol-7ce48a15	pending	2010-07-18T18:05:39+0000		540528830757	1

And, once the snapshot completes, delete the volume:

$ ec2-describe-snapshots -o self
SNAPSHOT	snap-2ec73045	vol-7ce48a15	completed	2010-07-18T18:05:40+0000	100%	540528830757	1
$ ec2-delete-volume $secretVolume
VOLUME	vol-7ce48a15
# save the snapshot ID
$ secretSnapshot=snap-2ec73045

Now you have a snapshot $secretSnapshot with your credentials stored on it.

How to Use Credentials Stored on an EBS Snapshot

Of course you can create a new volume from the snapshot, attach the volume to your instance, mount the volume to the filesystem, and access the secrets via the root user. But here’s a way to do all that at instance launch time:

$ ec2-run-instances -b /dev/sdf=$secretSnapshot -t m1.small -k \
my-keypair -g default ami-6743ae0e -d '#! /bin/bash
mkdir -m 000 /secretVol
mount -t ext3 -o noatime /dev/sdf /secretVol
# make sure it gets remounted if we reboot
echo "/dev/sdf /secretVol ext3 noatime 0 0" > /etc/fstab'

This one-liner uses the -b option of ec2-run-instances to specify that a new volume be created from $secretSnapshot and attached to /dev/sdf; this volume will be automatically deleted when the instance terminates. The user-data script sets up the filesystem mount point, mounts the volume there, and ensures that the volume will be remounted if the instance reboots.
Check it out, a new volume was created for /dev/sdf:

$ ec2-describe-instances
RESERVATION	r-e4f2608f	540528830757	default
INSTANCE	i-155b857f	ami-6743ae0e			pending	my-keypair	0		m1.small	2010-07-19T15:51:13+0000	us-east-1b	aki-5f15f636	ari-d5709dbc	monitoring-disabled					ebs
BLOCKDEVICE	/dev/sda1	vol-8a721be3	2010-07-19T15:51:22.000Z
BLOCKDEVICE	/dev/sdf	vol-88721be1	2010-07-19T15:51:22.000Z

Let’s make sure the files are there. SSHing into the instance (as the ubuntu user) we then see:

$ ls -la /secretVol
ls: cannot open directory /secretVol: Permission denied
$ sudo ls -l /secretVol
total 28
-r--------  1 root root   916 2010-07-18 17:52 cert-NT63JNE4VSDEMH6VHLHBGHWV3DRFDECP.pem
-r--------  1 root root    90 2010-07-18 17:52 creds
dr--------  2 root   root   16384 2010-07-18 17:42 lost+found
-r--------  1 root root   926 2010-07-18 17:52 pk-NT63JNE4VSDEMH6VHLHBGHWV3DRFDECP.pem

Your application running on the instance (you’ll install it by adding to the user-data script, right?) will need root privileges to access those secrets.
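
How your application reads the secrets is up to you. As one illustration only (the layout of the creds file here is an assumption, not something defined in this article), you might append lines like these to the user-data script, which already runs as root, so the application inherits the keys from its environment:

# hypothetical: assume /secretVol/creds holds the access key ID on line 1
# and the secret access key on line 2
export AWS_ACCESS_KEY_ID=$(sed -n '1p' /secretVol/creds)
export AWS_SECRET_ACCESS_KEY=$(sed -n '2p' /secretVol/creds)
# start your application here so it picks up the credentials
# /usr/local/bin/myapp &    (hypothetical path)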


Creating Consistent Snapshots of a Live Instance with XFS on a Boot-from-EBS AMI

Eric Hammond has taught us how to create consistent snapshots of EBS volumes. Amazon has allowed us to use EBS snapshots as AMIs, providing a persistent root filesystem. Wouldn’t it be great if you could use both of these techniques together, to take a consistent snapshot of the root filesystem without stopping the instance? Read on for my instructions on how to create an XFS-formatted boot-from-EBS AMI, which allows consistent live snapshots of the root filesystem.

The technique presented below owes its success to the Canonical Ubuntu team, who created a kernel image that already contains XFS support. That’s why these instructions use the official Canonical Ubuntu 9.10 Karmic Koala AMI – because it has XFS support built in. There may be other AKIs out there with XFS support built in – if so, the technique should work with them, too.

How to Do It

The general steps are as follows:

  1. Run an instance and set it up the way you like.
  2. Create an XFS-formatted EBS volume.
  3. Copy the contents of the instance’s root filesystem to the EBS volume.
  4. Unmount the EBS volume, snapshot it, and register it as an AMI.
  5. Launch an instance of the new AMI.

More details on each of these steps follows.

1. Run an instance and set it up the way you like.

As mentioned above, I use the official Canonical Ubuntu 9.10 Karmic Koala AMI (currently ami-1515f67c for 32-bit architecture – see the table on Alestic.com for the most current Ubuntu AMI IDs).

ami=ami-1515f67c
security_groups=default
keypair=my-keypair
instance_type=m1.small
ec2-run-instances $ami -t $instance_type -g $security_groups -k $keypair

Wait until the ec2-describe-instances command shows the instance is running and then ssh into it:

ssh -i my-keypair ubuntu@ec2-1-2-3-4.amazonaws.com

Now that you’re in, set up the instance’s root filesystem the way you want. Don’t forget that you probably want to run

sudo apt-get update

to allow you to pull in the latest packages.

In our case we’ll want to install ec2-consistent-snapshot, as per Eric Hammond’s article:

codename=$(lsb_release -cs)
echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu $codename main" | sudo tee /etc/apt/sources.list.d/alestic-ppa.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot
sudo PERL_MM_USE_DEFAULT=1 cpan Net::Amazon::EC2

2. Create an XFS-formatted EBS volume.

First, install the XFS tools:

sudo apt-get install -y xfsprogs

These utilities allow you to format filesystems using XFS and to freeze and unfreeze the XFS filesystem. They are not necessary in order to read from XFS filesystems, but we want these programs installed on the AMI we create because they are used in the process of creating a consistent snapshot.
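
ec2-consistent-snapshot drives these utilities for you, but it helps to see why they matter: the heart of a consistent snapshot is a freeze/thaw around the snapshot call. Roughly, the idea looks like this (a sketch of the concept, not the tool’s actual implementation):

# pause writes so the snapshot captures a consistent filesystem state
sudo xfs_freeze -f /vol
# ...initiate the EBS snapshot here (ec2-create-snapshot, from a machine with the API tools)...
# thaw the filesystem as soon as the snapshot call returns
sudo xfs_freeze -u /vol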

Next, create an EBS volume in the availability zone your instance is running in. I use a 10GB volume, but you can use any size and grow it later using this technique. This command is run on your local machine:

ec2-create-volume --size 10 -z $zone
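
The commands in this step assume the shell variables $zone, $instance, and $volume. One way to set them is to capture the IDs as you go; here is a sketch, assuming the default text output of the EC2 API tools (the volume ID is the second field of the VOLUME line). The create-volume call is the same command as above, just wrapped to capture its output:

zone=us-east-1b       # example: the availability zone your instance from step 1 runs in
instance=i-xxxxxxxx   # example: that instance's ID, from ec2-describe-instances
volume=$(ec2-create-volume --size 10 -z $zone | awk '{ print $2 }')
echo $volume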

Wait until the ec2-describe-volumes command shows the volume is available and then attach it to the instance:

ec2-attach-volume $volume --instance $instance --device /dev/sdh

Back on the instance, format the volume with XFS:

sudo mkfs.xfs /dev/sdh
sudo mkdir -m 000 /vol
sudo mount -t xfs /dev/sdh /vol

Now you should have an XFS-formatted EBS volume, ready for you to copy the contents of the instance’s root filesystem.

3. Copy the contents of the instance’s root filesystem to the EBS volume.

Here’s the command to copy over the entire root filesystem, preserving soft-links, onto the mounted EBS volume – but ignoring the volume itself:

sudo rsync -avx --exclude /vol / /vol

My command reports that it copied about 444 MB to the EBS volume.

4. Unmount the EBS volume, snapshot it, and register it as an AMI.

You’re ready to create the AMI. On the instance do this:

sudo umount /vol

Now, back on your local machine, create the snapshot:

ec2-create-snapshot $volume
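
The register command below expects the new snapshot’s ID in $snapshot. You can capture it directly; a sketch, again assuming the default text output, where the snapshot ID is the second field of the SNAPSHOT line:

# same command as above, wrapped to capture the snapshot ID
snapshot=$(ec2-create-snapshot $volume | awk '{ print $2 }')
echo $snapshot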

Once ec2-describe-snapshots shows the snapshot is 100% complete, you can register it as an AMI. The AKI and ARI values used here should match the AKI and ARI that the instance is running – in this case, they are the default Canonical AKI and ARI for this AMI. Note that I give a descriptive “name” and “description” for the new AMI – this will make your life easier as the number of AMIs you create grows. Another note: some AMIs (such as the Ubuntu 10.04 Lucid AMIs) do not have a ramdisk, so skip the --ramdisk $ramdisk arguments if you’ve used such an AMI.

kernel=aki-5f15f636
ramdisk=ari-0915f660
description="Ubuntu 9.10 Karmic formatted with XFS"
ami_name=ubuntu-9.10-32-bit-ami-1515f67c-xfs
ec2-register --snapshot $snapshot --kernel $kernel --ramdisk $ramdisk --description "$description" --name $ami_name --architecture i386 --root-device-name /dev/sda1 --block-device-mapping /dev/sda2=ephemeral0

This displays the newly registered AMI ID – let’s say it’s ami-00000000.

5. Launch an instance of the new AMI.

Here comes the moment of truth. Launch an instance of the newly registered AMI:

ami=ami-00000000
security_groups=default
keypair=my-keypair
instance_type=m1.small
ec2-run-instances $ami -t $instance_type -g $security_groups -k $keypair

Again, wait until ec2-describe-instances shows it is running and ssh into it:

ssh -i my-keypair ubuntu@ec2-5-6-7-8.amazonaws.com

Now, on the instance, you should be able to see that the root filesystem is XFS with the mount command. The output should contain:

/dev/sda1 on / type xfs (rw)
...

We did it! Let’s create a consistent snapshot of the root filesystem. Look back into the output of ec2-describe-instances to determine the volume ID of the root volume for the instance.
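
If you would rather script that lookup than eyeball it, here is a sketch that pulls the root volume’s ID out of the BLOCKDEVICE line of the describe output (run it wherever your EC2 API tools are configured, with $instance set to the new instance’s ID):

# the root device's volume ID is the third field of its BLOCKDEVICE line
volumeID=$(ec2-describe-instances $instance | \
awk '$1 == "BLOCKDEVICE" && $2 == "/dev/sda1" { print $3 }')
echo $volumeID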

sudo ec2-consistent-snapshot --aws-access-key-id $aws_access_key_id --aws-secret-access-key $aws_secret_access_key --xfs-filesystem / $volumeID

The command should display the snapshot ID of the snapshot that was created.

Using ec2-consistent-snapshot and an XFS-formatted EBS AMI, you can create snapshots of the running instance without stopping it. Please comment below if you find this helpful, or with any other feedback.


Cool Things You Can Do with Shared EBS Snapshots

I’ve been awaiting this feature for a long time: Shared EBS Snapshots. Here’s a brief intro to using the feature, and some cool things you can do with shared snapshots. I also offer predictions about things that will appear as this feature gains adoption among developers.

How to Share an EBS Snapshot

Really, it’s easy. The first thing you’ll need to know is the Account Number of the user with whom you want to share the snapshot. If you want to make the snapshot public then you don’t need this. The account number can be found on the Your Account > Account Activity page, in small numbers at the top right of the page.


The person with whom you want to share the snapshot (you are the sharer, they are the “sharee”?) should tell you this 12-digit number. Don’t worry, sharee, it’s not a secret.

Once you have the sharee’s account number you, the sharer, go into the AWS Management Console and choose the Snapshots item. Find the snapshot you want to share and right-click on it, choosing “Snapshot Permissions”. You’ll get the following dialog:

Fill in the sharee’s account number, without the separating dashes, into the dialog, and hit “Save”. It should only take a few seconds and… presto! The snapshot should be visible in the sharee’s AWS Management Console Snapshots page.
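
If you prefer the command line to the console, the EC2 API tools can set the same permission on a snapshot. Here is a sketch; the IDs are placeholders, and you should check the help text of ec2-modify-snapshot-attribute in your version of the tools for the exact flag spelling:

# grant the sharee's 12-digit account (example: 123456789012) permission
# to create volumes from your snapshot (substitute your snapshot ID)
ec2-modify-snapshot-attribute snap-xxxxxxxx -c --add 123456789012
# confirm the snapshot's createVolumePermission setting
ec2-describe-snapshot-attribute snap-xxxxxxxx -c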

Cool Things You Can Do with Shared Snapshots

Update 27 September 2009: Before you share snapshots publicly, read Eric Hammond’s warning about the dangers of doing so.

Easily move data between development, testing, and production

You’ve been keeping separate AWS accounts for your production environment, your testing environment, and your development environment, right? Right? Well, in case you haven’t, you no longer have any excuse not to do so. You can now share your database, your HDFS volumes (if you use Cloudera’s Hadoop distribution with EBS support), and anything else of significant size between these separate accounts. No more “tar, gzip, split into < 5GB chunks, upload to S3” and “download from S3, concatenate, untar-gzip”. Your data is ready to go with the newly-created volume.
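
On the sharee’s side, using a shared snapshot works just like using one of your own: create a volume from it in the availability zone of the instance that will use it, attach it, and mount it. A minimal sketch, with placeholder IDs you would substitute with your own:

# in the sharee's account
ec2-create-volume --snapshot snap-xxxxxxxx -z us-east-1b
ec2-attach-volume vol-xxxxxxxx --instance i-xxxxxxxx --device /dev/sdg
# then, on the instance:
#   sudo mkdir -m 000 /shared
#   sudo mount /dev/sdg /shared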

Share entire setups for troubleshooting and support

If you support a product that is deployed in EC2 you no longer need to jump through hoops to get access to your customer’s files when there’s a problem. Simply have them put the relevant files into an EBS volume, snapshot it, and share the snapshot with you.

Deliver your application in a more granular manner

Until today you delivered your application as an AMI – perhaps even a DevPay AMI – and you may not have given your customers root access. But, if your application used less than 100% of an instance’s CPU, the customer was stuck paying for an entire CPU. Now, you can distribute your applications as a shared snapshot instead, and your customers will be free to use the rest of the instance’s CPU. You’ll just need to build a way to manage access, only allowing authorized customers to see the snapshot.

Deliver your customer’s results in a more usable format

If you run a service that provides large amounts of data, you no longer need to use S3 to share the results. Until today you had to store the results in S3, and your customer needed to retrieve the results from S3 in order to use them. No longer: now you can provide a shared snapshot of the results, and the customer can access them via their filesystem more simply. “The shared snapshot is the new bucket.”

Mount a volume created from a shared snapshot at startup

In a previous article I explained how to automatically mount an EBS volume created from a snapshot during the instance’s startup sequence. I provided a script that gets the snapshot ID via the user-data and does all the rest automatically. Now you can also use snapshots that have been shared.

Update 25 September 2009: Share entire machines

Reader Robert Staveley (Tom) comments below about his use for shared snapshots: sharing entire machines – boot code and everything – between development, testing, and production accounts. Using the technique to boot an instance from an EBS volume, he points out that the entire bootable hard drive and all applications (even beyond 10GB) can be shared between these accounts.

Things to Expect in the Future

Shared snapshots are still a very new feature, but here are some things I expect to happen now that this is possible.

  • More UIs that support snapshot sharing. Today the AWS Management Console is the only UI that allows you to share a snapshot, but ElasticFox will be adding this capability Real Soon Now, and I am sure others will as well.
  • Alternatives to AMIs. AMIs have many limitations, such as the 10GB maximum size, that can be circumvented using a technique I described to boot from an EBS volume. I expect to see OS distributions packaged as a shared EBS snapshot. These distributions could all share a common AMI containing just enough code to create a volume from the shared distribution snapshot, mount it, and boot from it. No more headaches bundling an AMI – just share a new bootable EBS snapshot.
    Update 3 December 2009: This prediction has come true, with AWS’s release of EBS-backed AMIs.
  • Payment gateway services for managing access to shared snapshots. Now that you’re distributing software as a shared snapshot you’ll need to manage access to the snapshot, limiting it to authorized customers. You might build that system yourself today, but soon we’ll see third-party services that do this for you.
    Update 19 April 2012: This prediction has come true, with AWS’s release of the AWS Marketplace.

Do you have other cool uses or predictions for shared snapshots? Please comment!


Copying ElasticFox Tags from One Browser to Another

The ElasticFox Firefox extension allows you to tag EC2 instances, EBS volumes, EBS snapshots, Elastic IPs, and AMIs. ElasticFox’s tags are stored locally within Firefox, so if you use ElasticFox from more than one browser your tags from one browser are not visible in any other browser. In this article I demonstrate how to copy tags from one ElasticFox installation to another.

Where Are ElasticFox Tags Stored?

My article about Tagging EC2 Instances Using Security Groups shows a brief walkthrough of using ElasticFox to tag instances. Under the hood, ElasticFox stores tags in the prefs.js file within your Firefox Profile directory. But instead of poking around in there directly, take a look at Firefox’s built-in interface to these preferences. It’s the about:config screen, and you get there by typing about:config into the address bar:

The Firefox about:config screen

Click “I’ll be careful, I promise!” Up comes the about:config screen, with all sorts of interesting-looking settings. We’re interested in the ElasticFox tags only, so type ec2ui.*tag in the Filter field, and the result looks like this:

Voilà! Those are the ElasticFox tags that I’ve set up.

Copying ElasticFox Tags to Another Browser Manually

You can copy ElasticFox tags from the about:config screen, via the clipboard, into a text file. First, open the about:config screen and use the Filter as described above to see the relevant settings. Create an empty text file, then copy each setting you want to transfer (right-click on the entry and choose “Copy”) and paste it into the text file. You can email the text file to yourself (or put it on your USB key) and then use it to change the values of these settings in the target browser’s about:config screen.

A copied setting looks like this:

ec2ui.eiptags.MyDrifts.us-east-1;({'75.101.15.8':'mydrifts.com'})

The setting’s name is ec2ui.eiptags.MyDrifts.us-east-1 and its value is ({'75.101.15.8':'mydrifts.com'}), separated from the name by a semicolon (;). For each setting you want to transfer to the target browser, select the value (after the semicolon) from the text file and copy it to the clipboard. Then, in the target browser’s about:config screen, right-click on the corresponding setting entry and choose “Modify”, and paste in the new value.

All that selecting and copying and pasting is annoying, and you might make a mistake when entering the values in the target browser. I feel your pain. There must be a better way.

Copying ElasticFox Tags to Another Browser with OPIE

OPIE is a Firefox extension that allows you to import and export preference settings. The good thing about it is that it can be used to export only some settings – and we only want to export the ElasticFox settings. Unfortunately, it does not support ElasticFox!

Please help lobby the OPIE developer to add ElasticFox support by posting in this forum thread – OPIE support would make copying ElasticFox tags (and indeed, entire ElasticFox setups) between browsers much easier.

Copying ElasticFox Tags to Another Browser with Shell Scripts

I’ve put together two shell scripts that can be used to directly export ElasticFox settings from the Firefox prefs.js file and to directly import them back in. These scripts really export and import all your ElasticFox settings, not just tags. Read on for directions on how to use these scripts, which work on Linux, Mac OS X, and Solaris, for Firefox 3.x and above.

A word of caution before you export and import via prefs.js directly: make sure Firefox is not running when you export or import. Firefox overwrites the prefs.js file upon exiting, and reads it upon starting up. In order to make sure all your ElasticFox tags (and all other settings, too) are in the export, Firefox must be closed before exporting. Likewise, in order to make sure Firefox picks up your imported settings you need to make sure it is closed before importing.

How to Export ElasticFox Settings

Here is a handy script to export ElasticFox preferences from Firefox. You can download it, or copy it from here and save it to your local machine:

#! /bin/bash

# the base name of the extension preferences
# that are to be imported
PREF_BASE=ec2ui
# ec2ui is the extension base name of ElasticFox

# backslash-escape special characters as you
# would in a regular expression
# the following command will only escape
# periods (".") but that should suffice
# for firefox extensions
ESCAPED_PREF_BASE=$(echo $PREF_BASE | sed -e "s/\./\\\\./g")

# export desired prefs
egrep "^user_pref\(\"$ESCAPED_PREF_BASE\." prefs.js

Once you save the script locally, make it executable:

chmod +x exportFirefoxPrefs.sh

The script expects to be run in the same directory as the prefs.js file, but it’s most convenient to keep this script in your home directory and create a soft-link there to the prefs.js file. On my machine, the command to make a soft-link to the prefs.js file is this:

ln -s /Users/shlomo/Library/Application\ Support/Firefox/Profiles/f0psntn6.default/prefs.js prefs.js

You’ll need to specify the location of your Firefox profile directory (instead of mine).

Remember to exit Firefox. Now you’re ready to run the export script.

./exportFirefoxPrefs.sh > savedPrefs

This will create a file called savedPrefs containing the ElasticFox settings. Move this file to the target browser’s machine, where we will import it.

How to Import ElasticFox Settings

Here is a script to import ElasticFox settings into Firefox. Download it, or copy it from here and save it to your local machine:

#! /bin/bash

WORKFILE=workfile.txt
OUTFILE=newPrefs.js
XFERFILE="$1"

# the base name of the extension preferences
# that are to be imported
PREF_BASE=ec2ui
# ec2ui is the extension base name of ElasticFox

if [ -z "$1" ]
then
  echo "Please provide the filename of the file containing the prefs to import."
else
  # backslash-escape special characters as you would
  # in a regular expression
  # this command only handles periods
  # but that should suffice for firefox extensions
  ESCAPED_PREF_BASE=$(echo $PREF_BASE | sed -e "s/\./\\\\./g")

  # back up the existing prefs
  cp prefs.js prefs.js.old

  # get the file header
  egrep -v "^user_pref\(" prefs.js > $OUTFILE

  # add all other prefs
  egrep "^user_pref\(" prefs.js | egrep -v "^user_pref\(\"$ESCAPED_PREF_BASE\." > $WORKFILE

  # add desired prefs
  cat $XFERFILE >> $WORKFILE

  # sort all prefs into output
  sort $WORKFILE >> $OUTFILE

  # clean up
  rm $WORKFILE

  # uncomment this when you trust this script
  # to replace your prefs automatically
  #cp $OUTFILE prefs.js
  #rm $OUTFILE
fi

After you save the script locally, make it executable:

chmod +x importFirefoxPrefs.sh

As with the export script above, the import script expects to be run in the same directory as the Firefox prefs.js file, so make a soft-link to the prefs.js file in the same directory as the script (see above).

Once you’ve made sure that Firefox is not running, you’re ready to run the import script:

./importFirefoxPrefs.sh savedPrefs

This will create a file called newPrefs.js in the same directory. newPrefs.js will contain the imported ElasticFox settings from the savedPrefs file, merged with the other already-existing Firefox settings from prefs.js. The script will also create a backup copy of prefs.js called prefs.js.old. All you need to do is to replace your Firefox’s prefs.js with newPrefs.js, as follows:

cp newPrefs.js prefs.js
rm newPrefs.js

Now, restart Firefox. It should contain the ElasticFox settings from the source browser. If anything doesn’t work, simply exit Firefox and restore the previous prefs.js:

cp prefs.js.old prefs.js

Once you are confident that the script works consistently, you can uncomment the last few lines of the script and let it automatically replace Firefox’s prefs.js file by itself – then you do not need to perform that step manually.

This method of copying ElasticFox settings using shell scripts is relatively easy, but not perfect. OPIE would be a friendlier way, and would support Windows as well (so please lobby the developer to support ElasticFox). If you’re looking for your ElasticFox tags to be available via the EC2 APIs, check out my article on using security groups as a way to tag instances (but unfortunately not Elastic IPs, EBS volumes and snapshots, or AMIs).