
Creating Consistent Snapshots of a Live Instance with XFS on a Boot-from-EBS AMI

Eric Hammond has taught us how to create consistent snapshots of EBS volumes. Amazon has allowed us to use EBS snapshots as AMIs, providing a persistent root filesystem. Wouldn’t it be great if you could use both of these techniques together, to take a consistent snapshot of the root filesystem without stopping the instance? Read on for my instructions on how to create an XFS-formatted boot-from-EBS AMI, allowing consistent live snapshots of the root filesystem.

The technique presented below owes its success to the Canonical Ubuntu team, who created a kernel image that already contains XFS support. That’s why these instructions use the official Canonical Ubuntu 9.10 Karmic Koala AMI – because it has XFS support built in. There may be other AKIs out there with XFS support built in – if so, the technique should work with them, too.

How to Do It

The general steps are as follows:

  1. Run an instance and set it up the way you like.
  2. Create an XFS-formatted EBS volume.
  3. Copy the contents of the instance’s root filesystem to the EBS volume.
  4. Unmount the EBS volume, snapshot it, and register it as an AMI.
  5. Launch an instance of the new AMI.

More details on each of these steps follow.

1. Run an instance and set it up the way you like.

As mentioned above, I use the official Canonical Ubuntu 9.10 Karmic Koala AMI (currently ami-1515f67c for 32-bit architecture – see the table on Alestic.com for the most current Ubuntu AMI IDs).

ami=ami-1515f67c
security_groups=default
keypair=my-keypair
instance_type=m1.small
ec2-run-instances $ami -t $instance_type -g $security_groups -k $keypair

Wait until the ec2-describe-instances command shows the instance is running and then ssh into it:

ssh -i my-keypair ubuntu@ec2-1-2-3-4.amazonaws.com
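
If you’d rather script the wait, something like the following sketch can help. The awk field assumes the public DNS name is the fourth column of the INSTANCE line; column positions vary between versions of the API tools, so adjust as needed:

instance=i-xxxxxxxx    # substitute the instance ID reported by ec2-run-instances
# poll until the instance state shows "running"
while ! ec2-describe-instances $instance | grep -q running; do sleep 10; done
host=$(ec2-describe-instances $instance | awk '/^INSTANCE/ {print $4}')    # public DNS name
ssh -i my-keypair ubuntu@$host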

Now that you’re in, set up the instance’s root filesystem the way you want. Don’t forget that you probably want to run

sudo apt-get update

to allow you to pull in the latest packages.

In our case we’ll want to install ec2-consistent-snapshot, as per Eric Hammond’s article:

codename=$(lsb_release -cs)
echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu $codename main" | sudo tee /etc/apt/sources.list.d/alestic-ppa.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot
sudo PERL_MM_USE_DEFAULT=1 cpan Net::Amazon::EC2

2. Create an XFS-formatted EBS volume.

First, install the XFS tools:

sudo apt-get install -y xfsprogs

These utilities allow you to format filesystems using XFS and to freeze and unfreeze the XFS filesystem. They are not necessary in order to read from XFS filesystems, but we want these programs installed on the AMI we create because they are used in the process of creating a consistent snapshot.
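
For context, the trick that makes consistent snapshots possible is: freeze the XFS filesystem, initiate the EBS snapshot, then unfreeze. ec2-consistent-snapshot automates exactly this (plus error handling), but done by hand for a filesystem mounted at /vol it looks roughly like this sketch:

sudo xfs_freeze -f /vol    # flush pending writes and block new ones
# ...initiate ec2-create-snapshot on the backing EBS volume from your local machine...
sudo xfs_freeze -u /vol    # thaw the filesystem as soon as the snapshot has been initiated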

Next, create an EBS volume in the availability zone your instance is running in. I use a 10GB volume, but you can use any size and grow it later using this technique. This command is run on your local machine:

ec2-create-volume --size 10 -z $zone
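
If you’re not sure which availability zone to use for $zone, one convenient way is to ask the instance itself via the EC2 metadata service (run this on the instance, then use the value on your local machine):

zone=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo $zone    # e.g. us-east-1a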

Wait until the ec2-describe-volumes command shows the volume is available and then attach it to the instance:

ec2-attach-volume $volume --instance $instance --device /dev/sdh

Back on the instance, format the volume with XFS:

sudo mkfs.xfs /dev/sdh
sudo mkdir -m 000 /vol
sudo mount -t xfs /dev/sdh /vol

Now you should have an XFS-formatted EBS volume, ready for you to copy the contents of the instance’s root filesystem.

3. Copy the contents of the instance’s root filesystem to the EBS volume.

Here’s the command to copy over the entire root filesystem, preserving soft-links, onto the mounted EBS volume – but ignoring the volume itself:

sudo rsync -avx --exclude /vol / /vol

My command reports that it copied about 444 MB to the EBS volume.
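
As a quick sanity check before unmounting, you can compare the copy against the source. A rough sketch (the numbers won’t match exactly, since /vol excludes other mounted filesystems such as /proc and /mnt):

df -h / /vol    # used space on the two filesystems should be in the same ballpark
ls /vol         # spot-check that the expected top-level directories are present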

4. Unmount the EBS volume, snapshot it, and register it as an AMI.

You’re ready to create the AMI. On the instance do this:

sudo umount /vol

Now, back on your local machine, create the snapshot:

ec2-create-snapshot $volume
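
If you want to script this step, a sketch that captures the snapshot ID and waits for completion looks like this (the output field positions may vary with your tools version):

snapshot=$(ec2-create-snapshot $volume | awk '{print $2}')    # e.g. snap-12345678
while ! ec2-describe-snapshots $snapshot | grep -q completed; do sleep 30; done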

Once ec2-describe-snapshots shows the snapshot is 100% complete, you can register it as an AMI. The AKI and ARI values used here should match the AKI and ARI that the instance is running – in this case, they are the default Canonical AKI and ARI for this AMI. Note that I give a descriptive “name” and “description” for the new AMI – this will make your life easier as the number of AMIs you create grows. Another note: some AMIs (such as the Ubuntu 10.04 Lucid AMIs) do not have a ramdisk, so skip the --ramdisk $ramdisk arguments if you’ve used such an AMI.

kernel=aki-5f15f636
ramdisk=ari-0915f660
description="Ubuntu 9.10 Karmic formatted with XFS"
ami_name=ubuntu-9.10-32-bit-ami-1515f67c-xfs
ec2-register --snapshot $snapshot --kernel $kernel --ramdisk $ramdisk --description "$description" --name=$ami_name --architecture i386 --root-device-name /dev/sda1 --block-device-mapping /dev/sda2=ephemeral0

This displays the newly registered AMI ID – let’s say it’s ami-00000000.

5. Launch an instance of the new AMI.

Here comes the moment of truth. Launch an instance of the newly registered AMI:

ami=ami-00000000
security_groups=default
keypair=my-keypair
instance_type=m1.small
ec2-run-instances $ami -t $instance_type -g $security_groups -k $keypair

Again, wait until ec2-describe-instances shows it is running and ssh into it:

ssh -i my-keypair ubuntu@ec2-5-6-7-8.amazonaws.com

Now, on the instance, you should be able to see that the root filesystem is XFS with the mount command. The output should contain:

/dev/sda1 on / type xfs (rw)
...

We did it! Let’s create a consistent snapshot of the root filesystem. Look back at the output of ec2-describe-instances to determine the volume ID of the instance’s root volume.
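
If you prefer to script that lookup, here is a sketch using the BLOCKDEVICE lines of ec2-describe-instances (run on your local machine; the column layout may differ between tools versions):

volumeID=$(ec2-describe-instances $instance | awk '$1 == "BLOCKDEVICE" && $2 == "/dev/sda1" {print $3}')
echo $volumeID    # e.g. vol-12345678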

sudo ec2-consistent-snapshot --aws-access-key-id $aws_access_key_id --aws-secret-access-key $aws_secret_access_key --xfs-filesystem / $volumeID

The command should display the snapshot ID of the snapshot that was created.

Using ec2-consistent-snapshot and an XFS-formatted EBS AMI, you can create snapshots of the running instance without stopping it. Please comment below if you find this helpful, or with any other feedback.

59 replies on “Creating Consistent Snapshots of a Live Instance with XFS on a Boot-from-EBS AMI”

@Anders Vinther,

Thanks for pointing that out. I’ve fixed it.

[An aside: When I moved from Blogger to self-hosted WordPress, all the code formatting got messed up. I fixed it up manually, but I missed some spots. It was completely worth it, though, because the WordPress authoring tools are much more flexible and capable.]

Hmmm…. not quite sure what’s going on… but it seems as if an instance I am running with a scheduled create-consistent-snapshot died just after the cron job ran…

Now I tried kicking off a manual snapshot and that seemed to work fine before I set up the cron job… However now I can’t seem to get on to the dead instance at all… It is quite possible this is not related to create-consistent-snapshot at all, and I don’t have a lot of info at this stage. I will look into it when I get some more time…

Just wanted to know:
– are you running this successfully?
– am I correct in assuming that an instance can take a snapshot of itself?

Sorry for not being able to provide more info… am a bit strapped for time as we have a web site just gone live…

Once again thanks a bunch for your very informative site!!

Cheers,

Anders

@Anders Vinther,

It’s possible the instance’s sudden shutdown was unrelated to the snapshot. For the manual snapshot you tried, did you run ec2-consistent-snapshot on the instance itself, or did you use another method to initiate the snapshot?

It’s also possible that your instance is still running – I assume you know to look in the AWS Management Console and, if it’s there, try to reboot it.

There’s no issue in general with initiating a snapshot from within an instance. Many, including myself, have been using this technique, made easier ever since Eric Hammond distributed the ec2-consistent-snapshot utility.

Hope this helps.

Hi Shlomo,

Sorry to drop the ball on this one completely… I have simply been too busy with the site going live, so haven’t actually done anything further with this…

The instance died completely. Tried rebooting and I still couldn’t get to it. Not sure what went on. In the end I reverted to the instance I created the XFS from, and am still running with it now…

When I get some more time I’ll give it another shot…

Thanks for letting me know that it should work…

And, yes, you are right… the manual snapshot I did do from within the instance itself…

Once again thanks for a great resource!

Cheers,

Anders

What would the restore procedure look like? Let’s say I want to recover a point-in-time consistent snapshot and create an instance from it. Make a volume from the snapshot, then register the EBS volume as a new AMI, then launch an instance of it?

@Francois,

You can simply register the snapshot as an AMI and launch it. For example:

ec2-register --snapshot snap-12345678 --kernel aki-kkkkkkkk --ramdisk ari-rrrrrrrr --root-device-name /dev/sda1 --architecture x86_64 --name name_of_ami --description 'Long description of this AMI'

substitute the values as follows:

snap-12345678 : your snapshot ID
aki-kkkkkkkk : the kernel ID from the original instance as it was launched. You can check this via ec2-describe-instances.
ari-rrrrrrrr : the ramdisk ID from the original instance as it was launched. You can check this via ec2-describe-instances.
/dev/sda1 : the root device name of the instance. You can check this via the output of ‘mount’ on the instance. Usually it will be /dev/sda1.
x86_64 : ‘i386’ for 32-bit instance types (m1.small, c1.medium), ‘x86_64’ for 64-bit instances (all other instance types)

The output of the command will display the new AMI ID. This can be launched directly, using the AWS Management Console or the ec2-run-instances command.

Shlomo, I see the article and the instructions apply to instances running some version of Linux/Unix.
Do you know if there is something similar for Windows instances?
I mean, right now I shut down the instance, create a new AMI from it, and the snapshots have the root + the EBS attached to it.
My problem however is that I have a SPOT instance which, as you know, cannot be put in a stopped state. So I did all this while the instance was running, but I am not sure if the new AMI will be OK or not… and I kind of need to be sure of this if it holds important data.
So are you aware of anything like this but for the Windows environment, running say Server 2008 and MS SQL Server (2005 or 2008)?

thanks,
Remus

@Remus,

I don’t know of a way to create a consistent snapshot of an EBS volume in Windows. Any readers know how?

It looks like you’re trying to create an AMI from the snapshot you’ve taken. This can’t be done for Windows AMIs; you can only create them via the API.

But you might not need an AMI if you have the volume you want to use as the root. You can use this technique described by Eric Hammond to swap out the root partition of another Windows instance and replace it with the desired Windows EBS root volume. I’m curious to know if this works when the desired swapped-in root volume was created from a snapshot – if you try it, please let me know.

For spot EBS-backed instances you can create the spot request with the flag --block-device-mapping /dev/sda1=:false to prevent the volume from being deleted when the instance terminates. Then, after the spot instance terminates, you can use Eric Hammond’s technique to use that volume as the root volume for another instance.
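
For example, a spot request along these lines should work (a sketch only – the price, AMI, and key are placeholders, and you should check ec2-request-spot-instances --help for the exact option names in your version of the API tools):

ec2-request-spot-instances $ami --price 0.05 --instance-type m1.small --group default --key my-keypair --block-device-mapping /dev/sda1=:false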

Well, for the record, I am still a beginner with Amazon 🙂 and I tend to use the Console as much as I can.
I do not try to create an AMI from a snapshot. What I do is go into the Management Console and, under Instances, if you have a Windows one using EBS as root and you right-click on it, it has this option “Create Image (EBS AMI)”.
So the way I see it is: I have the initial instance I want to back up. I select it and run that. What it does is create a new AMI for me – I see it listed after that. And under Snapshots it creates the snapshots of the EBS used by that image.
And I think when you want to launch that AMI it will create its own EBS from those snapshots.
Now the big question is: when you do the Create Image in Windows, will it create a workable AMI? Meaning, will the SQL and Windows on this one be OK?… I would hope so… otherwise it really sucks.

Remus

I followed all the steps and created the AMI from the snapshot. When I launch the new instance and try to connect via SSH, it returns “Connection refused”.

I checked the firewall and it is correct.

The instance log shows:
[1.237071] sda1 XFS filesystem MOUNTING
[2.640028] VFS: Mounted root (xfs filesystem) readonly device on 8:1.

Is it normal to mount the filesystem in readonly mode?

Any suggestions.

@jm.

In the course of booting the instance the root filesystem is first mounted read-only and then, later on, remounted read-write. There should be a console output line later on saying something like this:

Remounting root filesystem in read-write mode: [ OK ]

Does your console output have that line later? Does the console output show your openssh server (sshd) launching, and show the host key’s fingerprint?

This is the next console output

[ 1.535295] Filesystem “sda1”: Disabling barriers, trial barrier write failed
[ 1.535640] XFS mounting filesystem sda1
[ 1.612446] VFS: Mounted root (xfs filesystem) readonly on device 8:1.
[ 1.613459] devtmpfs: mounted
[ 1.613598] Freeing unused kernel memory: 228k freed
[ 1.613735] Write protecting the kernel read-only data: 6416k
init: console-setup main process (81) terminated with status 1
%Ginit: plymouth main process (63) killed by SEGV signal
init: plymouth-splash main process (228) terminated with status 2
mountall: Disconnected from Plymouth

The console output does not show sshd starting or the host key’s fingerprint.

Thanks for your help

@jm,

I’m trying to reproduce the issue you’re seeing.

Using the Ubuntu 9.10 AMI, AKI, and ARI originally specified in this article (ami-1515f67c, aki-5f15f636, ari-0915f660) works without a problem.

Using that same AMI/AKI/ARI and performing an apt-get upgrade before copying into the EBS volume also produces a bootable AMI.

Using the latest 9.10 AMI/AKI/ARI (ami-bb709dd2, aki-5f15f636, ari-d5709dbc) also works without a problem, both when skipping the apt-get upgrade and when performing it.

What Ubuntu AMI, AKI, ARI are you using?

@jm,

Now I understand – you’re using the Ubuntu 10.04 Lucid AMI as a starting point. See my response to @Michael Glass below – you need to register the AMI with an ephemeral drive device mapping. I’ve updated the article to include this in the ec2-register command.

Hey. I believe that I have the same problem (can’t ssh, but pinging the public DNS is OK). I did everything as stated above (I only skipped the ARI ID at ec2-register, because I don’t know it – it is blank in the AWS Management Console).
http://pastebin.com/Nn8QwxCF – these are the last lines in console.

Any idea? Anyway, thanks for tutorial.

… sorry, I didn’t read the comment history. I had similar errors before I specified /sda2 and /sda3 as shown below. Try it out when launching your AMI or when registering your snapshot as an AMI. (Also, before you randomly assign /sda2 and /sda3, check your /etc/fstab to know what’s going where.)

good luck!

I didn’t realize I had to also set up ephemeral storage for swap. Others running into this should use a command similar to the following:

ec2-register -n ami-name-here -d "description here" --block-device-mapping /dev/sda2=ephemeral0 --block-device-mapping /dev/sda3=ephemeral1 --snapshot snap-d03587b8 --architecture i386 --kernel aki-754aa41c

for more info, go here:

http://www.webadminblog.com/index.php/2010/03/23/amazon-ec2-ebs-instances-and-ephemeral-storage/

(enjoy)

@Michael Glass,

Thanks for discovering the need for the 10.04 Lucid AMIs to have an ephemeral drive mount specified in the AMI registration. When you skip this step you get the strange behavior that @jm, @Mike, and you reported, where SSH doesn’t seem to start.

I’ve edited the article to include the block-device-mapping specification for the ephemeral drives when registering the AMI.

Hi Shlomo and Michael,

Using the Lucid EBS boot ami-ab4d67df (the latest 64-bit from Canonical) and copying Shlomo’s directions exactly, I still get the issue where the ssh connection is refused. As a second step, I added the extra block device sda3 as shown in Michael’s hint.

This was my code to register the AMI


ec2-register --snapshot snap-90b07bf9 --kernel aki-cb4d67bf --region eu-west-1 '--description=Ubuntu 10 Lucid formatted with XFS' --name=ubuntu-10.4-64-bit-ami-ab4d67df-xfs --architecture x86_64 --root-device-name /dev/sda1 --block-device-mapping /dev/sda2=ephemeral0 --block-device-mapping /dev/sda3=ephemeral1

Note that I had to add the region as well, since my instances are in eu-west. There was no issue with the AMI (ami-43103a37 – it’s private, not sure how to make it public!).

I attached an Elastic IP to the running instance and attempted to connect.

Here is the output from my terminal:


ssh -i /Users/aruna/mykeys/mysecurity ubuntu@79.xxx.xxx.xxx
ssh: connect to host 79.xxx.xxx.xxx port 22: Connection refused

Here are the final lines from the console:


[ 1.523075] Filesystem "sda1": Disabling barriers, trial barrier write failed
[ 1.569026] XFS mounting filesystem sda1
[ 1.845787] VFS: Mounted root (xfs filesystem) readonly on device 8:1.
[ 1.846784] devtmpfs: mounted
[ 1.846899] Freeing unused kernel memory: 228k freed
[ 1.847008] Write protecting the kernel read-only data: 6416k
init: console-setup main process (81) terminated with status 1
%Ginit: plymouth main process (63) killed by SEGV signal
init: plymouth-splash main process (233) terminated with status 2
Generating locales...
en_GB.UTF-8... up-to-date
Generation complete.
mountall: Disconnected from Plymouth

These two lines indicate that it’s mounted on an XFS volume (Hurrah!) but is still read-only:

[ 1.569026] XFS mounting filesystem sda1
[ 1.845787] VFS: Mounted root (xfs filesystem) readonly on device 8:1.

I have refreshed the console a few times to see whether it would change status – no luck. Also rebooted the instance – same status.
What am I doing wrong here…
Grateful for any leads!

Please disregard the above. The issue was that I did not check /etc/fstab to determine where the ephemeral storage should be mounted. It appears that on 64-bit AMIs it’s /dev/sdb mounted on /mnt, whereas I had specified /dev/sda2 and /dev/sda3. May I suggest that you also add a note to edit /etc/fstab and include the following two lines after editing out existing references to ephemeral storage.

/dev/sda2 /mnt auto defaults,comment=cloudconfig 0 0
/dev/sda3 /mnt auto defaults,comment=cloudconfig 0 0

With this, the following works very well for the EBS-booted 64-bit Lucid image ami-ab4d67df.
Please substitute where I have XXXXX.


ec2-register --snapshot snap-XXXXXXXX --kernel aki-cb4d67bf --region XXXXXX '--description=Ubuntu 10 Lucid formatted with XFS' --name=ubuntu-10.4-64-bit-ami-ab4d67df-xfs --architecture x86_64 --root-device-name /dev/sda1 --block-device-mapping /dev/sda2=ephemeral0 --block-device-mapping /dev/sda3=ephemeral1

Many thanks and much appreciated.

A restart of the instance (not terminate – but stop and start) breaks the above. It appears that /etc/fstab is reset to its original values, so my changes to /dev/sda2 and /dev/sda3 do not persist in /etc/fstab. I am investigating more. I think I might have to set the DeleteOnTermination flag to false for both the ephemeral storage devices. Will update once this is working.

Hi,

I followed the article but stumbled upon a problem on the boot of the newly created ami:
Kernel panic – not syncing: VFS: Unable to mount root fs on unknown-block(8,1)
I tried to use this on 10.04 Lucid (ami-a94d67dd from Alestic).
I think maybe apt-get upgrade is breaking something but I can’t figure out what.
I will try without doing the upgrade and post the results.

Thanks

Ivan

Ok got it working with 10.04 when not doing apt-get upgrade.

Will try again because I see no reason why it should not work.

Thanks.

I followed your article, it’s great!
I built a new AMI to template my database servers (Postgres 8.4.4 + all GIS features) and this worked perfectly. After I restarted the AMI the server offered me some updates, so I ran the apt-get upgrade command to check, and during the upgrade it told me it wanted to write a new GRUB … I figured what the heck, let’s test; it re-wrote the boot sector after the upgrade and … it actually works fine 🙂 no problems on my end. It would be nice to know how I can spawn new servers while resizing the volume… I know I can spawn a new volume from the AMI and give it a new size, but once the server comes up, what’s the right process to allocate the space and not screw up my root filesystem?

Thanks 1000000 again, and keep up the great blog 🙂 super useful.

Thanks for pointing me to those articles, Shlomo – that resolves the question I had 🙂
I have been reading your other posts. I was very interested in your post How to Keep Your AWS Credentials on an EC2 Instance Securely back in August 2009, which was very comprehensive and educational for me. Did you ever get to write a follow-up article demonstrating the scripts to automate the procedures? I couldn’t find it in your previous posts, so I was curious. Thanks for all the great info!

cheers,

-H

@Hellmut,

Unfortunately I haven’t put together all the scripts to make those techniques easy. I will, but I can’t be sure when – unless I find a sponsor for the work.

Hi All,

Great tutorial. I used the pesky Ubuntu Lucid EBS ami-ab4d67df as a base in the eu-west-1 region and had a few issues. The first was the dreaded ssh connection refused when the time came to launch and log into the new instance, even after explicitly adding the ephemeral storage as per comment No. 25. The second – after figuring out that I stumbled by just copying the directions instead of checking where the ephemeral storage goes on an m1.large instance (it goes to /dev/sdb and /dev/sdc, instead of /dev/sda2 and /dev/sda3) – was when I stopped the new instance and restarted it again, only to be refused entry, again!

What finally worked for me is detailed below. As always, YMMV.

Before rsyncing to /vol (or even after – but then you need to rsync again), edit /etc/fstab so that the ephemeral drives are correctly mounted and root is specified as xfs.

Example:

# /etc/fstab: static file system information.
#
proc /proc proc nodev,noexec,nosuid 0 0
/dev/sda1 / xfs defaults 0 0
/dev/sdb /mnt auto defaults,comment=cloudconfig 0 0
/dev/sdc /log auto defaults,comment=cloudconfig 0 0

Then, change the ec2-register instructions so that the root device, /dev/sda1, has its DeleteOnTermination flag set to false. This should be automatic given that it’s an EBS volume and we are not terminating, but simply stopping and restarting; yet for some reason it appears that without this flag explicitly set, weird things happen to /etc/fstab and the ephemeral storage devices, which in turn affect services such as ssh.

This is the command I used for the Lucid ami-ab4d67df. Replace the XXXXX with your own information and change the size (30) immediately after snap-xxxxxxx to reflect the size of your own snapshot, or a larger number. If you specify a number smaller than your snapshot, registering the AMI will fail.


ec2-register --kernel aki-cb4d67bf --region xx-xxxx-x '--description=Ubuntu 10 Lucid formatted with XFS' --name=ubuntu-10.4-64-bit-ami-ab4d67df-xfs --architecture x86_64 --root-device-name /dev/sda1 -b /dev/sda1=snap-xxxxxxx:30:false --block-device-mapping /dev/sdb=ephemeral0:false --block-device-mapping /dev/sdc=ephemeral1:false

Hi,

This was really useful.

I am trying to use a bootable Windows EBS volume with Eucalyptus but I am not able to figure out how to make the boot AMI point to the Windows EBS volume.

Any help is appreciated.

Thanks

@FreeMind,

In EC2 the only way to create a bootable EBS volume with Windows on it is by starting with an EBS-based instance and using the API tools to create a new image from that instance – there’s no way to create a snapshot and then register it as a Windows AMI. I’m not familiar with how Eucalyptus does Windows, but I imagine it might have the same limitation.

Has anyone had any luck doing this with Lucid on a Micro instance? My new XFS AMI is always unbootable, i.e. it has an empty System Log and is totally unreachable via ssh even though the instance shows as running in the AWS Console. My guess is that this is due to the lack of ephemeral storage on micro instances, but without the System Log info I really don’t know where to start …

I’ve updated the fstab to specify xfs as the filesystem on the root device but that didn’t seem to help at all.

Any thoughts would be appreciated!

Unfortunately no. I was using the latest release last night for testing hoping that it would fix the issue but it didn’t. I’ll try again later this evening and document my steps and post here again so you can see what I did … and hopefully point out my mistake! 🙂

After carefully walking through the steps in the posting and using the most recent Lucid AMI, I’ve had success! In my previous attempts, I was also installing some default packages before doing the rsync, so maybe something was causing issues there. Now that I have my basic XFS AMI, I’m going to try installing the packages I need and creating a new AMI from that. If I run into any issues, I’ll report back. Otherwise, thanks for the great walkthrough and follow-up help!

After failing for quite some time to get this working on a Micro instance using alestic.com’s Ubuntu 10.10 Maverick EBS boot (ami-548c783d) 64-bit, I finally discovered what the problem was.

The above call to format the volume with XFS needs to use the -L flag. I discovered this by opening /etc/fstab and seeing “LABEL=uec-rootfs / ” instead of the expected “/dev/sda1 /”

Therefore, use:

sudo mkfs.xfs -L uec-rootfs /dev/sdh
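
(An aside, and an assumption on my part rather than something from the original comment: if the volume has already been formatted, it should be possible to add the label afterwards with xfs_admin instead of reformatting – the filesystem must be unmounted first:)

sudo umount /vol                         # the filesystem must not be mounted while relabeling
sudo xfs_admin -L uec-rootfs /dev/sdh    # set the label that /etc/fstab expects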

I have a similar issue where my instance is inaccessible via ssh following a reboot. I’m using an ami-8c0c5cc9 (Lucid 10.4.1 32-bit) micro instance, with kernel aki-e40c5ca1. The root partition is on an EBS volume. The only thing I updated yesterday was openssl, then the instance was rebooted. It shows as running but I cannot ssh to it. Is there anything I can do to it now to resurrect it? I am sure it is the same fstab mapping issue as described above.

Here are the last few lines of my System Log:
[ 0.566980] VFS: Mounted root (ext3 filesystem) readonly on device 8:1.
[ 0.592904] devtmpfs: mounted
[ 0.592957] Freeing unused kernel memory: 216k freed
[ 0.594161] Write protecting the kernel text: 4332k
[ 0.594500] Write protecting the kernel read-only data: 1336k
init: console-setup main process (63) terminated with status 1
%Ginit: plymouth main process (45) killed by SEGV signal
init: plymouth-splash main process (244) terminated with status 2
cloud-init running: Wed, 08 Dec 2010 12:30:33 -0800. up 3.87 seconds
mountall: Disconnected from Plymouth

Got this thing working on Ubuntu 10.10 64-bit. Then tried running the ec2-consistent-snapshot script from cron, and the instance got totally stuck. Shortly after, all IO got stuck. Incoming ssh connections also kept failing. Had to stop/start the instance from another instance using the ec2 tools. And it’s consistently reproducible. Could it be that something fails in ec2-consistent-snapshot, and the script fails to release the xfs lock due to sloppy error handling? (I’m not using db locking.)

@ec2-do,

Sorry to hear that utility is giving you trouble. If it works from the command-line but not from cron then it’s very likely a PATH issue.
Eric Hammond, the author of ec2-consistent-snapshot, has put great error-handling in that script. Try looking at (or adding to) the comments on Eric Hammond’s page for ec2-consistent-snapshot.
BTW, it’s not necessary to fire up another EC2 instance just to use the command-line tools to start/stop instances. You can install those tools locally. Or you can use the AWS Management Console.
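
If the PATH is indeed the culprit, a sketch of a crontab entry that sets it explicitly could look like this (the paths and the daily schedule are assumptions – adjust them to where the tools are installed on your instance, and substitute your own credentials and volume ID):

PATH=/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 3 * * * ec2-consistent-snapshot --aws-access-key-id XXXX --aws-secret-access-key YYYY --xfs-filesystem / vol-12345678 >> /var/log/ec2-consistent-snapshot.log 2>&1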

Hi, I get a strange error when I try to create a volume…
Any suggestions?
Thanks,
Anand

ubuntu@ubuntuamibuild:~$ aws ec2 create-volume --size 100 --availability-zone us-east-1a --dry-run
Service ec2 not available in region "us-east-1"
ubuntu@ubuntuamibuild:~$

@Anand Rao,

Strange indeed.

Have you tried explicitly setting the region on the command line, as follows:
aws --region us-east-1 ec2 create-volume --size 100 --availability-zone us-east-1a --dry-run
If this works, then check your ~/.aws/config file to ensure the default region is set properly.
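
For reference, ~/.aws/config with a default region set should contain something like this:

[default]
region = us-east-1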

One other question:
In Step 4 we are registering the snapshot – how are we telling the system to use the newly created volume as the root device?

All steps worked for me, but when I boot a new instance from the AMI I created, I see 8 GB in my root, when I was expecting 100.

Also, I had to make a small change from sdh to xvdc based on the link below.

Not sure what I need to change in step 4 to switch to xvdc instead of sdh.

Any help is appreciated.
thanks
Anand

http://stackoverflow.com/questions/6986737/aws-ebs-attached-but-cant-find-on-instance

ubuntu@ubuntuamibuild:~$ sudo mkfs.xfs /dev/xvdh
meta-data=/dev/xvdh              isize=256    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
ubuntu@ubuntuamibuild:~$

@Anand Rao,

I’m not sure what you are describing. The steps above show the procedure to create a snapshot of the newly created volume and then to register that snapshot as a new AMI. The ec2-register command specifies --root-device-name /dev/sda1 to set the device where the new instance’s root volume will be mounted.
