Developing Means Delivering

An informal survey of my application developer friends revealed four main types of motivation driving developers, represented in this diagram:

Individual developers can be motivated more or less altruistically. Individual developers can also be focused on the external manifestations of success – such as appreciative customers – or on the internal manifestations – such as more power or money. The diagram above is not a judgement of “better” or “worse” motivations – it is simply meant to capture two personality factors and their expression among individual developers. Also note that developers may have different motivations at different times, or even simultaneously in combination.

Just as individual developers can have varying motivations, organizations can also be more or less altruistic, and focused more internally or externally. Internally focused organizations spend an undue proportion of their energy and resources doing work (or inventing work to do) that will not be visible to the outside world, while externally focused organizations measure their results based on their effect on the outside world – market share, profit, and customer satisfaction.

Where do you rate your organization on the above diagram? Most business leaders want organizations firmly motivated by providing value to the customer, and software businesses are no different. Developing software is all about delivering value. Your software development efforts can only provide value if they are successfully delivered to the customer – and in today’s “as-a-service” world, that value is constantly provided via the internet. This means that you should spend significant effort ensuring your customer can actually reach your service to receive the value. It means automating your deliveries. It means measuring and improving your delivery speed and success rates. It means involving your customer early on so that you know the value they seek and so you can provide it. It means making sure all your teams work together to get this to happen.

Because ultimately, it means a developer’s job is not finished until the customer has derived value.

Software Delivery Operations

Like companies producing any kind of offering, software companies require three elements in order to successfully deliver: knowing what to build, knowing how to build it, and knowing how to deliver it to the customer. When each of these three functions – research, engineering, and delivery – does its job independently, you get slow progress and uninspired, sub-par products. But when all three functions work together in a customer-centric fashion, you get excellence in implementation, innovation in concept, and world-class results by delivering both to the customer.

With the explosion of internet-based services in the past decade, delivery has by necessity become an ongoing activity. And so in recent years much attention has been devoted to integrating development with technical operations – DevOps. DevOps integrates into a single team the skills of how to build the service and the skills of how to deliver it. But, as pointed out earlier, DevOps teams often lack the most critical element: the customer’s involvement. DevOps teams can build and deliver really fast, but they don’t know what to build.

Integrating development with the customer-facing functions (marketing, sales, etc.) without technical operations results in an organization that knows what to build and how to build it, but doesn’t know how to deliver it to the customer. This organization is full of ideas – even innovative ideas – and implements them, but they don’t see the light of day. I call this a “feasibility practice” – it can show you what is feasible but it can’t deliver it to customers.

Teams that know what to build and how to deliver it to the customer, but not how to build it, also do not provide value to the customer. Such teams are close to the customer both on the inbound (understanding the customer’s needs) and the outbound (explaining the company’s offering) directions. Marketing sits squarely in this space: it knows what to build and how to deliver but can’t transform that knowledge into value for the customer.

As an executive leading a software delivery effort, make sure your organization occupies the very center so it can understand, build, and deliver the right value to the customer. The customer is here in the center as well: he participates in validating the ideas, critiquing the implementation, and accepting the delivery. It is only here that your organization can achieve excellence, innovation, and world-class results.

The missing layer

Traditional descriptions of cloud computing and the various cloud operating models – IaaS, PaaS, SaaS – focus on the locus of responsibility for various layers of the system: facilities, network, storage, servers, etc. But these descriptions typically omit a critical element. Can you spot it in the diagram below?

The missing layer is the only layer that really matters – the customer. Ensuring that your application can actually be consumed – that delivery can be successful – is a critical part of providing your value. Your application’s facilities, network, and storage may be running properly, but will your customers actually benefit? Without a properly staffed, trained, equipped, and managed operations team, your service won’t last long enough for customers to care. If your customer is important, then so is your operations team.

Being Good and Being Known for it

The owner of a home-based baking business once described his marketing strategy to me as “my cookies speak for themselves.” True, they were delicious cookies. But he completely neglected marketing so his customer base never grew beyond friends and family, and his business failed as a result.

At a recent client meeting I was reminded of that baker. The CEO of a fledgling tech business said to me, “our differentiating factor will be our superior service quality.”

“Wonderful!” I said. “So I’d expect to see significant investment in engineering and quality control. Is that the case?”

“Yes! Here is our R&D and quality program…,” and he showed me.

“That looks reasonable,” I said, after glancing at it briefly. “Tell me about your marketing.”

“Oh, we hardly need any!” the CEO replied. “Our customers will love our service quality so we don’t need significant marketing.”

I was immediately concerned. “Let me show you something,” I said to him. And I drew the following:

Being good and being known for it are both essential. New businesses begin in the lower left quadrant. At first there is no product so it’s not good, and nobody knows about the company. Here customers are neutral about your offering: if they knew more they’d know it wasn’t good – and if it was good they still wouldn’t know it. In the upper left quadrant you’re no good and customers know it, so they leave. In the lower right you’re good but nobody knows it, so you lose opportunities to do business. The goal is to be in the upper right quadrant, where you’re good and customers know it, so you grow.

A business will traverse the playing field as its quality and reputation change over time. Sometimes quality will suffer or improve, sometimes reputation will have ups or downs. The key factor in achieving success is the ability to nimbly correct course, heading always toward being good and people knowing it. Identify improvements and deliver them to your customers quickly – and make them aware of the improvements. You’ll need agile product development, minimizing the time between concept and delivery. You’ll need to measure your customers’ perception of your quality. You’ll need to empower every employee to satisfy the customer as quickly and completely as possible at every level. The net result of enabling quick course correction will be that you get good and you get known for it – and you grow.

Thankfully, this CEO was convinced by my explanations. He agreed that “being known for being good” is as important as “being good” and I helped him craft his marketing, development, and customer support efforts accordingly. The cookies will speak, and he will make sure people hear them.

How I helped by refusing work

A very good repeat client introduced me to his own customer, CEO of a fast-growing company, with the words “you should talk to Shlomo – he does wonders.” When I get an enthusiastic referral like that – which happens often, as most of my business is via referrals – I move fast to see how I can help.

The CEO and I met and began to explore important areas of his business that I typically help with: usage of cloud resources, leveraging the data collected, streamlining processes and team dynamics. The discussion was progressing quickly – in fact, too easily, too rapidly – toward defining a potential project. I sensed something was wrong. He couldn’t describe why he wanted these things to get done; all he could say was, “this is the next issue we need to tackle.”

“Tell me what your rate is for doing these things and we’ll get moving immediately,” he finally said. In the past I would have jumped at the offer – what could be easier than business that practically lands itself in my lap after only thirty minutes of discussion? But, thankfully, I’m constantly learning from the past. My motto is “make new mistakes,” and I had learned the perils of undertaking work without comprehending the motivation behind it – without understanding the “why.”

“Joe,” I said, “I’m not going to work with you on any of that.” He stared at me in wide-eyed shock for a moment. Had I not known he was an elite long-distance runner, in top cardiovascular shape, I would have feared the worst. I continued: “You’re too close to the business to see what’s really going on and what areas need the most urgent attention. That’s why it’s hard for you to explain why these items are important. With your permission, I’ll lead you through a brief series of questions to help us both ascertain the areas of true priority for your organization. Then we’ll be able to jointly determine the best way to work together to achieve those priorities.”

And so we did. We began with these three questions:

  • How much of your own time is spent resolving day-to-day issues versus establishing and communicating direction and priorities?
  • If your business volume were to triple tomorrow, what about your organization would break?
  • If you were to ask your engineers, your support staff, your accounting staff, etc., what their biggest headache was, what would they answer, and would you be surprised by their answers – and why?

These questions helped Joe step out of his day-to-day role and think about the business from the outside, as a doctor would observe a patient. From this vantage point we observed the symptoms that the patient exhibited – the areas in which the organization was not healthy. There were many. We both agreed on the diagnosis: the underlying problem was the organization’s inability to take on new customers because its accounting functions could not handle the additional work. Then we discussed the course of treatment. Joe suggested introducing an automated billing system.

An automated billing system would certainly have alleviated the problems, but I sensed that there was still something more fundamental to explore. I asked Joe, “how can your billing system be directly beneficial to your customers? Why should they care?” Joe thought for a moment, and then he said excitedly, “you’re so right! We need to integrate billing into the customer interface!” Joe’s product is all about simplifying the use of multiple aggregated services, and therefore simplified billing is part of his product’s key value proposition. He realized that he was missing a core element of his product, one that would also enable the company to grow profitably.

We had come a long way from “help me implement these next few things we need done,” and were now discussing how to help his company achieve its strategic objective: to grow profitably. The tasks we had been discussing originally would not have contributed to this goal significantly, and Joe acknowledged that those immediate issues of reducing cloud usage costs and leveraging collected data were illusorily urgent. And he thanked me for pushing back on his request to tackle those issues.

Joe and I are currently discussing various options for helping him build his critical billing capability. He would not have realized the real priority, the true “why” he needs to address, if I had said “yes” to his initial proposals. And my involvement in helping him achieve strategic objectives is much more valuable, and much easier to demonstrate, than extensive work on non-strategic tasks. I prefer the former, and so does Joe, and all my other clients as well.

Poking Holes in CloudFront-Based Sites for Dynamic Content

As of February 2011, AWS S3 has been able to serve static websites, giving you superior availability for unchanging (or seldom-changing) content. But most websites today are not static; dynamic elements drive essential features such as personalized pages, targeted advertisements, and shopping carts. Today’s release from AWS – CloudFront: Support for Dynamic Content – alleviates some of the challenge of running dynamic websites. You can now configure a custom set of URL patterns to always be passed through to the origin server. This allows you to “poke holes” in the CDN cache for providing dynamic content.

Some web sites, such as this one, appear to be static but are driven by dynamic code. WordPress renders each page on every request. Though excellent tools exist to provide caching for WordPress, these tools still require your web server to process WordPress’s PHP scripts. Heavy traffic or poor hosting choices can still overwhelm your web server.

Poking Holes

It’s relatively easy to configure your entire domain to be served from CloudFront. What do you need to think about when you poke holes in a CloudFront distribution? Here are two important items: admin pages and form actions.

Admin pages

The last thing you want is for your site’s control panel to be statically served. You need an accurate picture of the current situation in order to manage your site. In WordPress, this includes everything in the /wp-admin/* path as well as the /wp-login.php page.

Form actions

Your site most likely does something with the information people submit in forms – search with it, store it, or otherwise process it. If not, why collect it? In order to process the submitted information you need to handle it dynamically in your web application, and that means the submit action can’t lead to a static page. Make sure your form submission actions – such as search and feedback links – pass through to the webserver directly.

A great technique for feedback forms is to use WuFoo, where you can visually construct forms and integrate them into your website with a simple snippet of JavaScript. This means that your page can remain static – the JavaScript code dynamically inserts the form, and WuFoo handles the processing, stops the spam, and sends you the results via email.

When Content Isn’t So Dynamic

Sometimes content changes infrequently – for example, your favicon probably changes rarely. Blog posts, once written, seldom change. Serving these items from a CDN is still an effective way to reduce load on your webserver and reduce latency for your users. But when things do change – such as updated images, additional comments, or new posts – how can you use CloudFront to serve the new content? How can you make sure CloudFront works well with your updated content?

Object versioning

A common technique used to enable updating static objects is called object versioning. This means adding a version number to the file name, and updating the link to the file when a new version is available. This technique also allows an entire set of resources to be versioned at once, when you create a versioned directory name to hold the resources.

Object versioning works well with CloudFront. In fact, it is the recommended way to update static resources that change infrequently. The alternative method, invalidating objects, is more expensive and difficult to control.
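
As a concrete illustration, here is a minimal sketch of object versioning on the origin server (the file names, paths, and template location are hypothetical – adapt them to your own layout):

#! /bin/bash
# Publish the changed asset under a new versioned name instead of overwriting
# the old one, then update the page that references it. CloudFront sees a
# brand-new URL, so the fresh version is served without any invalidation.
oldAsset=header-v1.png
newAsset=header-v2.png
cp ~/artwork/header.png /var/www/static/$newAsset
# point the page template at the new version
sed -i -e "s#/static/$oldAsset#/static/$newAsset#g" /var/www/templates/page.html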

Combining the Above Techniques

You can use a combination of the above techniques to create a low-latency service that caches sometimes-dynamic content. For example, a WordPress blog could be optimized by integrating these techniques into the WordPress engine, perhaps via a plugin. Here’s what you’d do:

  • Create a CloudFront distribution for the site, setting its custom origin to point to the webserver.
  • Poke holes in the distribution necessary for the admin, login, and forms pages.
  • Create new versions of pages, images, etc. when they change, and new versions of the pages that refer to them.

Even though WordPress generates each page via PHP, this collection of techniques allows the pages to be served via CloudFront and also be updated when changes occur. I don’t know of a plugin that combines all these techniques, but I suspect the good folks at W3-EDGE, producers of the W3 Total Cache performance optimization framework I mentioned above, are already working on it.

Scalability and HA Limitations of AWS Marketplace AMIs

Reading AWS’s recent announcement of the AWS Marketplace you would think that it provides a catalog of click-to-deploy, highly-available, scalable applications running on EC2. You’d be partially right: the applications available in the AWS Marketplace are deployable in only a few clicks. But highly-available and scalable services will be difficult to build using Marketplace images. Here’s why.

Essential Ingredients of HA and Scalability on AWS

AWS makes it easy to run scalable, HA applications via several features. Not all applications use all of these features, but it would be very difficult to provide scalable and highly available service without using at least one of these:

  • Elastic Load Balancing
  • Auto Scaling
  • Elastic Block Storage volumes

ELB and AutoScaling both enable horizontal scalability: spreading load and controlling deployment size via first-class-citizen tools integrated into the AWS environment. They also enable availability by providing an automated way to recover from the failure of individual instances. [Scalability and availability often move in lock-step; improving one usually improves the other.] EBS volumes provide improved data availability: data can be retrieved off of dying instances – and the volumes are often used in RAID configurations to improve write performance.
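
To make the “first-class-citizen” point concrete, here is a minimal sketch of wiring these pieces together using the Auto Scaling command line tools of that era (the AMI ID, group name, and load balancer name are hypothetical, and exact flags may vary with the tool version):

# create a launch configuration describing the instances to run
as-create-launch-config my-launch-config --image-id ami-12345678 --instance-type m1.small
# create an Auto Scaling group that keeps 2 to 4 such instances running behind an existing ELB
as-create-auto-scaling-group my-scaling-group --launch-configuration my-launch-config \
	--availability-zones us-east-1a --min-size 2 --max-size 4 --load-balancers my-load-balancer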

AWS Marketplace Limitations

The AWS Marketplace has limitations that cripple two of the above features, making highly available and scalable services much more difficult to provide.

Marketplace AMI instances cannot be added to an ELB

Update 17 May 2012: The Product Manager for AWS Marketplace informed me that AWS Marketplace instances are now capable of being used with ELB. This limitation no longer exists.

Try it. You’ll get this error message:

 Error: InvalidInstance: ElasticLoadBalancing does not support the paid AMI or supported AMI of instance i-10abf677.
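
For reference, here is the kind of call that triggered it – a minimal sketch assuming the legacy ELB API tools are installed and configured (the load balancer name is hypothetical):

# attempt to register a running Marketplace AMI instance with an existing ELB
elb-register-instances-with-lb my-load-balancer --instances i-10abf677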

There is no mention of this limitation in the relevant ELB documentation.

This constraint severely limits horizontal scalability for Marketplace AMIs. Without ELB it’s difficult to spread web traffic across multiple identically-configured instances of these AMIs. The AWS Marketplace offers several categories of AMIs, including Application Stacks (RoR, LAMP, etc.) and Application Servers (JBoss, WebSphere, etc.), that are typically deployed behind an ELB – but that won’t work with these Marketplace AMIs.

Root EBS volumes of Marketplace AMI instances cannot be mounted on non-root devices

Because all Marketplace AMIs are EBS-backed, you might think that there is a quick path to recover data if the instance dies unexpectedly: simply attach the root EBS volume to another device on another instance and get the data from there. But don’t rely on that – it won’t work. Here is what happens when you try to mount the root EBS volume from an instance of a Marketplace AMI on another instance:

Failed to attach EBS volume 'New-Mongo-ROOT-VOLUME' (/dev/sdj) to 'New-Mongo' due to: OperationNotPermitted: 'vol-98c642f7' with Marketplace codes may not be attached as a secondary device.
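
For reference, here is a sketch of the sequence that produces an error like the one above, using the EC2 API tools (the second instance’s ID and the device name are hypothetical):

# detach the root volume from the stopped Marketplace-derived instance...
ec2-detach-volume vol-98c642f7
# ...then try to attach it to a different instance as a secondary (non-root) device
ec2-attach-volume vol-98c642f7 -i i-0a123456 -d /dev/sdj    # fails with OperationNotPermitted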

This limitation is described here in AWS documentation:

If a volume has an AWS Marketplace product code:

  • The volume can only be attached to the root device of a stopped instance.
  • You must be subscribed to the AWS Marketplace code that is on the volume.
  • The configuration (instance type, operating system) of the instance must support that specific AWS Marketplace code. For example, you cannot take a volume from a Windows instance and attach it to a Linux instance.
  • AWS Marketplace product codes will be copied from the volume to the instance.

Closing a Licensing Loophole

Why did AWS place these constraints on using Marketplace-derived EBS volumes? To help Sellers keep control of the code they place into their AMIs. Without the above limitations it would be simple for the purchaser of a Marketplace AMI to clone the root filesystem and create as many copies of that Marketplace-derived instance as desired, without necessarily being licensed to do so and without paying the premiums set by the Seller. The constraints close a licensing loophole.

AWS did a relatively thorough job of closing that hole. Here is a section of the current (25 April 2012) AWS overview of the EC2 EBS API and Command-Line Tools, with relevant Marketplace controls highlighted:

ec2-create-volume (CreateVolume) – Creates a new Amazon EBS volume using the specified size or creates a new volume based on a previously created snapshot. Any AWS Marketplace product codes from the snapshot are propagated to the volume. For an overview of the AWS Marketplace, go to https://aws.amazon.com/marketplace/help/200900000. For details on how to use the AWS Marketplace, see AWS Marketplace.

ec2-attach-volume (AttachVolume) – Attaches the specified volume to a specified instance, exposing the volume using the specified device name. A volume can be attached to only a single instance at any time. The volume and instance must be in the same Availability Zone. The instance must be in the running or stopped state.

Note: If a volume has an AWS Marketplace product code:

  • The volume can only be attached to the root device of a stopped instance.
  • You must be subscribed to the AWS Marketplace code that is on the volume.
  • The configuration (instance type, operating system) of the instance must support that specific AWS Marketplace code. For example, you cannot take a volume from a Windows instance and attach it to a Linux instance.
  • AWS Marketplace product codes will be copied from the volume to the instance.

For an overview of the AWS Marketplace, go to https://aws.amazon.com/marketplace/help/200900000. For details on how to use the AWS Marketplace, see AWS Marketplace.

ec2-detach-volume (DetachVolume) – Detaches the specified volume from the instance it’s attached to. This action does not delete the volume. The volume can be attached to another instance and will have the same data as when it was detached. If the root volume is detached from an instance with an AWS Marketplace product code, then the AWS Marketplace product codes from that volume will no longer be associated with the instance.

ec2-create-snapshot (CreateSnapshot) – Creates a snapshot of the volume you specify. After the snapshot is created, you can use it to create volumes that contain exactly the same data as the original volume. When a snapshot is created, any AWS Marketplace product codes from the volume will be propagated to the snapshot.

ec2-modify-snapshot-attribute (ModifySnapshotAttribute) – Modifies permissions for a snapshot (i.e., who can create volumes from the snapshot). You can specify one or more AWS accounts, or specify all to make the snapshot public.

Note: Snapshots with AWS Marketplace product codes cannot be made public.

The constraints above are meant to preserve the AWS Marketplace product code, the mechanism AWS uses to identify resources (AMIs, snapshots, volumes, and instances) that require Marketplace licensing integration. Note that not all AMIs in the AWS Marketplace have a product code: AMIs that do not require licensing control (such as Amazon Linux, and Ubuntu without support) do not have one – but the rest do.

A Hole

There remains a hole in this lockdown scheme. Any instance whose kernel allows booting from a volume based on its volume label can be manipulated into booting from a secondary EBS volume. This requires root privileges on the instance. I have successfully booted an instance of the MongoDB AMI in the AWS Marketplace from a secondary EBS volume created from the Amazon Linux AMI. Anyone exploiting this hole can circumvent the product code lockdown.
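
In outline, the maneuver looks like this – a minimal sketch assuming a non-Marketplace volume is already attached on /dev/sdj and that you have root on the instance (device names are illustrative):

# as root on the Marketplace-derived instance:
e2label /dev/sda1 old-uec-rootfs   # take the boot label away from the original root volume
e2label /dev/sdj uec-rootfs        # give it to the attached non-Marketplace volume
reboot                             # the instance comes back up on the non-Marketplace filesystem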

Plugging the Hole

Sellers want these licensing controls and lockdowns to remain effective. Here’s how to plug the hole:

  • Disable the root account.
  • Disable sudo.
  • Prevent user-data from being executed. On the Amazon Linux AMI and Ubuntu AMIs, user-data beginning with a hashbang is executed as root during the startup sequence.

Unfortunately these mitigations result in a crippled instance. Users won’t be able to mount EBS volumes – which requires root access – so data can’t be stored on EBS volumes for better recoverability.

Alternatively, you could develop your AWS Marketplace solutions as SaaS applications. For many potential Sellers this would be a long-term effort.

I’m still looking for good ways to enable scalability and HA of Marketplace AMIs. I welcome your suggestions.

Update 27 April 2012: Amazon Web Services PR has contacted me to say they are actively working on a fix for the ELB limitations, and are also working on removing the limitation related to mounting Marketplace-derived EBS volumes on secondary devices. I’ll update this article when this happens. In the meantime, AWS said that users who want to recover data from Marketplace-derived EBS volumes should reach out to AWS Support for help.

Update 17 May 2012: The Product Manager for AWS Marketplace informed me that AWS Marketplace instances are now capable of being used with ELB.

Ten^H^H^H Many Cloud App Design Patterns

Today I presented at the Enterprise Cloud Summit at the Interop conference. The talk was officially entitled Ten Cloud Design Patterns, but because my focus is on the application, I re-titled it. And I mention more than ten patterns, hence the final title Many Cloud App Design Patterns.

I explore a number of important issues for cloud applications. Application state and behavior must both be scaled. Availability is dependent on MTTR and MTTF – but one of them is much more important than the other in the cloud. Single Points of Failure are the nemesis of availability, and I walk through some typical patterns for reducing SPOFs in the cloud.
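
For reference, the standard relationship between these quantities (not spelled out in the summary above) is:

Availability = MTTF / (MTTF + MTTR)

Availability improves either by failing less often (raising MTTF) or by recovering faster (lowering MTTR).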

Hat tips to Jonas Bonér for his inspiring presentation Scalability, Availability, Stability Patterns and to George Reese for his blog The AWS Outage: The Cloud’s Shining Moment.

Here’s the presentation – your comments are welcome.

Roundup: CloudConnect 2011 Platforms and Ecosystems BOF

The need for cloud provider price transparency. What is a workload and how to move it. “Open”ness and what it means for a cloud service. Various libraries, APIs, and SLAs. These are some of the engaging discussions that developed at the Platforms and Ecosystems “Birds of a Feather”/”Unconference”, held on Tuesday evening March 8th during the CloudConnect 2011 conference. What about the BOF worked? What didn’t? What should be done differently in the future? Below are some observations gathered from early feedback; please leave your comments, too.

Roundup

In true unconference form, the discussions reflected what was on the mind of the audience. Some were more focused than others, and some were more contentious than others. Each turn of the wheel brought a new combination of experts, topics, themes, and participants.

Provider transparency was a hot subject, both for IaaS services and for PaaS services. When you consume a utility, such as water, you have a means to independently verify your usage: you can look at the water meter. Or, if you don’t trust the supplier’s meter, you can install your own meter on your main intake line. But with cloud services there is no way to measure many kinds of usage that you pay for – you must trust the provider to bill you correctly. How many Machine Hours did it take to process my SimpleDB query? How much CPU Usage did my Google App Engine request consume? Is that internal IP address I’m communicating with in my own availability zone (and therefore free to communicate with) or in a different zone (and therefore one that costs money)? Today, the user’s only option is to trust the provider. Furthermore, it would be useful if we had tools to help estimate the cost of a particular workload. We look forward to more transparency in the future.

As they rotated through the topics, two of the themes generated similar discussions: Workload Portability and Avoiding Vendor Lock-in. The themes are closely related, so this is not surprising. Lesson learned: next time use themes that are more orthogonal, to explore the ecosystem more thoroughly.

In total nine planned discussions took place over the 90 minutes. A few interesting breakaway conversations spun off as well, as people opted to explore other aspects separately from the main discussions. I think that’s great: it means we got people thinking and engaged, which was the goal.

Some points for improvement: The room was definitely too small and the acoustics lacking. We had a great turnout – over 130 people, despite competing with the OpenStack party – but the undersized room was very noisy and some of the conversations were difficult to follow. Next time: a bigger room. And more pizza: the pizza ran out during the first round of discussions.

Participants who joined after the BOF had kicked off told me they were confused about the format. It is not easy to join in the middle of this kind of format and know what’s going on. In fact, I spent most of the time orienting newcomers as they arrived. Lesson learned: next time show a slide explaining the format, and have it displayed prominently throughout the entire event for easy reference.

Overall the BOF was very successful: lots of smart people having interesting discussions in every corner of the room. Would you participate in another event of this type? Please leave a comment with your feedback.

Thanks

Many thanks to the moderators who conducted each discussion, and the experts who contributed their experience and ideas. These people are: Dan Koffler, Ian Rae, Steve Wylie, David Bernstein, Adrian Cole, Ryan Dunn, Bernard Golden, Alex Heneveld, Sam Johnston, David Kavanagh, Ben Kepes, Tony Lucas, David Pallman, Jason Read, Steve Riley, Andrew Shafer, Jeroen Tjepkema, and James Urquhart. Thanks also to Alistair Croll not only for chairing a great CloudConnect conference overall, but also for inspiring the format of this BOF.

And thanks to all the participants – we couldn’t have done it without you.

Recapture Unused EC2 Minutes

How much time is “wasted” in the paid-for but unused portion of the hour when you terminate an instance? How can you recapture this time – which represents compute power – and put it to good use? After all, you’ve paid for it already. This article presents a technique for repurposing an instance after you’re “done” with it, until the current billing hour is up. It’s inspired by a tweet from DEVOPS_BORAT:

We have new startup CloudJanitor. We recycle old or unuse cloud instance. Need only your cloud account login!

To clarify, we’re talking about per-hour pricing in public cloud IaaS services, where partial hours consumed are billed as whole hours. AWS EC2 is the most prominent example of a cloud sporting this pricing policy (search for “partial”). In this pricing policy, terminating (or stopping) an instance after it’s been running for 121 minutes results in a usage charge for three hours, “wasting” an extra 59 minutes that you have paid for but not used.

What’s Involved

You might think it’s easy to repurpose an instance: just Stop it (if it’s EBS-backed), change its root volume to a new one, and Start the instance again. Not so fast: Stopping an EC2 instance immediately ends the current billing hour before you can use it all, and when you Start the instance again a new billing hour begins – so we can’t Stop the instance. We also can’t Terminate the instance – that would also immediately curtail the billing hour and prevent us from utilizing it. Instead, we’re going to reboot the instance, which does not affect the billing.

We’ll need an EBS volume that has a bootable distro on it – let’s call this the “beneficiary” volume, because it’s going to benefit from the extra time on the clock. The beneficiary volume should have the same distro as the “normal” root volume has. [Actually, to be more precise, it need only have a distro that works with the same kernel that the instance is currently running.] I’ve tested this technique with Ubuntu 10.04 Lucid and 10.10 Maverick.

One of the great things about the Ubuntu images is how easy it is to play this root volume switcheroo: these distros boot from any volume that has the label uec-rootfs. To change the root volume we’ll change the volume labels, so a different volume is used as the root filesystem upon reboot.

It’s very important to disassociate the instance from all external hooks, such as Auto-Scaling Triggers and Elastic Load Balancers, before you repurpose it. Otherwise the beneficiary workload will influence those no-longer-relevant systems. However, this may not be possible if you use hooks that cannot be de-coupled from the instance, such as a CloudWatch Dimension of ImageId, InstanceId, or InstanceType.

The network I/O incurred during the recaptured time may be subject to additional charges. In EC2, only communications between instances in the same availability zone, or between EC2 and S3 in the same region, are free of charge.

You’ll need to make sure the beneficiary workload only accepts communications on ports that are open in the normal instance’s security groups. It’s not possible to add or remove security groups while an instance is running. You also wouldn’t want to be modifying the security groups dynamically because that will influence all instances in those security groups – and you may have other instances that are still performing their normal workload.

The really cool thing about this technique is that it can be used on both EBS-backed and instance-store instances. However, you’ll need to prepare separate beneficiary volumes (or snapshots) for 32-bit and 64-bit instances.

How to Do it

There are three stages in repurposing an instance:

  1. Preparing the beneficiary volume (or snapshot).
  2. Preparing the normal workload image.
  3. Actually repurposing the instance.

Stages 1 and 2 are only performed once. Stage 3 is performed for every instance you want to repurpose.

Preparing the beneficiary snapshot

First we’re going to prepare the beneficiary snapshot. Beginning with a pristine Ubuntu 10.10 Maverick EBS-based instance (at the time of publishing this article that’s ami-ccf405a5 for 32-bit instances), let’s create a clone of the root filesystem:

ec2-run-instances ami-ccf405a5 -k my-keypair -t m1.small -g default

ec2-describe-instances $instanceId #use the instanceId outputted from the previous command

Wait for the instance to be “running”. Once it is, identify the volumeId of the root volume – it will be indicated in the ec2-describe-instances output, the one attached to device /dev/sda1.

At this point you have a running Ubuntu 10.10 instance. For real-world usage you’ll want to customize this instance by installing the beneficiary workload and arranging for it to automatically start up on boot. (I recommend Folding@home as a worthy beneficiary project.)

Now we create the beneficiary snapshot:

ec2-create-snapshot $volumeId #use the volumeId from the previous command

And now we have the beneficiary snapshot.

Preparing the normal workload image

Begin with the same base AMI that you used for the beneficiary snapshot. Launch it and customize it to contain your normal workload stuff. You’ll also need to put in a custom script that will perform the repurposing. Here’s what that script will do:

  1. Determine how much time is left on the clock in the current billing hour. If it’s not enough time to prepare and to reboot into the beneficiary volume, just force ourselves to shut down.
  2. Disassociate any external hooks the instance might participate in: remove it from ELBs, force it to fail any Auto-Scaling health checks, and make sure it’s not handling “normal” workloads anymore.
  3. Attach the beneficiary volume to the instance.
  4. Change the volume labels so the beneficiary volume will become the root filesystem at the next reboot.
  5. Edit the startup scripts on the beneficiary volume to start a self-destruct timer.
  6. Reboot.

The following script performs steps 1, 4, 5, and 6, and clearly indicates where you should perform steps 2 and 3.

#! /bin/bash
# reboot into the attached EBS volume on the specified device, but terminate
# before this billing hour is complete.
# requires the first argument to be the device on which the EBS volume is attached

device=$1
safetyMarginMinutes=1 # set this to how long it takes to attach and reboot

# make sure we have at least "safetyMargin" minutes left this hour
t=/tmp/ec2.running.seconds.$$
# wget preserves the metadata service's Last-Modified header as the saved file's
# mtime, which reflects (approximately) when the instance started running; the
# file's timestamp therefore tells us how far into the billing hour we are
if wget -q -O $t http://169.254.169.254/latest/meta-data/local-ipv4 ; then
	# add 60 seconds artificially as a safety margin
	let runningSecs=$(( `date +%s` - `date -r $t +%s` ))+60
	rm -f $t
	let runningSecsThisHour=$runningSecs%3600
	let runningMinsThisHour=$runningSecsThisHour/60
	let leftMins=60-$runningMinsThisHour-$safetyMarginMinutes
	# start shutdown one minute earlier than actually required
	let shutdownDelayMins=$leftMins-1
	if [[ $shutdownDelayMins -lt 2 || $shutdownDelayMins -gt 59 ]]; then
		echo "Shutting down now."
		shutdown -h now
		exit 0
	fi
fi

## here is where you would disassociate this instance from ELBs,
# force it to fail AutoScaling health checks, and otherwise make sure
# it does not participate in "normal" activities.

## here is where you would attach the beneficiary volume to $device
# ec2-create-volume --snapshot snap-00000000 -z this_availability_zone
# dont forget to wait until the volume is "available"

# ec2-attach-volume . . . and don't forget to wait until the volume is "attached"

## (optionally) force the beneficiary volume to be deleted when this instance terminates:
# ec2-modify-instance-attribute --block-device-mapping '$device=::true' this_instance_id

## get the beneficiary volume ready to be rebooted into
# change the filesystem labels
e2label /dev/sda1 old-uec-rootfs
e2label $device uec-rootfs
# mount the beneficiary volume
mountPoint=/tmp/mountPoint.$$
mkdir -m 000 $mountPoint
mount $device $mountPoint
# install the self-destruct timer
sed -i -e "s/^exit 0$/shutdown -h +$shutdownDelayMins\nexit 0/" \
	$mountPoint/etc/rc.local
# neutralize the self-destruct for subsequent boots
sed -i -e "s#^exit 0#chmod -x /etc/rc.local\nexit 0#" $mountPoint/etc/rc.local
# get out
umount $mountPoint
rmdir $mountPoint

# do the deed
shutdown -r now
exit 0

Save this script into the instance you’re preparing for the normal workload (perhaps, as the root user, into /root/repurpose-instance.sh) and chmod it to 744.

Now, make your application detect when its normal workload is completed – this exact method will be very specific to your application. Add in a hook there to invoke this script as the root user, passing it the device on which the beneficiary volume will be attached. For example, the following command will cause the instance to repurpose itself to a volume attached on /dev/sdp:

sudo /root/repurpose-instance.sh /dev/sdp

Once all this is set up, use the usual EC2 AMI creation methods to create your normal workload image (either as an instance-store AMI or as an EBS-backed AMI).

Actually repurposing the instance

Now that everything is prepared, this is the easy part. Your normal workload image can be launched. When it is finished, the repurposing script will be invoked and the instance will be rebooted into the beneficiary volume. The repurposed instance will self-destruct before the billing hour is complete.

You can force this repurposing to happen by explicitly invoking the command at an SSH prompt on the instance:

sudo /root/repurpose-instance.sh /dev/sdp

Notice that you will be immediately kicked out of your SSH session – either the instance will reboot or the instance will terminate itself because there isn’t enough time left in the current billable hour. If it’s just a reboot (which happens when there is significant time left in the current billing hour) then be aware: the SSH host key will most likely be different on the repurposed instance than it was originally, and you may need to clean up your local ~/.ssh/known_hosts file, removing the entry for the instance, before you can SSH in again.