The Business of IT

The ROI of cloud computing

I am often asked by CEOs and CIOs to explain the ROI of cloud computing. My answer goes something like this.

How well do you see? The main consideration in evaluating the ROI of cloud computing is your eyesight. What is the scope of your vision? What do you believe about the future? Ask yourself, as we create the ideal future for our customers, what kinds of changes will my organization need to accommodate? Think about how these changes will impact your use of computing – and indeed your entire operation. Consider the four major dimensions that change will include: technological, economic, organizational, and risk.

Cloud computing is a way of incorporating changes in technological, economic, organizational, and risk considerations into your use of computing. The value of cloud computing, when properly deployed, is in being able to support the changing technological, economic, organizational, and risk landscapes while keeping rock-steady focus on your business’s raison d’être: delivering great products and services to your customers. If you can see that future clearly, and appreciate the changes your organization will need to accommodate along the way, then you can make effective cost and value (ROI) decisions about cloud computing. You’ve got to see change in order to experience the sea change.

Do you need help painting your vision of the future and appreciating the changes necessary to get there? Contact me.

The Business of IT

Cloud pricing wrinkles and tea in China

As you’ve probably heard, Amazon Web Services reduced their on-demand cloud prices significantly last week. You’d think customers would be happy across the board, but that’s not the case. Here’s why, and what will happen as a result.

As discussed previously, AWS customers translate their capacity planning into Reserved Instance purchases, based on the relative savings these RIs provide over on-demand prices. But when on-demand prices are reduced without a corresponding reduction in the instance-hour price for RIs – as happened last week – the RI breakeven point shifts and upsets the optimal RI coverage calculus. AWS customers who purchased RIs before the price reduction can find themselves stuck with inventory that now costs more per hour, in amortized terms, than it would have cost had they not purchased the RI. I have several clients in this situation, and none are very happy about it.
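To make the shift concrete, here is a sketch of the breakeven arithmetic. All prices and the upfront fee below are hypothetical, chosen only to illustrate the effect of an on-demand cut:

```python
# Sketch of the RI breakeven shift. Prices here are hypothetical, chosen
# only to illustrate the effect of an on-demand price cut.

def ri_breakeven_hours(upfront, ri_hourly, od_hourly):
    """Hours of usage at which an RI's total cost equals the on-demand cost."""
    return upfront / (od_hourly - ri_hourly)

UPFRONT, RI_HOURLY = 500.0, 0.03   # hypothetical one-year RI terms
HOURS_IN_YEAR = 365 * 24           # 8760

before = ri_breakeven_hours(UPFRONT, RI_HOURLY, od_hourly=0.12)
after = ri_breakeven_hours(UPFRONT, RI_HOURLY, od_hourly=0.06)
print(f"before cut: breakeven at {before:.0f} h "
      f"({100 * before / HOURS_IN_YEAR:.0f}% of the year)")
print(f"after cut:  breakeven at {after:.0f} h "
      f"({100 * after / HOURS_IN_YEAR:.0f}% of the year)")
# A customer who planned around ~63% utilization now needs ~190% of the
# year's hours to break even: the already-purchased RI can never pay off.
```

With these illustrative numbers, a customer who bought the RI expecting to run it two-thirds of the year is suddenly underwater once on-demand drops – exactly the situation my clients are in.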

Tea in China

I can imagine the counterargument to this thinking, no doubt coming from the mouth of a cool-headed economist: If the customer was happy when she bought tea at a 10% discount, why should she be any less happy when the price is further reduced a day or a week later? She deemed the value of the tea to be worth the cost when she bought it, and that value has not changed. By rational reasoning, she should be equally satisfied today.

But people don’t think purely in economic terms – emotions play a part in their decisions as well. And the AWS customers who are feeling unloved by this price reduction are the very customers, I imagine, that AWS wants to keep most: These customers have made long-term commitments to AWS already. So, it won’t be long until we hear another price reduction announcement from AWS, specifically directed at this customer segment. Don’t be surprised to see AWS granting a proportional hourly price reduction – or an equivalent credit – for already-purchased reserved instances.

The Business of IT

Cloud price reductions and capacity planning

Last week both Google Cloud Platform and Amazon Web Services reduced their prices for cloud computing services significantly to comparable levels, and both now offer significant discounts for long-term usage. Yet, though the two cloud services may seem similar, their radically different long-term pricing models reveal just how different these cloud offerings really are.

Whose responsibility is capacity planning?

The core difference between GCP and AWS is in capacity planning: Whose responsibility is it? In AWS, the customer owns their own capacity planning. If the customer can accurately predict their needs for the long term, they can purchase Reserved Instances and save significantly as compared to the on-demand cost. In Google Compute Engine (GCE), by contrast, Google owns the capacity planning. GCE customers are granted a Sustained Use discount at the end of the month for resources that were active for a significant portion of the month. The GCE customer might track their expected vs. actual costs and be pleasantly surprised when the bill at the end of the month is lower than expected, but the GCE customer cannot translate their capacity planning prowess into reduced costs a priori.
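The contrast can be sketched in a few lines of code. All rates and discount tiers below are hypothetical stand-ins, not the providers' actual price schedules:

```python
# Sketch contrasting the two discount models. All rates and tiers below
# are hypothetical stand-ins, not the providers' actual price schedules.

def aws_monthly_cost(hours, od_rate=0.10, ri_hourly=0.04,
                     ri_upfront_per_month=20.0, use_ri=False):
    """AWS-style: the customer decides up front whether to buy an RI."""
    if use_ri:
        return ri_upfront_per_month + hours * ri_hourly
    return hours * od_rate

def gce_monthly_cost(hours, od_rate=0.10, month_hours=720):
    """GCE-style Sustained Use: the discount is computed retroactively,
    at month end, from how much of the month the instance ran."""
    usage_fraction = hours / month_hours
    if usage_fraction > 0.75:
        discount = 0.30
    elif usage_fraction > 0.50:
        discount = 0.20
    elif usage_fraction > 0.25:
        discount = 0.10
    else:
        discount = 0.0
    return hours * od_rate * (1 - discount)

# A full month of usage: the accurate AWS planner does best, but only
# because they committed in advance.
print(f"AWS on-demand: ${aws_monthly_cost(720):.2f}")               # $72.00
print(f"AWS with RI:   ${aws_monthly_cost(720, use_ri=True):.2f}")  # $48.80
print(f"GCE sustained: ${gce_monthly_cost(720):.2f}")               # $50.40
```

Note the structural difference: the AWS-style function requires the customer to set `use_ri` before the month begins, while the GCE-style function can only be evaluated after the month's usage is known. That is the capacity-planning responsibility shift in miniature.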

This raises the question: Are cloud consumers actually good at capacity planning? Sadly, no. Capacity planning in the pre-cloud age of the data center was dicey at best, with managers relying on overprovisioning to save their necks in the face of so much uncertainty. This overprovisioning is no longer a time-to-market-driven necessity in the cloud – but classic capacity planning is no more accurate in the cloud than it was in the data center. That’s why a host of new cloud financial management tools, such as Cloudyn, have sprung up in recent years: These tools help the cloud consumer predict usage and optimize their up-front commitments to maximize cost savings.

Using cloud financial management tools can help you reduce long-term costs, but only to the extent that the cloud provider rewards accurate capacity planning with lower prices – as does AWS. With Google Cloud, there’s no such pricing lever to pull.

But, make no mistake: Choose your cloud provider based on technical and business merits, not based solely on the long-term pricing model. Contact me if you need help.

The Business of IT

The four pillars of cloud computing success

Succeeding with cloud computing requires incorporating your organization’s unique economic, cultural, risk, and technological considerations.

Success with cloud computing means adopting more than just technology change. It requires addressing the elements that make your organization unique: how you work, the risks you’re prepared to take, and the market in which you sell. Even a tailor with all the right supplies – several yards of fabric, a sewing machine, and an iron – still needs to craft a garment that looks good, is of high quality, and is made efficiently. Here are the key ways to incorporate your organization’s economic, cultural, and risk factors into your adoption of cloud computing and accelerate your success.


The three crucial economic aspects to incorporate into your use of cloud computing are:

  1. Understand the business impact of the services that will run in the cloud. Only by understanding the business impact of each service will you be able to properly prioritize regular and emergency work on those services.
  2. Increase capacity with demand. Demand will change over time, so success with cloud requires that your services adjust their use of compute power accordingly. Measuring the demand is the first step toward this goal.
  3. Decrease capacity as demand wanes. Don’t keep around inventory that just sits gathering dust – get it off your balance sheet. Note that decreasing capacity with demand can seldom be done effectively with a self-hosted or private cloud.
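Points 2 and 3 above boil down to a simple control loop: measure demand, then adjust capacity toward a target utilization. A minimal sketch (the target utilization and bounds are hypothetical):

```python
# Minimal scaling sketch: adjust the instance count from measured demand.
# The target utilization and the min/max bounds are hypothetical.
import math

def desired_instances(current, avg_utilization, target=0.60,
                      min_instances=2, max_instances=100):
    """Scale so that average utilization moves back toward the target."""
    # Round before ceil to dodge floating-point noise at exact multiples.
    desired = math.ceil(round(current * avg_utilization / target, 6))
    return max(min_instances, min(max_instances, desired))

print(desired_instances(10, 0.90))  # demand grew: scale up to 15
print(desired_instances(10, 0.30))  # demand waned: scale down to 5
```

The same arithmetic drives both directions – which is why measuring demand is the prerequisite, and why a self-hosted environment, where the idle capacity stays on your balance sheet, can rarely realize the scale-down half.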


The following three aspects of your organization’s culture must be incorporated into your adoption of cloud computing:

  1. Act strategically. No business goal ever achieved itself – it requires concerted effort by people collaborating toward a shared goal. Give people the context they need to understand the shared goal.
  2. Support change. Cloud computing enables organizations to adjust their use of computing as requirements change. Encourage your organization to adjust and adapt to change.
  3. Make it easy to access all your organization’s data. When change comes along, the data will need to be juggled in new ways.


Three critical aspects of incorporating your organization’s changing risk profile into cloud computing adoption:

  1. Hire and partner with only the most trustworthy. As risks materialize, your business will rely on these trusted people to maintain normal operation.
  2. Regularly assess all assets and access points for risk. Look especially diligently into automating work processes so they utilize “blessed” configurations.
  3. Create a single source of truth for authentication and authorization. Whereas other data in your business need not be immediately consistent, authorization and authentication should be, in order to avoid the split-brain, he-said-she-said phenomenon.


While we’re at it, here are the three most important technical aspects to get right in your cloud adoption efforts:

  1. Treat all processes as elements in a single value stream: service delivery. With all component activities in the delivery process focused on this result, collaboration and integration between disparate teams is vastly improved.
  2. Utilize small, standardized, composable elements. Standardization and simplicity are the key to operating at scale.
  3. Build systems that expect failure and act reasonably despite it. When change comes and failure happens, your systems will be able to cope reasonably.

Succeeding at cloud computing means looking beyond the technology and incorporating your organization’s economic, cultural, and risk factors.

Contact us if you need help.

How I learned to stop worrying and love the cloud from Shlomo Swidler
Cloud Developer Tips

Fragment of heretofore unknown Tractate of Babylonian Talmud discovered

Ancient wisdom apparently has much to offer modern cloud application architects. This fragment was discovered in a shadowy basement in the Tel Aviv area of Israel.

Masechet DB Kamma: see a PDF of the fragment

This finding clearly shows that ancient cloud application architects in the great talmudic academies of Babylon struggled with the transition away from classic databases. At the time, apparently, a widely used solution was known as Urim veTumim (“oracle”). Yet this database was unsuited for reliable use in cloud applications, and the text explores the reasons behind that unsuitability.

Okay, here’s the real story: I created this for a client in 2011, and I was delighted to find it on my computer serendipitously today. It reflects the state of the art at the time. Translation into plain English:

1. Oracle RAC does not run on EC2

2. Achieving Oracle high availability on EC2 is a problem: there is no shared device, and relying on NFS is problematic.

3. The cloud frameworks (enStratus, etc.) do not currently support Oracle.

The Business of IT

Answers from OpenStack leaders

This week I attended the OpenStack Israel 2013 conference. The conference was sold out, with over 400 attendees in a standing-room-only auditorium. As I mentioned earlier, I moderated a panel entitled “If OpenStack is the answer, what was the question?” Here is the video. Below is a summary of the panel’s Q&A.

I want to thank the audience at the event for asking provocative questions, and the panelists:

  • Mark Collier, COO of the OpenStack Foundation
  • Chris Jackson, Manager of Big Cloud Solutions at Rackspace
  • Mac Devine, Director and CTO, Cloud Portfolio, Global Services at IBM
  • Mark McClain, Senior Developer at DreamHost
  • Sivan Barlizy, Product Line Manager, CloudBand Business Unit at Alcatel-Lucent

and, for joining the panel unprepared when I invited him on stage halfway through:

  • Florian Haas, CEO of hastexo.

Earlier I posted some questions I would ask the panelists. Time was short, so not all questions were covered.

Summary of panel discussions

What can be done to improve the operability of OpenStack?

Mark Collier said that The OpenStack Foundation actively listens to users and creates the roadmap based on their input, and operability has been improving: upgrading from Folsom to Grizzly is much easier than previous upgrades. The Foundation encouraged the writing of a book about how to operate OpenStack. We also need to remember that we have hindsight bias: it took many years for Linux to mature. OpenStack is maturing rapidly – but expectations are higher and keep rising. Although the project’s contributors today are mostly developers, not operators, we expect this to change over time as OpenStack is more widely deployed.

How does Rackspace keep its public OpenStack cloud consistently running the trunk version two weeks behind the project, without user downtime?

Chris Jackson distinguished between enabling OpenStack adopters to deploy easily and the procedures specific to Rackspace, such as maintaining compatibility with the legacy SliceHost infrastructure it still operates. And although the Rackspace tech team shares some of its experiences on its blog, the techniques Rackspace uses to accomplish this particular feat are not shared.

OpenStack’s mission is to encourage an ecosystem of interoperable cloud providers. Why does Rackspace want to encourage that?

Chris Jackson said Rackspace encourages people to be able to stand up OpenStack easily. Their goal is to open up all the ways to build and run a cloud, and Rackspace will differentiate based on service quality. However, Fanatical Support will not support just any cloud – only Rackspace’s OpenStack flavor – in order to deliver a consistent quality level.

Why don’t we see an ecosystem of public OpenStack clouds actually forming? Where are the billion dollar budgets backing the creation of these alternatives to AWS?

Rackspace views vendor choice as crucial to OpenStack success, so they sponsor choice by helping telcos build OpenStack clouds. Telcos have billion dollar budgets, and Rackspace has a program that helps them get to public cloud without reinventing the wheel.

Why aren’t we talking about Amazon anymore?

It’s because OpenStack has matured and we have real use cases – real OpenStack deployments – to point to. Rackspace claims that AWS and OpenStack may appear to be similar, but they serve very different functions: AWS gives you cookie-cutter infrastructure resources and tells you how to consume them, while OpenStack gives you the freedom to build for your own custom needs, using any vendor you want, without worrying about any single vendor’s roadmap. Rackspace doesn’t see AWS as a competitor. When Rackspace competes with AWS on price, it’s only as a necessary evil, to stay relevant in the public cloud market. Public cloud prices are based on the presumption of zero customer loyalty; hybrid cloud, however, supports a price premium.

Won’t OpenStack always be playing follow-the-leader with AWS?

Mac Devine pointed out that the roadmaps of these efforts are correlated because AWS blazed the trail. In the early days, OpenStack was playing follower. But from this point, from where we are today, it’s equally important to listen to the ecosystem and implement the features that are requested. Of note is that as AWS matured, they began releasing services above the IaaS layer that encourage lock-in. The OpenStack approach is to provide a variety of vendors that can be used to supply services, such as relational database as a service.

Q: What is OpenStack displacing?

VMware should be very concerned. There is no lift-and-shift from VMware to OpenStack. People want an open, no-license model, and providers like Rackspace work with customers to help them transition from VM consumption to cloud consumption. Mac Devine said that even among accounts where VMware is dominant, there is increased interest in, and commitment to, open, lock-in-free environments for all new projects. These clients are using OpenStack to extend the features of apps that run in VMware. This will increase the pressure on VMware to be more open. Mac believes that pressure is one reason why VMware spun off Pivotal Labs, with CloudFoundry inside it, into a separate unit.

How do we get enterprise apps into the cloud, then? What is the roadmap for these apps that were developed without cloud in mind?

There are several things missing in order to do that easily: bare-metal provisioning, single tenancy, location awareness, standards compliance, integration with management utilities, and network configuration are several examples. Alcatel-Lucent has been leveraging OpenStack as a platform that solves common problems, adding code around it to provide these features, but these additions are not contributed back to the project.

Mac Devine pointed out that legacy apps are still being maintained, but the teams maintaining and developing them are already on notice that they’ll need to change the way they design apps. Mac’s division at IBM put almost 600 developers on a project to align DB2 with OpenStack, and he is sure we’ll see traditional OLTP apps gradually adapted and reshaped for OpenStack.

The network is one of the things that differs greatly between legacy environments and OpenStack. Each app has its own unique network configuration – a snowflake network. Mac recalled that one client rearchitected their application but didn’t think about the network; when the new app running in OpenStack needed IP addresses, it still took them 10 days to provision those manually.
Florian Haas said it’s silly to think that we will rearchitect legacy apps for the cloud. In terms of getting enterprise apps into OpenStack easily, Florian said we need a check box that says “give me this VM, and make it HA.” “We are about 80% of the way there, which means we only have the other 80% left.”
Chris Jackson pointed out that The Foundation is moving toward the convergence of IaaS and PaaS, so control will be given less to each individual app in the future.

What about OpenStack itself – are the internals HA?

Florian said we’re there already. The message bus, relational DB, the API – these things are done. The hard part that is left is the “I want to make a machine HA.”

The Business of IT

Questions for OpenStack leaders

I’ll be moderating a panel of OpenStack leaders next week at the OpenStack Israel 2013 event in Tel Aviv. Last year’s OpenStack Israel event was excellent, and this year’s gathering looks like it will knock it out of the park. Over 350 people are already registered for the conference, and the two days of technical training following the conference are already full. If you’re in the area you shouldn’t miss this conference. And make sure to introduce yourself to me as well.

In preparation for next week’s event I’ve been formulating tough questions to ask the OpenStack leaders. So far I’ve received great suggestions from Nati Shalom, Randy Bias, and George Reese. Here are some of the questions I’ll put to the panel:

  • How committed are you to ensuring compatibility of OpenStack clouds across different providers? What about between versions?
  • Why is it so difficult to operate OpenStack in practice? How do you plan to address this?
  • Is OpenStack the code or is it the APIs?
  • Public cloud providers are naturally incentivized to differentiate by adding proprietary features to their services, thereby increasing lock-in. Doesn’t this pressure to differentiate conflict with OpenStack’s mission to create multiple, interoperable clouds? And, doesn’t this pressure towards fragmentation undermine the value of OpenStack as an alternative to the Amazon Web Services ecosystem? What do you plan to do about this?
  • What existing systems are being replaced by OpenStack implementations?
  • For the vendors among you (e.g. IBM, HP), how do you plan to make money from OpenStack?
  • Why is OpenStack Foundation membership so expensive? What’s wrong with following the model provided by the Apache Foundation? Is the high barrier to entry in the Foundation really necessary?

Do you have a question you’d like to hear the OpenStack leadership address? Let me know, and I’ll report back on their answers.

The Business of IT

Not workloads – impact.

Before you ask yourself “what can and can’t I do in the cloud,” stop to consider the larger picture. How will your customers know that you have adopted cloud computing? What resulting radical improvement will delight your customers? These questions help you focus on the impact you want to achieve with your initiative. The answers help you understand the facets that you’ll need to address – technology, skills, procedures, and behaviors – in order to achieve and measure that impact. Then you’ll be ready to talk nuts and bolts about vendors, tools, and cloud-appropriate workloads.

For example, my client, CIO of a media company, was enthusiastic about using cloud computing and wanted to get right down to details: what should he move into the cloud, how long would it take, who could help, and so on. After walking through the questions mentioned above we made several important discoveries:

  • External customers wanted a streamlined billing process.
  • Internal customers wanted to eliminate boring, error-prone manual work.
  • Neither group of customers cared what technology was used.
  • He didn’t know the effects of the current billing process on satisfaction, retention, and revenue.

It was clear that the project was more properly regarded as a customer retention initiative, not as a technology adoption effort. As a result, the CIO immediately knew what he had to do: measure the effects of billing on customer satisfaction, retention, and revenue; and he needed to recast the cloud computing adoption program as a program to substantially improve customer satisfaction. These guidelines provided the business context within which his staff was able to make intelligent, focused decisions about implementation details.

Next time you find yourself asking about what workloads to move to the cloud, think about the only thing that matters: what will delight customers?

The Business of IT

Three Critical Pitfalls to the Success of Cloud Offerings

Whether you operate IaaS, PaaS, or a SaaS service, watch out for these three pitfalls to the success of cloud offerings:

  1. Encouraging the wrong thing. Measuring the success of the offering by looking only at utilization metrics damages sales. One cloud service provider measured the success of its IaaS service – and compensated its sales force – based on virtual machine usage. Little surprise that, for all the years that this practice continued, the provider could not land strategically important accounts. As in any mature sales operation, your prospective cloud service clients should be evaluated based on several criteria, including:
    • strategic fit
    • urgency
    • reputation
    • visibility
    • cost of sale
    • short-term sales potential
    • long-term sales potential.

    Make sure you evaluate all these elements, and that you prioritize your sales and marketing efforts toward prospects with the right combinations of all these factors.

  2. Ignoring the relationship. On-demand and pay-as-you-go are not simply usage models; they are components of a new type of customer relationship based on transparency and immediacy. Make sure your customer relationships allow for:
    • Seeing usage and billing easily, and up-to-the-minute.
    • Setting custom notifications and enforceable consumption limits based on both usage and cost.
    • Getting quick notice of service degradations and their ongoing status.
    • Being reassured after service disruptions that you have identified the cause and taken steps to prevent a recurrence.

    Your service may have other ways to build a customer relationship based on transparency and immediacy. Identify and exploit them.

  3. Focusing on infrastructure reliability. Selling a cloud IaaS service that differentiates on a higher level of service reliability is not sustainable; instead, high-SLA workloads must be built using cloud-appropriate designs. Any IaaS provider who tries to compete against the commodity non-reliable infrastructure clouds by offering a higher SLA engages in an unwinnable battle: the cost structure will make it impossible to compete against commodity offerings. As computing services become more commoditized, previously established patterns of infrastructure usage (aka “best practices”) that called for highly reliable hardware and “N+1” architectures are architecturally moot. Following those patterns in the cloud will not increase overall workload reliability. Instead, workloads that use cloud infrastructure need to be designed for scale-out and rapid recovery from infrastructure failure, providing overall workload reliability via software. The overall cost of reengineering workloads for cloud-appropriate architecture will, in the end, be less than the cost of building and operating a cloud that delivers “classic” hardware-based HA.
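The arithmetic behind “reliability via software” is worth seeing. With hypothetical availability figures, a few redundant commodity instances quickly outperform a single premium “HA” machine:

```python
# The arithmetic of reliability via software. Per-instance availability
# figures below are illustrative, not any provider's actual SLA.

def redundant_availability(per_instance, n):
    """Probability that at least one of n independent instances is up."""
    return 1 - (1 - per_instance) ** n

COMMODITY = 0.99   # cheap instance: two nines
PREMIUM = 0.9999   # costly "HA" hardware: four nines

for n in (1, 2, 3):
    print(f"{n} commodity instance(s): {redundant_availability(COMMODITY, n):.6f}")
print(f"one premium instance:    {PREMIUM:.6f}")
# Three 99% instances (~99.9999%) already beat the four-nines machine,
# at commodity prices -- provided the workload is designed to fail over.
```

This is why the SLA battle is unwinnable: redundancy in software compounds exponentially, while hardening hardware raises costs linearly (or worse). The assumption of independent failures is the design obligation – that is what “cloud-appropriate architecture” buys you.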

Don’t make these mistakes. They could be killing your cloud business.


The Business of IT

Cloud adoption flu season

This coming year tens of thousands of medium and large companies will conduct initiatives to adopt cloud computing – and most of these will fail. The cause of these project failures will be unrealistic expectations. The inoculation is a hefty dose of reality. Get your vaccine here.

No one questions the fact that IT is undergoing a revolution and cloud is at its center. You need only glance at the article titles in any business and tech publication to see this. Cloud is a critical element in achieving unprecedented agility and supply-chain optimization.

However, will cloud alone get you these purported benefits? No, it won’t. Nor will any single vendor’s solution. Nor will the time and resource investment required to get there be small, as I’ve discussed previously. Shame on the vendors, analysts, and pundits who claim so or set this expectation in meetings with their prospective customers.

Reality. The flu vaccine for cloud-high cloud project expectations.

Let’s set some proper expectations. Real change requires serious investment, time, resources, and oversight. The larger the ship, the more difficult it is to adjust course. Here’s what to expect a successful cloud adoption program to look like:

  • It will be a multi-year effort.
  • It will require investing a significant percentage of your current IT budget just to properly plan. And significantly more than that to execute.
  • It will require adopting new technology and changing the way your people work.
  • It will demand the participation of your customers – internal and external.
  • It will demand executive-level leadership, involvement, support, and prioritization.
  • It will fail if you think otherwise.

The cloud adoption flu season is upon us. The best way to protect yourself is to understand the reality of cloud adoption initiatives – that’s the vaccine.