
OpenStack Israel Podcast, Episode 11

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I speak with Nelson Nahum, CEO and founder of Zadara Storage. We talk about:

  • What does Zadara Storage do and how do you use OpenStack?
  • How easy or difficult have you found it to keep up with the changing APIs across releases, especially in the early days of OpenStack? What should OpenStack users do to keep up?
  • What challenges have you had with using OpenStack?
  • What do you think of Red Hat’s acquisition of Inktank, creators of Ceph, the storage platform popular in OpenStack deployments?
  • What lessons have you learned from your experience operating several large scale OpenStack deployments?
  • How can we get an OpenStack Summit to happen in Israel?

Shlomo Swidler’s OpenStackIL Podcast Episode 11: Nelson Nahum of Zadara Storage

 Subscribe to this podcast series


OpenStack Israel Podcast, Episode 10

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I speak with Yaron Haviv, VP of Datacenter and Storage Solutions at Mellanox. We talk about:

  • Why is OpenStack interesting for Mellanox today?
  • What lessons are being learned by Mellanox’s customers using OpenStack?
  • What are you doing to leverage your lessons learned from customer and internal use?
  • What is the scale of OpenStack use and skills within Mellanox?
  • What are Mellanox customers doing with OpenStack, and how are you helping them?
  • Why is OpenStack important to Mellanox for the future?
  • When and why did you know that OpenStack was worthy of serious consideration?
  • What will you present about at the upcoming OpenStack Israel event?
  • What advice would you give someone just starting to get familiar with OpenStack?

Shlomo Swidler’s OpenStackIL Podcast Episode 10: Yaron Haviv of Mellanox

 Subscribe to this podcast series


OpenStack Israel Podcast, Episode 9

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I speak with Ken Pepple, CTO of Solinea and author of Deploying OpenStack. We talk about:

  • What does Solinea do?
  • How does OpenStack compare to other infrastructure cloud software on the market, such as CloudStack, vCloud, and Eucalyptus?
  • What types of customers are using CloudStack vs. OpenStack, and for what?
  • What are CloudStack and OpenStack deployments making money on?
  • What has attracted customers to OpenStack in the past and now with Icehouse?
  • What is the focus of the Icehouse release, and how will it help enterprises?
  • What kinds of Hadoop solutions are emerging that integrate with OpenStack?
  • What happens during the two to three months of a pilot infrastructure cloud project?
  • What is the difference between an OpenStack distro and an OpenStack services offering?
  • What are the two main challenges facing the Israel OpenStack community?

Shlomo Swidler’s OpenStackIL Podcast Episode 9: Ken Pepple of Solinea

 Subscribe to this podcast series


OpenStack Israel Podcast, Episode 8

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I speak with Florian Haas of Hastexo. We talk about:

  • OpenStack Icehouse and breaking out of IaaS
  • Trove Data Storage API
  • Typical challenges to building highly available OpenStack services


Shlomo Swidler’s OpenStackIL Podcast Episode 8: Florian Haas of Hastexo

 Subscribe to this podcast series


OpenStack Israel Podcast, Episode 7

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I speak with Nati Shalom, CTO of Gigaspaces. We talk about:

  • General update on the upcoming OpenStack Icehouse release
  • Recent Gigaspaces Cloudify 2.7 updates
  • Upcoming Cloudify 3.0 additions
  • Call for Papers for the upcoming OpenStack Israel event, June 2014


Shlomo Swidler’s OpenStackIL Podcast Episode 7: Nati Shalom of Gigaspaces

 Subscribe to this podcast series


Cloud pricing wrinkles and tea in China

As you’ve probably heard, Amazon Web Services reduced their on-demand cloud prices significantly last week. You’d think customers would be happy across the board, but that’s not the case. Here’s why, and what will happen as a result.

As discussed previously, AWS customers translate their capacity planning into Reserved Instance purchases, based on the relative savings these RIs provide over on-demand prices. But when on-demand prices are reduced without a corresponding reduction in the instance-hour price for RIs – as happened last week – the RI breakeven point shifts and upsets the optimal RI coverage calculus. AWS customers who purchased RIs before the price reduction can find themselves stuck with inventory that now costs more per hour, in amortized terms, than they would have paid had they not purchased the RI at all. I have several clients in this situation, and none are very happy about it.
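
To make the arithmetic concrete, here is a minimal sketch of the breakeven shift. Every figure below is hypothetical, chosen only to illustrate how an already-purchased RI can end up costing more per amortized hour than the newly reduced on-demand rate:

    # All prices are hypothetical, for illustration only -- not actual AWS rates.
    upfront        = 400.0   # one-time RI payment, in dollars
    ri_hourly      = 0.020   # discounted hourly rate under the RI
    hours_per_year = 8760
    old_on_demand  = 0.085   # on-demand rate when the RI was purchased
    new_on_demand  = 0.060   # on-demand rate after the price reduction

    # Amortized hourly cost of the RI over one year of continuous use.
    amortized = ri_hourly + upfront / hours_per_year   # ~= $0.0657/hour

    puts "RI amortized cost: $#{amortized.round(4)}/hour"
    puts "On-demand: $#{old_on_demand}/hour before the cut, $#{new_on_demand}/hour after"
    puts "The RI is now the more expensive option" if amortized > new_on_demand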

Tea in China

I can imagine the counterargument to this thinking, no doubt coming from the mouth of a cool-headed economist: If the customer was happy when she bought tea at a 10% discount, why should she be any less happy when the price is further reduced a day or a week later? She deemed the value of the tea to be worth the cost when she bought it, and that value has not changed. By rational reasoning, she should be equally satisfied today.

But people don’t think purely in economic terms – emotions play a part in their decisions as well. And the AWS customers who are feeling unloved by this price reduction are the very customers, I imagine, that AWS wants to keep most: These customers have made long-term commitments to AWS already. So, it won’t be long until we hear another price reduction announcement from AWS, specifically directed at this customer segment. Don’t be surprised to see AWS granting a proportional hourly price reduction – or an equivalent credit – for already-purchased reserved instances.


Cloud price reductions and capacity planning

Last week both Google Cloud Platform and Amazon Web Services reduced their prices for cloud computing services significantly to comparable levels, and both now offer significant discounts for long-term usage. Yet, though the two cloud services may seem similar, their radically different long-term pricing models reveal just how different these cloud offerings really are.

Whose responsibility is capacity planning?

The core difference between GCP and AWS is in capacity planning: Whose responsibility is it? In AWS, the customer owns their own capacity planning. If the customer can accurately predict their needs for the long term, they can purchase Reserved Instances and save significantly as compared to the on-demand cost. In GCE, by contrast, Google owns the capacity planning. GCE customers are granted a Sustained Use discount at the end of the month for resources that were active for a significant portion of the month. The GCE customer might track their expected vs. actual costs and be pleasantly surprised when their bill at the end of the month is lower than expected, but the GCE customer cannot a priori translate their capacity planning prowess into reduced costs.
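
As a deliberately simplified sketch of the two models (all rates and discount levels here are hypothetical, and the real Sustained Use discount is tiered), the contrast looks roughly like this:

    # Hypothetical rates and discounts, for illustration only.
    on_demand_hourly = 0.070
    hours_in_month   = 720
    usage_hours      = 720     # the instance ran for the entire month

    # AWS-style model (sketch): the customer commits in advance, in exchange
    # for a lower effective hourly rate -- accurate capacity planning pays off.
    ri_hourly      = 0.045     # assumed effective rate after the up-front commitment
    committed_cost = ri_hourly * usage_hours

    # GCE-style model (sketch): no commitment; a discount is applied after the
    # fact, based on how much of the month the instance actually ran.
    sustained_discount = (usage_hours >= 0.75 * hours_in_month) ? 0.30 : 0.0
    retroactive_cost   = on_demand_hourly * usage_hours * (1 - sustained_discount)

    puts "Committed up front:        $#{committed_cost.round(2)}"
    puts "Discounted after the fact: $#{retroactive_cost.round(2)}"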

This raises the question: Are cloud consumers actually good at capacity planning? Sadly, no. Capacity planning in the pre-cloud age of the data center was dicey at best, with managers relying on overprovisioning to save their necks in the face of so much uncertainty. This overprovisioning is no longer a time-to-market-driven necessity in the cloud – but classic capacity planning is no more accurate in the cloud than it was in the data center. That’s why a host of new cloud financial management tools, such as Cloudyn, have sprung up in recent years: These tools help the cloud consumer predict usage and optimize their up-front commitments to maximize cost savings.

Using cloud financial management tools can help you reduce long-term costs, but only to the extent that the cloud provider rewards accurate capacity planning with lower prices – as does AWS. With Google Cloud, there’s no such pricing lever to pull.

But, make no mistake: Choose your cloud provider based on technical and business merits, not based solely on the long-term pricing model. Contact me if you need help.


Driving business growth with your software organization

These are the three crucial ingredients to making sure your software organization drives overall business growth.

Keep the R in R&D

“R&D” stands for “Research and Development.” Growth requires both research – exploring the unknown to discover new insights – and development – building the product or service. Unfortunately, in many companies the R&D department is focused solely on the latter: the organization tends to be devoted to developing products, and scarcely ever explores new technologies. Make sure your R&D organization is accountable not only for delivering your product, but also for devising new ways to satisfy customers.

Measure, measure, measure

Do you know what feature is most useful to your best customers? Do you know what the cost of delivering your most popular feature is? How about the way those figures change over time? If you’re going to improve the business value that the software drives, you need to be able to answer these questions. Build a dashboard that answers these questions with the most up-to-date information, and it will help you set priorities for future work. You can’t get where you want to go unless you know where you are.

Own the product

Make every software development decision as if you own the product. Will customers care if you make this change? If not, don’t waste your time. If yes, do it. The bottom line is, make sure you prioritize your software development efforts according to actual customer impact.

Driving business growth should be a priority for every software leader. You’ll be able to do that if you encourage research, measure your results, and act as if you own the product.


OpenStack Israel Podcast, Episode 6

This podcast series explores topics of interest to OpenStack practitioners, focusing on the ecosystem in Israel.

In this episode I welcome back Samuel Bercovici of Radware, and we talk about how to contribute to OpenStack.


Shlomo Swidler’s OpenStackIL Podcast Episode 6: Welcoming Back Samuel Bercovici of Radware

 Subscribe to this podcast series


Using the new awscli with Chef and OpsWorks

We used to go through so much trouble to manipulate Amazon Web Services resources from the command line. There were separate AWS command-line tools for each service; each had to be downloaded and configured separately, and each had its own output format. Eric Hammond wrote in detail about this. With the new awscli tool by Mitch Garnaat (of Python boto fame) and his team at AWS, we have a single tool that does it all, uniformly, and is simple to configure. Thanks, Amazon!

But what if you need to use the awscli from within a Chef recipe? How do you install and configure the awscli with Chef and OpsWorks? I had this very same question on some recent projects. Here’s how to do this easily.

The awscli cookbook

I created the awscli cookbook to install and configure the awscli command line tools. Add this cookbook to your deployment and include the awscli::default recipe in your run list, and watch it work.
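
For example, a minimal way to pull the recipe into a node’s run list via a Chef role might look like the following sketch (the role name and description here are made up for illustration):

    # roles/awscli_example.rb -- hypothetical role that applies the cookbook's default recipe
    name 'awscli_example'
    description 'Installs and configures the awscli command line tools'
    run_list 'recipe[awscli::default]'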

This cookbook supports some common deployment scenarios:

  • Installing the awscli “plain vanilla”, during the Chef converge (execution) phase.
  • Installing the awscli during the Chef compile phase, so it can be called from within Chef recipes (see the sketch at the end of this section).
  • Configuring the awscli’s AWS access credentials.
  • Configuring multiple configuration profiles.

It does not need to be run on an AWS instance – you can use this cookbook to install the awscli anywhere that you can run Chef.

See the cookbook’s README for more details.
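
For background, the generic Chef idiom for forcing a resource to run during the compile phase (rather than waiting for the converge phase) looks like the sketch below. This is the general pattern, not necessarily how the cookbook implements it, and the package name is an assumption that varies by platform:

    # Generic compile-time pattern (not taken from the awscli cookbook):
    # declare the resource with no action, then run it immediately while the
    # recipe is still being compiled, so later recipe code can shell out to `aws`.
    package 'awscli' do
      action :nothing
    end.run_action(:install)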

Using awscli with IAM Roles

It’s not obvious from the awscli documentation, but when an instance has an associated IAM Role, the awscli will automatically read its credentials from the instance metadata. This will often be the case when you are using OpsWorks to launch your instances. In these circumstances, you don’t need to configure any awscli access credentials. But make sure you use IAM to grant the IAM Role permissions for the AWS API calls you’ll be making with the awscli.

Why use the awscli from within Chef?

Chef recipes are written in Ruby, and it’s easy to parse and manipulate JSON in Ruby. The awscli outputs its responses in JSON format – so it’s easy to parse those responses in your Ruby code. The two make a very handy combination. For example, here is how to wait for a newly created EBS volume to be available (so you can attach it to an instance without an error):

    require 'json'

    volume_id = "vol-12345678"
    region = "us-east-1"

    # Build the full path to the aws executable; its install location differs by platform.
    command = ""
    case node[:platform]
    when 'debian','ubuntu'
      command << "/usr/local/bin/"
    when 'redhat','centos','fedora','amazon','scientific'
      command << "/usr/bin/"
    end
    command << "aws --region #{region} ec2 describe-volumes --volume-ids #{volume_id} --output json"

    Chef::Log.info("Waiting for volume #{volume_id} to be available")
    loop do
      # Run the awscli command, capturing stderr along with stdout.
      shell = Mixlib::ShellOut.new("#{command} 2>&1")
      shell.run_command
      if shell.error?
        Chef::Log.fatal(shell.stdout)
        raise "#{command} failed: " + shell.stdout
      end
      # Parse the JSON response and extract the volume's state.
      jdoc = JSON.parse(shell.stdout)
      vol_state = jdoc["Volumes"].first["State"]
      Chef::Log.debug("#{volume_id} is #{vol_state}")
      if vol_state == "available"
        break
      else
        Chef::Log.debug("Waiting 5 more seconds for volume #{volume_id} to be ready...")
        sleep 5
      end
    end

The loop body shows how easy it is to access the awscli output within your Ruby code (in your Chef recipe).