
Track Changes to your Dynamic Cloud Services Automatically

Dynamic infrastructure can be a pain to accommodate in applications. How do you keep track of the set of web servers in your dynamically scaling web farm? How do your apps keep up with which server is currently running what service? How can applications be written so they don’t need to care if a service gets moved to a different machine? There are a number of techniques available, and I’m happy to share implementation code for one that I’ve found useful.

One thing common to all these techniques: they all allow the application code to refer to services by name instead of IP address. This makes sense because the whole point is not to care about the IP address running the service. Every one of these techniques offers a way to translate the name of the service into an IP address behind the scenes, without your application knowing about it. Where the techniques differ is in how they provide this indirection.

Note that there are four usage scenarios that we might want to support:

  1. Service inside the cloud, client inside the cloud
  2. Service inside the cloud, client outside the cloud
  3. Service outside the cloud, client inside the cloud
  4. Service outside the cloud, client outside the cloud

Let’s take a look at a few techniques to provide loose coupling between dynamically movable services and their IP addresses, and see how they can support these usage scenarios.

Dynamic DNS

Dynamic DNS is the classic way of handling dynamically assigned roles: DNS entries on a DNS server are updated via an API (usually HTTP/S) when a server claims a given role, pointing the entry to the IP address of the server claiming that role. For example, your DNS may have a production-master-db.example.com record. When the production deployment’s master database starts up, it can register itself with the DNS provider to claim the production-master-db.example.com DNS record, pointing that record to its own IP address. Any client of the database can use the host name production-master-db.example.com to refer to the master database, and as long as the machine that last claimed that DNS entry is still alive, it will work.

When running your service within EC2, Dynamic DNS servers running outside EC2 will see the source IP address for the Dynamic DNS registration request as the public IP address of the instance. So if your Dynamic DNS is hosted outside EC2 you can’t easily register the internal IP addresses. Often you want to register the internal IP address because from within the same EC2 region it costs less to use the private IP address than the public IP addresses. One way to use Dynamic DNS with private IPs is to build your own Dynamic DNS service within EC2 and set up all your application instances to use that DNS server for your domain’s DNS lookups. When instances register with that EC2-based DNS server, the Dynamic DNS service will detect the source of the registration request as being the internal IP address for the instance, and it will assign that internal IP address to the DNS record.

Another way to use Dynamic DNS with internal IP addresses is to use a DNS service such as DNSMadeEasy whose API allows you to specify the IP address of the server in the registration request. You can use the EC2 instance metadata to discover your instance’s internal IP address.
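Putting those two ideas together, a registration call might look like the following sketch. The provider endpoint `api.dns-provider.example` and its query parameters are hypothetical (substitute your Dynamic DNS provider’s real API); the metadata URL in the comment is the standard EC2 one:

```shell
# build_update_url composes the provider's update URL from a hostname and an
# IP address; the endpoint and parameter names here are hypothetical.
build_update_url() {
    echo "https://api.dns-provider.example/update?host=$1&ip=$2"
}

# On a real instance the IP would come from the metadata service:
#   privateIp=$(curl --silent http://169.254.169.254/latest/meta-data/local-ipv4)
privateIp="10.1.2.3"   # illustrative value

url=$(build_update_url production-master-db.example.com "$privateIp")
echo "$url"
# A real registration would then be:  curl --silent "$url"
```

Because the IP address is passed explicitly, the registration works even when the Dynamic DNS service sits outside EC2 and would otherwise only see your public address.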

Here’s how Dynamic DNS fares in each of the above usage scenarios:

Scenario 1: Service in the cloud, client inside the cloud: Only if you run your own DNS inside EC2 or use a special DNS service that supports specifying the internal IP address.
Scenario 2: Service in the cloud, client outside the cloud: Can use public Dynamic DNS providers.
Scenario 3: Service outside the cloud, client inside the cloud: Can use public Dynamic DNS providers.
Scenario 4: Service outside the cloud, client outside the cloud: Can use public Dynamic DNS providers.

Update window: Changes are available immediately to all DNS servers that respect the zero TTL on the Dynamic DNS server (guaranteed only for Scenario 1). DNS propagation delay penalty may still apply because not all DNS servers between the client and your Dynamic DNS service necessarily respect TTLs properly.

Pros: Easy to integrate into existing scripts, as long as public IP addresses are all you need.

Cons: Running your own DNS (to support private IP addresses) is not trivial, and introduces a single point of failure.

Bottom line: Dynamic DNS is useful when both the service and the clients are in the cloud; and for other usage scenarios if a DNS propagation delay is acceptable.

Elastic IP Addresses

In AWS you can have an Elastic IP address: an IP address that can be associated with any instance within a given region. It’s very useful when you want to move your service to a different instance (perhaps because the old one died?) without changing DNS and waiting for those changes to propagate across the internet to your clients. You can put code into the startup sequence of your instances that associates the desired Elastic IP address, making this approach very scriptable. For added flexibility you can write those scripts to accept configurable input (via settings in the user-data or some data stored in S3 or SimpleDB) that specifies which Elastic IP address to associate with the instance.
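As a sketch of such a startup hook: the user-data line format "elastic-ip=..." below is my own assumption (adapt it to whatever convention you use), and the final association step assumes the EC2 API tools and credentials are available on the instance:

```shell
# Read the desired Elastic IP from user-data and associate it with this
# instance at boot time.
# On a real instance these would come from the metadata service:
#   userData=$(curl --silent http://169.254.169.254/latest/user-data)
#   instanceId=$(curl --silent http://169.254.169.254/latest/meta-data/instance-id)
userData="elastic-ip=75.101.155.119"   # illustrative value
instanceId="i-b2e019da"                # illustrative value

# Extract the Elastic IP from the "elastic-ip=..." user-data line
elasticIp=$(echo "$userData" | sed -n 's/^elastic-ip=//p')
echo "$elasticIp"

# Then associate it (requires the EC2 API tools and credentials):
#   ec2-associate-address "$elasticIp" -i "$instanceId"
```

Hook this script into the instance’s startup sequence (e.g. rc.local on Linux) and each instance will claim its configured Elastic IP as it boots.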

A cool feature of Elastic IP addresses: if clients use the DNS name of the IP address (“ec2-1-2-3-4.compute-1.amazonaws.com”) instead of the numeric IP address you can have extra flexibility: clients within EC2 will get routed via the internal IP address to the service while clients outside EC2 will get routed via the public IP address. This seamlessly minimizes your bandwidth cost. To take advantage of this you can put a CNAME entry in your domain’s DNS records.
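For example, a BIND-style zone file entry along these lines (the service name and the Elastic IP’s DNS name are illustrative):

```
; point the service's friendly name at the Elastic IP's DNS name
db.example.com.   IN   CNAME   ec2-1-2-3-4.compute-1.amazonaws.com.
```

Clients resolve db.example.com to the Amazon-provided name, and Amazon’s DNS then answers with the internal or public IP address depending on where the query comes from.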

Summary of Elastic IP addresses:

Scenario 1: Service in the cloud, client inside the cloud: Trivial, client should use Elastic IP’s DNS name (or set up a CNAME).
Scenario 2: Service in the cloud, client outside the cloud: Trivial, client should use Elastic IP’s DNS name (or set up a CNAME).
Scenario 3: Service outside the cloud, client inside the cloud: Elastic IPs do not help here.
Scenario 4: Service outside the cloud, client outside the cloud: Elastic IPs do not help here.

Update window: Changes are available in under a minute.

Pros: Requires minimal setup, easy to script.

Cons: No support for running the service outside the cloud.

Bottom line: Elastic IPs are useful when the service is inside the cloud and an approximately one minute update window is acceptable.

Generating Hosts Files

Before the OS queries DNS for the IP address of a hostname it checks in the hosts file. If you control the OS of the client you can generate the hosts file with the entries you need. If you don’t control the OS of the client then this technique won’t help.
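For reference, a hosts file is simply lines consisting of an IP address followed by one or more names for it (the webserver1 entry is illustrative):

```
127.0.0.1   localhost   localhost.localdomain
10.1.2.3    webserver1
```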

There are three important ingredients to get this to work:

  1. A central repository that stores the current name-to-IP address mappings.
  2. A method to update the repository when mappings are updated.
  3. A method to regenerate the hosts file on each client, running on a regular schedule.

The central repository can be S3 or SimpleDB, or a database, or security group tags. If you’re concerned about storing your AWS access credentials on each client (and if these clients are web servers then they may not need your AWS credentials at all) then the database is a natural fit (and web servers probably already talk to the database anyway).

If your service is inside the cloud and you want to support clients both inside and outside the cloud you’ll need to maintain two separate repository tables – one containing the internal IP addresses of the services (for use generating the hosts file of clients inside the cloud) and the other containing the public IP addresses of the services (for use generating the hosts file of clients outside the cloud).

Summary of Generating Hosts Files:

Scenario 1: Service in the cloud, client inside the cloud: Only if you control the client’s OS, and register the service’s internal IP address.
Scenario 2: Service in the cloud, client outside the cloud: Only if you control the client’s OS, and register the service’s public IP address.
Scenario 3: Service outside the cloud, client inside the cloud: Only if you control the client’s OS.
Scenario 4: Service outside the cloud, client outside the cloud: Only if you control the client’s OS.

Update window: Controllable via the frequency with which you regenerate the hosts file. Can be as short as a few seconds.

Pros: Works on any client whose OS you control, whether inside or outside the cloud, and with services either inside or outside the cloud. And, assuming your application already uses a database, this technique adds no additional single points of failure.

Cons: Requires you to control the client’s OS.

Bottom line: Good for all scenarios where the client’s OS is under your control and you need refresh times of a few seconds.

A Closer Look at Generating Hosts Files

Here is an implementation of this technique using a database as the repository, using Java wrapped in a shell script to regenerate the hosts file, and using Java code to perform the updates. This implementation was inspired by the work of Edward M. Goldberg of myCloudWatcher.

Creating the Repository

Here is the command to create the necessary database (“Hosts”) and table (“hosts”):

mysql -h dbHostname -u dbUsername -pDBPassword -e \
'CREATE DATABASE IF NOT EXISTS Hosts;
USE Hosts;
CREATE TABLE `hosts` (
  `record` TEXT
);
INSERT INTO `hosts` VALUES ("127.0.0.1   localhost   localhost.localdomain");'

Notice that we pre-populate the repository with an entry for “localhost”. This is necessary because the process that updates the hosts file will completely overwrite the old one, and that’s where the localhost entry is supposed to live. Removing the localhost entry could wreak havoc on networking services – so we preserve it by ensuring a localhost entry is in the repository.

Updating the Repository

To claim a certain role (identified by a hostname – in this example “webserver1”), the server’s IP address is registered in the repository under that hostname. Here’s the one-liner:

mysql -h dbHostname -u dbUsername -pDBPassword -e \
'DELETE FROM Hosts.`hosts` WHERE record LIKE "% webserver1";
INSERT INTO Hosts.`hosts` (`record`) VALUES ("1.2.3.4   webserver1");'

The registration process can be performed on the client itself or by an outside agent. Make sure you substitute the real host name and the correct IP address.

On an EC2 instance you can get the private and public IP addresses of the instance via the instance metadata URLs. For example:

$ privateIp=$(curl --silent http://169.254.169.254/latest/meta-data/local-ipv4)
$ echo $privateIp
$ publicIp=$(curl --silent http://169.254.169.254/latest/meta-data/public-ipv4)
$ echo $publicIp

Regenerating the Hosts File

The final piece is recreating the hosts file based on the contents of the database table. Notice how the table records are already in the correct format for a hosts file. It would be simple to dump the output of the entire table to the hosts file:

mysql -h dbHostname -u dbUsername -pDBPassword --silent --column-names=0 -e \
'SELECT `record` FROM Hosts.`hosts`' | uniq > /etc/hosts  # This is simple and wrong

But it would also be wrong to do that! Every so often the database connection might fail and you’d be left with a hosts file that was completely borked – and that would prevent the client from properly resolving the hostnames of your services. It’s safer to only overwrite the hosts file if the SQL query actually returns results. Here’s some Java code that does that:

import java.io.PrintStream;
import java.sql.*;
import java.util.HashSet;

Connection conn = DriverManager.getConnection("jdbc:mysql://" + dbHostname + "/?user=" +
	dbUsername + "&password=" + dbPassword);
String outputFileName = "/etc/hosts";
Statement stmt = conn.createStatement();
ResultSet res = stmt.executeQuery("SELECT record FROM Hosts.hosts");
HashSet<String> uniqueMe = new HashSet<String>();
PrintStream out = System.out;
// only overwrite the hosts file if the query returned at least one record
if (res.isBeforeFirst()) {
	out = new PrintStream(outputFileName);
}
while (res.next()) {
	String record = res.getString(1);
	// skip duplicate records
	if (uniqueMe.add(record)) {
		out.println(record);
	}
}
out.close();
conn.close();
This code uses the MySQL Connector/J JDBC driver. It makes sure only to overwrite the hosts file if there were actual records returned from the database query.

Scheduling the Regeneration

Now that you have a script that regenerates that hosts file (you did wrap that Java program into a script, right?) you need to place that script on each client and schedule a cron job to run it regularly. Via cron you can run it as often as every minute if you want – it adds a negligible amount of load to the database server so feel free – but if you need more frequent updates you’ll need to write your own driver to call the regeneration script more frequently.
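For example, a cron entry along these lines (the script name and path are illustrative):

```
# /etc/cron.d/regenerate-hosts -- regenerate the hosts file every minute
* * * * * root /usr/local/bin/regenerate-hosts.sh
```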

If you find this technique helpful – or have any questions about it – I’d be happy to hear from you in the comments.

Update December 2010: Guy Rosen guest-authored this article on using AWS’s DNS service Route 53 to track instances.

  • Hank Lin July 5, 2010, 9:58 am

    Thanks, I’m reluctant to use private DNS servers, because I have no experience about it. I hope Amazon can provide DNS services to solve it.

    • shlomo July 5, 2010, 11:09 am

      @Hank Lin,

      Amazon have not said anything publicly about offering DNS services. You might find your business needs are met sooner and more easily by learning it yourself. Also, it’s not that hard – in my opinion hosting SMTP is more of a pain than DNS.

      • JohnB August 17, 2010, 4:54 am

        Yes, setting up a DNS instance is easy. But what about scaling or dealing with DOS attacks?

You have to register a DNS server with an IP right? ELB won’t work for DNS since you can’t give it a static IP, so I guess you could run your own ha-proxy in front of several mydns-ng boxes talking to a single multi-AZ RDS. Still, I think I’d rather pay a DNS provider that can more effectively deal with DOS attacks and get other benefits, like automatically pointing European users to your EU region if you have one. My vote is that it would be very helpful if Amazon added DNS to AWS.

        • shlomo August 17, 2010, 7:22 am


          I wholeheartedly agree – an AWS-managed DNS service would ease the pain.

          If you find yourself running your own DNS inside AWS then you can minimize the risk of a DDoS and limit the scaling problem. You can use a separate domain name (or use a completely fake one, such as myapp.internal) and host this domain’s DNS in the cloud but leave the real public domain’s DNS outside the cloud. In this manner, the DNS server in the cloud need not have any public-facing services so it is not exposed to a DDoS attack. And it’s very unlikely that your application will outgrow the capacity delivered by a single (hefty) instance, though this can also be mitigated by tuning the TTLs. This technique does require your application to be written specifically to use the shadow/internal DNS domain or the public domain name, as appropriate for the circumstance, and (as mentioned in the article under “Dynamic DNS”) that the instances hosting your application be configured to use your own DNS server for the shadow/internal domain.

          Of course this will only help if your service lives completely inside the cloud. If not, then you can outsource hosting both the public DNS and the shadow DNS to a service outside the cloud.

          • JohnB August 18, 2010, 12:48 am

            In my case I was thinking more of a public facing DNS for trying to host thousands of client domains, but I can see your point of a private AWS-only DNS and its usefulness. Thanks for your reply and numerous informative posts, they are all very helpful!

      • bmullan August 30, 2010, 1:09 pm

Shlomo, thanks for your article. I’ve looked at this same problem myself in the past and I also believe that unless AWS provides a dynamic DNS service so internal IPs can be easily supported somehow, it is just best to do it yourself (DIY).

I did find that DynDNS’s “free” service will let you setup 5 hostnames/ip addresses. Those hosts can be updated by an EC2 instance using something like ddclient. Of course querying those names from “outside” AWS would not provide a reachable address but for use between multiple servers inside AWS the name/ip mapping can make things easier. So doing this was like doubling my ElasticIP allotment on EC2.

        Your article highlights the issue well and provides some good paths to take.

  • ohad R July 6, 2010, 9:03 am

    very good essay. one question:
    “You can put code into the startup sequence of your instances that associates the desired Elastic IP address, making this approach very scriptable” – interesting… how u do that?

    • shlomo July 20, 2010, 4:32 am

      @ohad R,

The basic approach is the same regardless of whether you’re running a Windows or a Linux instance: you write some code that calls the EC2 API. This code can be a call to the EC2 command-line tools (which require Java on the instance) or a separate program that uses a library (such as boto for Python, or typica for Java) to directly call the EC2 API. The difference between Windows and Linux here is the way you register a hook to call this code at instance startup time. In Linux you register a new “rc.d” script or put something into /etc/rc.local, depending on your Linux distro. In Windows you create a new Scheduled Task.

      For real flexibility you need to configure what Elastic IP address is to be associated with the instance, so each instance can have the same code (calling the EC2 API to associate the Elastic IP) but a different Elastic IP. One way to do this is to pass the Elastic IP address in via the user-data and have the code on the instance retrieve the user-data and fetch the Elastic IP address from there.

  • Rodney Quillo July 8, 2010, 1:00 am

    Nice article regarding AWS DNS and IPs..:)

    >hosting SMTP is more of a pain than DNS — I Agree.
