S3 has an “eventual consistency” model, which places certain limitations on how S3 can be used. Today, Amazon released an improvement called “read-after-write consistency” in the EU and US-West regions (it’s there, hidden at the bottom of the blog post). Here’s an explanation of what this is, and why it’s cool.
What is Eventual Consistency?
Consistency is a key concept in data storage: it describes when changes committed to a system are visible to all participants. Classic transactional databases offer various levels of consistency, but the gold standard is that once a transaction commits, the changes are guaranteed to be visible to all participants. A change committed at millisecond 1 is guaranteed to be available to all views of the system – all queries – immediately thereafter.
Eventual consistency relaxes the rules a bit, allowing a time lag between the point the data is committed to storage and the point where it becomes visible to all readers. A change committed at millisecond 1 might be visible to everyone immediately. It might not be visible to everyone until millisecond 500, or even until millisecond 1000. But eventually it will be visible to all clients. Eventual consistency is a key engineering tradeoff employed in building distributed systems.
One issue with eventual consistency is that there’s no theoretical limit to how long you need to wait until all clients see the committed data. In practice, a compensating delay must be built in (either explicitly or implicitly) to give the changes time to become visible to all clients.
Practically speaking, I’ve observed that changes committed to S3 become visible to all clients in less than 2 seconds. If your distributed system reads data shortly after it was written to eventually consistent storage (such as S3), you’ll experience higher latency as a result of these compensating delays.
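To make the compensating delay concrete, here’s a minimal sketch of a read-retry loop, assuming the Python boto3 client and a hypothetical bucket and key; the 2-second bound mirrors the observation above and is not any kind of guarantee from S3.

```python
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def read_when_visible(bucket, key, max_wait=2.0, interval=0.1):
    """Fetch an object that may not yet be visible under eventual consistency."""
    deadline = time.time() + max_wait
    while True:
        try:
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except ClientError as err:
            # A NoSuchKey error here may simply mean the freshly written
            # object hasn't become visible to this reader yet.
            if err.response["Error"]["Code"] != "NoSuchKey":
                raise
            if time.time() >= deadline:
                raise
            time.sleep(interval)
```

Every consumer that needs the data right after it’s written pays this polling latency – which is exactly the overhead read-after-write consistency removes.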
What is Read-After-Write Consistency?
Read-after-write consistency tightens things up a bit, guaranteeing immediate visibility of new data to all clients. With read-after-write consistency, a newly created object, file, or table row will be immediately visible, without any delay.
Note that read-after-write is not complete consistency: there’s also read-after-update and read-after-delete. Read-after-update consistency would guarantee that edits to an existing object, file, or table row are immediately visible to all clients; that’s not the same thing as read-after-write, which covers only new data. Read-after-delete consistency would guarantee that reading a deleted object, file, or table row fails immediately for all clients; that, too, is different from read-after-write, which only relates to the creation of data.
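Here’s a sketch of the distinction in practice, again assuming boto3 and a hypothetical bucket in a region with read-after-write consistency:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket-eu-west"   # hypothetical bucket name

# Read-after-write: a brand-new key is guaranteed to be visible immediately.
s3.put_object(Bucket=bucket, Key="new-report.csv", Body=b"v1")
assert s3.get_object(Bucket=bucket, Key="new-report.csv")["Body"].read() == b"v1"

# No read-after-update guarantee: overwriting an existing key may still
# return the old bytes for a while.
s3.put_object(Bucket=bucket, Key="new-report.csv", Body=b"v2")
body = s3.get_object(Bucket=bucket, Key="new-report.csv")["Body"].read()
# body could legitimately be b"v1" or b"v2" at this point.

# No read-after-delete guarantee: a deleted key may remain readable briefly.
s3.delete_object(Bucket=bucket, Key="new-report.csv")
# A GET here might still succeed rather than fail with NoSuchKey.
```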
Why is Read-After-Write Consistency Useful?
Read-after-write consistency allows you to build distributed systems with less latency. As touched on above, without read-after-write consistency you’ll need to incorporate some kind of delay to ensure that the data you just wrote will be visible to the other parts of your system.
But no longer. If you use S3 in the US-west or EU regions (or other regions supporting read-after-write consistency), your systems need not wait for the data to become available.
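If you’re creating a new bucket, you choose the region at creation time. A minimal sketch, assuming boto3 and a hypothetical bucket name; omitting the location constraint places the bucket in US Standard instead:

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Create the bucket in the EU region; without CreateBucketConfiguration
# it would land in US Standard (us-east-1).
s3.create_bucket(
    Bucket="example-consistent-bucket",   # hypothetical, must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# New objects written to this bucket can be read back immediately,
# with no compensating delay or retry loop.
s3.put_object(Bucket="example-consistent-bucket", Key="data.json", Body=b"{}")
print(s3.get_object(Bucket="example-consistent-bucket", Key="data.json")["Body"].read())
```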
Update March 2011: As more S3 regions come online they seem to be getting the same features as US-West. So far, the AP-Singapore and AP-Tokyo regions also support read-after-write consistency. US Standard does not.
Update June 2012: As pointed out in the comments below, more S3 regions now support read-after-write consistency: US-West Oregon, SA-São Paulo, and AP-Tokyo. It’s not easy keeping up with the pace of AWS’s updates!
Why Only in the AWS US-West and EU Regions, and Not in US Standard?
Read-after-write consistency for AWS S3 was initially available only in the US-West and EU regions, not in US Standard. I asked Jeff Barr of AWS blogging fame why, and his answer makes a lot of sense:
This is a feature for EU and US-West. US Standard is bi-coastal and doesn’t have read-after-write consistency.
Aha! I had forgotten about the way Amazon defines its S3 regions. US Standard has servers on both the east and west coasts (remember, this is S3, not EC2) in the same logical “region”. The engineering challenges of providing read-after-write consistency are greatly magnified as the geographical area grows. The fundamental physical limitation is the speed of light, which takes at least 16 milliseconds to cross the US coast-to-coast (and that’s in a vacuum – over the internet it takes at least four times as long, due to the latency introduced by routers and switches along the way).
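As a quick back-of-the-envelope check on that figure – the coast-to-coast distance below is an assumed round number, not an exact measurement:

```python
# Rough check of the coast-to-coast light-delay figure.
distance_km = 4800          # assumed round-number coast-to-coast distance
c_km_per_s = 299792         # speed of light in a vacuum, km/s

one_way_ms = distance_km / c_km_per_s * 1000
print(f"one-way delay at light speed: {one_way_ms:.1f} ms")   # ~16 ms
# Over the internet, routers and switches push the real figure several times higher.
```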
If you use S3 and want to take advantage of the read-after-write consistency, make sure you understand the cost implications: some other regions have higher storage and bandwidth costs than the US-Standard region.
Next Up: SQS Improvements?
Some vague theorizing:
It’s been suggested that AWS Simple Queue Service leverages S3 under the hood. The improved S3 consistency model could be used to provide better consistency for SQS as well. Is this in the works? Jeff Barr, any comment? 🙂