In an ongoing effort to improve its suite of web services, Amazon said today that it’s adding persistent storage to its EC2 compute service. Why is this important?
As the AWS blog explains, up until now you were able to attach 160 GB to 1.7 TB of storage to an EC2 “instance.” (An “instance” is essentially the server.) As long as the server was running, the storage remained available. Once you shut it down, the storage disappeared. “Applications with a need for persistent storage could store data in Amazon S3 or in Amazon SimpleDB, but they couldn’t readily access either one as if it was an actual file system,” the blog says.
Amazon CTO Werner Vogels, a keynote speaker at our Structure 08 conference, describes persistent storage this way on his blog: “It basically looks like an unformatted hard disk. Once you have the volume mounted for the first time you can format it with any file system you want or if you have advanced applications such as high-end database engines, you could use it directly.”
In other words, this new persistent storage essentially acts like an external hard drive “attached” to your “instance.”
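Vogels’s description maps to a familiar admin workflow. Here is a minimal sketch of what using such a volume might look like from inside an instance, assuming the volume shows up as a raw block device (the device name /dev/sdh and mount point are illustrative, not Amazon-specified):

```shell
# Format the raw volume with a file system of your choice
# (device name /dev/sdh is hypothetical; yours may differ).
mkfs -t ext3 /dev/sdh

# Mount it so it behaves like an ordinary local disk.
mkdir -p /mnt/data-volume
mount /dev/sdh /mnt/data-volume

# From here on, applications read and write /mnt/data-volume
# like a normal file system, and the data survives instance shutdown.
```

Alternatively, as Vogels notes, a database engine that manages raw storage itself could use the unformatted device directly, skipping the file-system step.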
An earlier version of this post said the volume could also be plugged into more than one “instance,” making it a shared drive; that was incorrect, and the error is regretted. We are a little intrigued by how Amazon is making this happen. Some experts believe it might be using iSCSI, but persistent iSCSI at such a large scale is expensive. (If anyone has a better explanation, please let me know.)
What it all means is that AWS/EC2 has gone up a few notches in terms of reliability. That reliability will go a long way toward the company offering service-level agreements to customers, especially large enterprises that want to use Amazon’s on-demand infrastructure. Alistair Croll argued in a post earlier this month that Amazon was going after larger corporations, and today’s announcement bolsters his theory.