10 thoughts on “So what happens to storage….”

  1. I don’t know – all of you who think businesses, large and small, and individuals are going to move most or all of their storage from their own cloud to somebody else’s cloud (which the media thinks is THE cloud) ignore the fact that not every company has access to true high-speed internet service. If migrating our networks to 1 Gbps was so important, I don’t see how anyone can now say that connecting to the Internet at 5, 25, or even 100 Mbps is acceptable. And even at the latter speed, an enterprise’s performance will be far worse than with local storage, because that 100 Mbps is shared among all users, while a 1 Gbps LAN is switched (the back-of-the-envelope numbers below make the gap concrete).

    The companies selling “opex” want you to believe they can do it more efficiently, but when you manage your own storage, you’re not paying for another company’s profits. Your data is the top priority, and you have control over who is hired and who gets access to it.

    Turning data service from capex to opex may make sense on a case-by-case basis, but not across the board for most companies. The savings are overstated, and the risks are often glossed over.
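
    To put numbers on it, here’s a rough back-of-the-envelope calculation (a sketch with illustrative figures, not measurements):

    ```python
    # Hypothetical scenario: 1 TB to move, 50 users sharing the uplink.
    TB = 10**12  # bytes

    def transfer_hours(size_bytes, link_mbps, users_sharing=1):
        """Hours to move size_bytes over a link_mbps link, split
        evenly among users_sharing concurrent users."""
        effective_bytes_per_sec = link_mbps * 10**6 / users_sharing / 8
        return size_bytes / effective_bytes_per_sec / 3600

    # 1 TB over a 100 Mbps internet uplink shared by 50 users,
    # vs. the same 1 TB over a switched 1 Gbps LAN port.
    print(f"shared 100 Mbps: {transfer_hours(TB, 100, 50):,.0f} hours")
    print(f"switched 1 Gbps: {transfer_hours(TB, 1000):,.1f} hours")
    ```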

    1. Yep, it’s the same outsourcing bandwagon that’s been rolling for years. It used to be that the company that had it together the most – the one most efficiently integrated across all functions – was successful. Now “core competencies” are so narrowly defined that everything is a commodity and nobody has a discriminating competitive advantage. And the process has engendered a raft of overhead intermediate managers and salespeople bleeding off any cost advantage without the companies realizing it. Wait until some of these self-serving geniuses outsource all the “cloud” to some lowest bidder in inland China, with knock-off consumer drives that constantly go down and everything funneled through a pipe that makes AOL dial-up look good.

    2. I’m Marc Farley. I’ve been in the storage industry for a couple of decades and now work for StorSimple. There is no question that the performance of cloud storage will not equal local storage; someday that could change, but probably not soon. The biggest economic advantages of cloud storage come from using cloud capacity to offload data and workflows that are inefficient on local storage and under legacy processes.
      For instance, the year-to-year growth in storage requirements that costs businesses so much of their annual IT budget ends up being used to store “cool” or “cold” data that has aged and is no longer used. It’s ironic that companies spend so much time managing storage resources for data assets that are used less and less, but that’s the reality of a normal data lifecycle. The problem for IT is that there is no way to know which data may be needed in the future, and there are many reasons to keep data around, including compliance requirements. Putting cool/cold data in a cloud storage tier that can be immediately and easily accessed makes a lot of sense, especially if deduplication and compression are applied first to cut the cloud capacity and bandwidth consumed, and with them the cost (see the tiering sketch below). Using cloud as a tier this way improves both the RTO (recovery time objective) and RPO (recovery point objective) for cool/cold data, and that makes almost everybody in the organization happier.
      Of course, when you mention RTO and RPO, you think about disaster protection. DR is a royal pain for almost everybody because of the cost and time involved – and the ambiguity of its effectiveness. Automatically moving recovery data to the cloud, as an inexpensive off-site facility, is very appealing. Recovery from the cloud is location-independent, which means it can occur anywhere a customer has an internet connection, but it can also occur in the cloud itself in conjunction with cloud compute. To your point, the bandwidth to the cloud may not be that great, so the ability to restore quickly is critical. That mandates snapshot and data-reduction technologies as well as new techniques such as thin restores, which prioritize recovery data based on the actual need for it by users and applications (see the thin-restore sketch below). That way, days are not wasted restoring high volumes of cool/cold data that aren’t needed to return to normal operations.
      Hope that helps explain the dynamics – it’s not just about CAPEX versus OPEX, but also about making storage and storage management more efficient.
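
      As a rough sketch of the age-based tiering idea (purely illustrative: `cloud_put` is a hypothetical stand-in for any object-store client call, and gzip stands in for whatever data reduction a real product uses):

      ```python
      import gzip
      import time
      from pathlib import Path

      COOL_AFTER = 90 * 24 * 3600  # untouched for 90 days => "cool"

      def tier_out(local_dir, cloud_put):
          """Compress files that have gone cool and hand them to
          cloud_put(name, data) -- a stand-in for an object-store call."""
          now = time.time()
          for path in Path(local_dir).rglob("*"):
              if path.is_file() and now - path.stat().st_atime > COOL_AFTER:
                  cloud_put(path.name, gzip.compress(path.read_bytes()))
                  path.unlink()  # free the expensive local tier
      ```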
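
      And a sketch of the thin-restore idea (again illustrative only; `cloud_get` is a hypothetical download call, and the point is simply that recovery order follows demand rather than raw volume):

      ```python
      import heapq

      def thin_restore(objects, demand, cloud_get):
          """Yield (name, data) in order of observed demand, highest
          first, so active data returns before cool/cold bulk.
          demand maps object name -> recent access count;
          cloud_get is a stand-in for an object-store download."""
          queue = [(-demand.get(name, 0), name) for name in objects]
          heapq.heapify(queue)
          while queue:
              _, name = heapq.heappop(queue)
              yield name, cloud_get(name)  # most-needed data first
      ```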

  2. I’d love to see some of that “independent verification of the trend that we’ve seen in the last 3 months.”

  3. The question isn’t so much cloud vs. local storage. It’s just recognizing that there are different tiers of storage.

    There are use cases where very fast, low-latency local storage is the best solution, and many options are available today to address this area of the market. What’s missing from an enterprise perspective is something that looks and feels like a cloud storage product but is available in a local data center – the tier of storage that makes public cloud storage so attractive.

    The reason the public offerings have been so popular is that they’re so easy to use: there is a simple HTTP API that developers can use to build applications, IT only pays for what it needs, there is a rich ecosystem of tools and client libraries, and performance is good enough. What’s needed shouldn’t be thought of in terms of cloud vs. local, but rather as a redefinition of what this tier of storage needs to look like going forward.

    Storage is also quite different from other services. Whereas other services benefit from elasticity – use them when you need them, turn them off when you don’t – storage usage often exhibits a ratcheting effect. Storage grows steadily and doesn’t burst up and down, which diminishes the value of storage service-ification.

    This is actually an opportunity for OpenStack Swift, which is the same code that runs the public storage clouds of Rackspace, HP, SoftLayer, Internap, and others. OpenStack Swift can be deployed privately to provide that tier. I believe so strongly in this space that I’ve formed a company to do just that: http://swiftstack.com. At SwiftStack we provide software to enable private OpenStack Swift deployments, so an IT group can have the benefits of storage that looks and feels like a public cloud (and is even compatible with existing tools) but runs in its own data center, giving it control and cost savings. A minimal example of the client API follows.
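
    For a sense of how simple the API surface is, here is roughly what talking to a Swift cluster looks like with the python-swiftclient library (the endpoint and credentials below are placeholders):

    ```python
    from swiftclient.client import Connection  # pip install python-swiftclient

    # Placeholder auth endpoint and credentials for a private cluster.
    conn = Connection(authurl="http://swift.example.com/auth/v1.0",
                      user="account:user", key="secret")

    conn.put_container("backups")  # create a bucket-like container
    conn.put_object("backups", "db.dump", contents=b"...")  # HTTP PUT
    headers, body = conn.get_object("backups", "db.dump")   # HTTP GET
    ```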

  4. When the CIA announced to its software vendors that it would be switching its licensing terms to pay-as-you-go, it made IT providers in the government sector take notice, and I wouldn’t be surprised if other, less proactive agencies follow in short order.
