If there are any universal truths to managing enterprise data, they are that data is always growing and that not all data is created equal. So it shouldn't come as a surprise that we at ESG continue to find data growth a top concern among IT professionals, regardless of whether a study investigates storage, data protection or overall IT priorities.
Fortunately, the IT industry has seen a wealth of innovations to deal with the challenge of data growth and the complexity of efficiently supporting diverse data types. Flash storage emerged to address the requirements of high-performance workloads, for example, while the development of scale-out NAS and object storage tackled the needs of lower-performing, yet demanding, higher-capacity data sets.
As a result, storage is bifurcated into separate tiers, which helps reduce the total cost of ownership (TCO) of IT. ESG research shows that nearly half of enterprises surveyed that employ solid-state storage report a reduction in TCO, while the top reason for deploying object storage is to lower capital storage expenditures.
Bottom line: Leveraging different storage tiers works and saves money.
Not all storage tiers created equal
Extending the concept of storage tiers a little further -- outside the data center, in fact -- some enterprises have turned to public cloud storage providers as a tier for secondary storage or archive data. Often, for these environments, the thinking goes, "We have all this stagnant data, let's move it to the cloud and get it out of our data center." This approach sounds logical and makes sense theoretically, but it can lead to the most expensive storage tier in your ecosystem if not done carefully.
The fundamental assumption that enables the tiered-storage model to work is the belief that you can accurately discern which data set belongs in which storage tier. For many workloads, however, this may not be the case, for the following reasons:
- The data access pattern doesn't always easily align to a specific storage tier.
- Data access rates often do not stay constant over time.
- Organizations frequently lack accurate data on the true performance demands of their workloads.
For an on-premises storage infrastructure, the penalty for misdiagnosing data set performance characteristics is usually minimal. The cost and complexity can skyrocket when dealing with off-premises cloud resources, though. That's because the physical separation introduced by public cloud storage providers results in a significant latency penalty when moving data back and forth between local and off-premises resources. As a result, the cost of a storage tier can increase dramatically when a data set thought to require low access rates is migrated to the cloud only to discover after the fact it's serving far more transactions than anticipated.
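The arithmetic behind that cost blowup is easy to sketch. The following is an illustrative model only; the per-GB, per-request and egress prices are hypothetical placeholders, not any provider's actual rates. It shows how a data set's capacity charge can stay flat while access-driven charges multiply when the workload turns out to be far hotter than forecast.

```python
# Illustrative cloud storage cost model. All prices below are
# hypothetical placeholders, not any provider's actual rates.

def monthly_cost(stored_gb, reads_per_month, gb_egressed,
                 price_per_gb=0.01,          # capacity charge ($/GB-month)
                 price_per_1k_reads=0.0004,  # request charge ($/1,000 reads)
                 price_per_gb_egress=0.09):  # data-transfer-out charge ($/GB)
    capacity = stored_gb * price_per_gb
    requests = (reads_per_month / 1000) * price_per_1k_reads
    egress = gb_egressed * price_per_gb_egress
    return capacity + requests + egress

# An "archive" data set that turns out to be active: the same 100 TB
# is stored, but actual reads and egress are 100x the forecast.
forecast = monthly_cost(100_000, reads_per_month=10_000, gb_egressed=50)
actual = monthly_cost(100_000, reads_per_month=1_000_000, gb_egressed=5_000)
print(f"forecast: ${forecast:,.2f}/month")
print(f"actual:   ${actual:,.2f}/month")
```

Note that the capacity line item is identical in both cases; the gap comes entirely from request and egress charges, which scale with access, not with how much data sits in the tier.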
This happens far more often than you might expect, unfortunately, and can result in a storage tier that is ultimately far more expensive than anything residing on premises. This scenario can lead to any number of the following issues:
- Public cloud storage providers may issue a higher-than-expected bill, sometimes far exceeding budget expectations.
- Cloud storage users may need to launch an investigation into the source of the increase in demand, requiring cycles from IT personnel.
- Enterprises may need to decide whether to undertake a costly migration to move data back on premises, onto storage infrastructure that may no longer be available.
Thankfully, there are a number of options you can leverage to help gain a greater understanding of workload performance characteristics prior to a migration and ease data movement across the WAN:
- Invest in tools to better understand the performance characteristics of applications. There are multiple software-defined storage (SDS) products that reside in the control plane that can provide insights into the resource demands of applications or virtual machines. These SDS technologies also let you virtualize or aggregate multiple on- and off-premises storage resources.
- Evaluate hybrid cloud services. If you want to leverage public cloud storage providers as a storage repository, you should investigate services that can deploy a local high-performance storage cache while virtualizing off-premises storage resources. These technologies can greatly reduce the number of required data transactions over the WAN should performance demands increase.
- Investigate the latest on-premises storage offerings before starting a cloud migration. Storage innovation has been rampant in recent years, reducing both the cost of capacity and performance. For example, 2016 has seen multiple announcements for new high-density, all-flash storage arrays that deliver massive amounts of high-performance capacity with incredibly low space and power footprints.
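The second option above rests on a simple mechanism: a local cache absorbs repeat reads so that only misses cross the WAN. The sketch below illustrates that idea with a minimal LRU cache; the block identifiers, the `remote_read` callable and the workload are all hypothetical stand-ins, not any vendor's gateway implementation.

```python
from collections import OrderedDict

# Minimal sketch of the local-cache idea behind hybrid cloud storage
# gateways: hot blocks are served on premises, so only cache misses
# generate WAN transactions. Blocks and workload are hypothetical.

class LocalCache:
    def __init__(self, capacity_blocks, remote_read):
        self.capacity = capacity_blocks
        self.remote_read = remote_read  # invoked only on a cache miss
        self.blocks = OrderedDict()     # LRU order: oldest entry first
        self.wan_reads = 0

    def read(self, block_id):
        if block_id in self.blocks:           # cache hit: no WAN traffic
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        self.wan_reads += 1                   # cache miss: fetch over WAN
        data = self.remote_read(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        return data

# A skewed workload: 1,000 reads concentrated on 4 hot blocks.
cache = LocalCache(capacity_blocks=8, remote_read=lambda b: f"data-{b}")
for i in range(1000):
    cache.read(i % 4)
print(f"WAN reads: {cache.wan_reads} of 1000 total reads")
```

The point of the sketch is the ratio: with a skewed access pattern, nearly all reads are served locally, which is why an unexpectedly hot data set behind a cache costs far less than one read directly from the cloud tier.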
Data transactions over the WAN slow performance and add a level of permanence to off-premises architecture decisions. Sure, there are many benefits to public cloud storage providers, but if data sets are migrated hastily, problems can occur and costs can increase. A way to mitigate these concerns is to invest in technologies that provide better insight into workload requirements or greater flexibility in hybrid cloud data movement.
About the author:
Scott Sinclair is a storage analyst with Enterprise Strategy Group in Austin, Texas.