What you will learn in this tip: While a significant amount of hype and confusion surrounds cloud storage, creating a private cloud can lead to considerable benefits. The process boils down to a handful of steps and important factors to keep in mind. Find out how to develop a private cloud architecture and what's truly necessary, so you can assess your current or future private cloud potential.
Many IT advisory organizations have developed business process maturity models, all of which look more or less the same. Your favorite search engine can locate a few in a couple of seconds. They usually describe five levels of maturity similar to the following:
- Level 1. Ad hoc and tactical, where few processes are defined or documented.
- Level 2. Repeatable, where processes are defined and documented but may vary between functional areas even for similar tasks.
- Level 3. Standardized, where processes are documented and consistent across the organization and include performance metrics.
- Level 4. Managed, where process metrics are routinely gathered, correlated to business operations and disseminated to stakeholders.
- Level 5. Optimized, where continuous process improvement is enabled by quantitative feedback and proactive capabilities are implemented.
Within the context of private cloud storage, organizational process maturity is clearly a prerequisite to a successful private cloud implementation. Firms should attain a Level 3 capability at a minimum before considering a private cloud storage implementation. The reasons for standardized processes relate to standardized infrastructure, which will be discussed shortly. If your firm doesn't legitimately have a Level 3 maturity, improving processes to that level is the first step to take before embarking on the road to private cloud storage.
Developing a private storage cloud architecture
The organizational benefits of a cloud implementation flow from the discipline and standardization a cloud architecture demands. These include better control, optimized utilization, a simplified infrastructure architecture and enterprise-wide management practices.
A key characteristic of private cloud storage is a standardized infrastructure, sometimes referred to as a reference architecture. Some may argue that a standardized infrastructure is a prerequisite for standardized procedures, and there's some merit to that argument. However, backup and recovery, provisioning, monitoring and other storage management tasks can be standardized across disparate platforms.
Although a reference architecture can be single-vendor, most are not. A reference architecture is merely a specification of the systems and configurations the organization will support. This will include versions of software and firmware to ensure the technology components are consistent across the organization. For most organizations, storage consolidation will play a key role in evolving to a reference architecture. Because of business acquisitions, business unit autonomy or simply circumstance, organizations often have more variety in systems than can be economically or technologically justified. A private cloud storage initiative is a good opportunity to pare extraneous systems from the data center or to at least prevent them from expanding into other areas.
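In practice, a reference architecture can be captured as data and audited automatically. The following Python sketch illustrates the idea; the model names, firmware versions and field names are hypothetical, not drawn from any vendor's actual specification.

```python
# Hypothetical sketch: a reference architecture expressed as data, plus a
# compliance check that flags systems outside the supported specification.

REFERENCE_ARCHITECTURE = {
    # approved model -> required firmware and supported protocols (assumed values)
    "array-x100": {"firmware": "4.2", "protocols": {"fc", "iscsi"}},
    "array-x200": {"firmware": "4.2", "protocols": {"fc", "nfs"}},
}

def compliance_report(deployed):
    """Return (system name, reason) pairs for systems that deviate
    from the reference architecture."""
    deviations = []
    for name, system in deployed.items():
        spec = REFERENCE_ARCHITECTURE.get(system["model"])
        if spec is None:
            deviations.append((name, "unsupported model"))
        elif system["firmware"] != spec["firmware"]:
            deviations.append((name, "firmware out of sync"))
    return deviations

deployed = {
    "ny-prod-01": {"model": "array-x100", "firmware": "4.2"},
    "acquired-05": {"model": "array-z9", "firmware": "1.1"},   # from an acquisition
    "dc2-prod-02": {"model": "array-x200", "firmware": "4.0"},
}

for name, reason in compliance_report(deployed):
    print(f"{name}: {reason}")
```

A report like this makes the consolidation conversation concrete: the systems it flags, such as arrays inherited through acquisitions, are the candidates to pare from the data center.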
Private storage cloud building blocks
While IT organizations can develop a reference architecture for any combination of systems, they can also use preconfigured systems such as NetApp's FlexPod. FlexPod is a prequalified and preconfigured system consisting of VMware components, Cisco Unified Computing System blade servers, Nexus switching components and NetApp FAS storage. It's probably most appropriate for new application deployments or technology refreshes because it's a fresh start from existing systems and doesn't incorporate storage from other vendors. It also simplifies technical support, as all three vendors implicitly support the configuration and keep firmware levels in sync.
For organizations that want to consolidate existing diverse systems into a private storage cloud, Hitachi Data Systems' Virtual Storage Platform (VSP) storage controller allows a wide variety of arrays from other vendors to attach directly to it. This offers the benefits of heterogeneous virtualization as well as standardized tool sets (Hitachi's) across the different arrays. This approach can function as a transitional step from diverse systems to standardized configurations without losing the current equipment investment.
The software side of the storage cloud
Standardization can also be facilitated at the software level. For example, Symantec Corp. offers a software stack that can be used to bring commonality to multiple hardware systems. Its Storage Foundation product has file system, volume manager and data movement products across operating platforms. Symantec's Veritas Operations Manager and the recently announced (though not yet released) Veritas Operations Manager Advanced purport to provide a single point of management across the virtual server and storage environments, including storage resource management (SRM) functionality for visibility and reporting about the storage environment. The reporting and measurement functionality of SRM apps, among other things, allows firms to determine the cost of storage delivery. This facilitates chargeback for services, which is essential to controlling costs. Some companies won't actually enforce a chargeback, but rather use the charge function to establish the relationship between cost and delivery, and to illustrate it to IT, users and management.
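The chargeback arithmetic itself is simple once an SRM tool supplies the per-tier allocation figures. Here's a minimal Python sketch; the tier names and cost-per-gigabyte rates are assumed for illustration, not taken from any SRM product.

```python
# Assumed fully loaded monthly cost per gigabyte for each storage tier
COST_PER_GB_MONTH = {"tier1": 0.90, "tier2": 0.45, "archive": 0.10}

def monthly_charge(allocations):
    """allocations: {tier: gigabytes allocated}; returns the monthly charge."""
    return sum(COST_PER_GB_MONTH[tier] * gb for tier, gb in allocations.items())

# Example: a business unit with 2 TB on tier 1 and 8 TB on tier 2
finance = {"tier1": 2000, "tier2": 8000}
print(f"Finance: ${monthly_charge(finance):,.2f}/month")  # about $5,400
```

Even a "showback" report built this way, with no money actually changing hands, establishes the cost-to-delivery relationship the article describes.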
F5 Networks Inc., perhaps best known for its IP load balancers, also offers a slightly different take on integrating existing infrastructure into a cloud. F5 takes a more app-oriented perspective. Its Dynamic Services Architecture uses appliances to provide data classification services that help ensure data is located appropriately so it can be delivered at the required service level. These appliances dynamically move data to the appropriate storage tier or device.
Indeed, application classification is critical to appropriate deployment of cloud services. Part of having mature operational procedures is maintaining an application catalog for the organization that includes service-level requirements and delivery specifications. This is necessary for any cloud deployment because some applications are more appropriate for a public cloud than a private cloud.
The clearest way to segregate applications appropriately is to consider their strategic importance to the organization. Applications can be classified as either commodity or high value. Commodity applications are important but offer no competitive advantage in the marketplace, while high-value apps offer such an advantage.
To get a better picture of the difference, consider backup and recovery. Every company needs data protection, but it doesn't result in a competitive advantage in the marketplace; organizations with great backup and recovery processes won't be able to charge higher prices for their products or use their backup prowess to increase demand for the company's products. Thus, backup and recovery, once it has met the threshold of viability, is an operation where costs should be minimized. This makes it an ideal target for public cloud services. Email and contact management are two other examples of necessary but non-strategic applications.
In contrast, strategic applications differentiate a company from its competitors. Examples could be unique manufacturing processes and product design systems. In those cases, the systems may depend upon unique devices, specialized configurations of devices or operating software not commonly found in public cloud deployments. For those technological reasons, strategic apps aren't candidates for cloud outsourcing. Moreover, secure systems, such as those related to defense or other classified environments, couldn't be deployed externally. Nevertheless, they can benefit from standardization and improved operational processes and, therefore, private cloud configurations.
EMC Corp. offers a standardized infrastructure for private cloud deployments, anchored by its Symmetrix VMAX architecture. In addition, the company has a unique filtering model for classifying applications, which EMC's consulting arm applies to assist organizations in their private storage cloud transformations.
This filtering model specifies an economic filter, a trust filter and a functional filter. For example, applications have economic parameters, trust requirements and functional requirements that may be improved or hindered by a specific cloud architecture. By mapping applications to the results of the filter analysis, EMC classifies applications as being best suited to a private cloud, public cloud or hybrid cloud, or simply left in a legacy environment. No organization will migrate all its applications to a cloud environment, and EMC's methodology is a useful way to classify applications and set priorities.
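To make the filter idea concrete, here's a simplified Python sketch of filter-based placement. This is an illustration of the general approach, not EMC's actual methodology; the attribute names and decision rules are hypothetical.

```python
# Simplified filter-based placement: trust, economic and functional
# considerations map each application to a deployment target.

def classify(app):
    """app: dict of attributes; returns a placement recommendation."""
    if app["classified_data"]:          # trust filter: must stay in-house
        return "private" if app["standardizable"] else "legacy"
    if not app["strategic"]:            # economic filter: commodity app, minimize cost
        return "public"
    # functional filter: strategic apps that fit the reference architecture
    return "private" if app["standardizable"] else "hybrid"

apps = {
    "backup":         {"strategic": False, "classified_data": False, "standardizable": True},
    "product-design": {"strategic": True,  "classified_data": False, "standardizable": True},
    "defense-sim":    {"strategic": True,  "classified_data": True,  "standardizable": False},
}

for name, attrs in apps.items():
    print(name, "->", classify(attrs))
```

The point isn't the specific rules but the discipline: every application in the catalog gets run through the same filters, producing an explicit placement decision and a migration priority list.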
Hype, but hope too
The hype surrounding the private cloud can make it seem like a unique industry development with heretofore unachievable benefits, and vendor marketing has it sounding like success is only a purchase order away. Transforming data center storage systems into private storage clouds begins with disciplined processes based on a standard operating platform. Does that qualify as utility storage, cloud storage or just better infrastructure? Call it what you want; users care about better service, not labels.
BIO: Phil Goodwin is a storage consultant and freelance writer.
This story originally appeared in Storage magazine.
This was first published in August 2011