Designing a private cloud storage architecture using object storage

When designing a private cloud using object storage, you need to consider the data that will live on it, the volume of that data, and how data will be migrated and recalled.

The fact that data is growing isn't a surprise to any IT professional. The greater challenge is the inability to purge old information while the pace of growth continues. Add to that a new reality: Data generation and collection is no longer confined to just users. Devices such as smartphones and cameras are collecting data faster than humans can. As a result, it's become widely apparent that traditional network-attached storage-based systems are unable to scale to meet these demands.

Many data centers are now looking to build their own private cloud storage architecture based on object storage. This article provides step-by-step guidance for designing an object-based, private cloud storage architecture.

A common architecture as a starting point

Object storage systems assign each file an object ID. To access an object, you provide the system with the ID and the system retrieves it. The result is a simple, flat architecture, as opposed to a more traditional POSIX file system, in which data is organized in a hierarchical folder structure.

Most object storage systems start with a similar architecture, regardless of the type of data to be stored. From a hardware perspective, they tend to be nodal in nature. This means the object store is made up of a group of servers, called nodes, whose internal storage is aggregated and presented as a common pool to the attaching applications. It's the responsibility of the object storage software to perform that aggregation, manage the object database and maintain reliability.

It's important to understand how the object storage system will store data. Will it replicate or use erasure coding? In both cases, each object is sent to a node for encoding. In most systems, this is done on a round-robin basis so no single node becomes overwhelmed.

In the replication use case, the object is copied to x number of other nodes, with varying levels of sophistication on where those other nodes might be physically located. Erasure coding segments the object into parts and then distributes those parts across a large number of discrete nodes. Think of replication as similar to mirroring, in which a complete copy of the object is made a user- or policy-defined number of times. The system always ensures there are x number of copies available in the object store. Think of erasure coding as similar to RAID: an object is segmented, then parity segments are created so the data can be rebuilt if a node fails.
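To make the trade-off concrete, here is a minimal sketch comparing the raw-capacity overhead and failure tolerance of the two schemes. The copy count and the 10+4 data/parity split are illustrative assumptions, not recommendations.

```python
def replication_overhead(copies: int) -> float:
    """Raw capacity consumed per byte of user data when keeping N full copies."""
    return float(copies)


def erasure_overhead(data_segments: int, parity_segments: int) -> float:
    """Raw capacity consumed per byte of user data with k data + m parity segments."""
    return (data_segments + parity_segments) / data_segments


# Hypothetical layouts: three full copies vs. a 10+4 erasure-coding scheme
print(f"3-way replication:   {replication_overhead(3):.1f}x raw capacity, survives 2 lost copies")
print(f"10+4 erasure coding: {erasure_overhead(10, 4):.1f}x raw capacity, survives 4 lost segments")
```

The point of the arithmetic: replication buys simplicity and fast single-node reads at a large capacity cost, while erasure coding protects against multiple node failures with far less raw capacity.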

When you talk with vendors about object storage, you'll likely be offered two distinct choices: a vendor-specific, turnkey solution or a vendor-agnostic software solution. The former may cost more but require less time to get up and running. However, it requires the purchase of new, dedicated hardware. The software solution may be less expensive, especially because some current offerings enable you to use an existing array or storage internal to the server. But these offerings require time to choose and implement because you have to research the various hardware components within your design and then integrate them into your infrastructure. There is also an increased burden on your IT shop to support a hardware-agnostic software offering.

The reality is that neither of these solutions is always better than the other; they're just different. The choice is largely dependent on the time and expertise your organization has available to dedicate to the project.

Time to understand the data

Once you understand the architecture, the next step in successfully designing a private cloud storage infrastructure is to understand the data that will live on it. Most organizations have one of two data profiles: billions of very small files or millions of very large files. In most cases, the billions of small files require random access to a large quantity of those files. A common example is a large analytics environment where thousands of data points are selected from the billions of files stored so that better decisions can be made. In these environments, random I/O performance is very important, which makes it a key requirement of the selected system. In many cases, solid-state drives (SSDs) are required to store the metadata needed to locate files quickly.
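A rough way to see why SSD-backed metadata matters at this scale is to estimate the index footprint. The object count and per-object metadata size below are assumptions for illustration, not vendor figures.

```python
# Back-of-the-envelope metadata sizing; every figure here is an assumption
object_count = 2_000_000_000         # two billion small objects
metadata_bytes_per_object = 2048     # assumed per-object metadata/index footprint

metadata_tb = object_count * metadata_bytes_per_object / 1e12
print(f"Approximate metadata footprint: {metadata_tb:.1f} TB")
# Roughly 4 TB of index data that must serve random lookups -- a natural fit for SSDs
```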

The other use case involves bulk data movement, where very large chunks of data are accessed in a sequential fashion. An example might be a media content distributor that needs to stream audio or video to a large number of users. In this instance, random I/O isn't nearly as important as bandwidth and throughput. These systems tend to have a very high number of nodes in their storage cluster and data is dispersed across all of them. In most cases, hard drive performance, as long as high quantities of drives are available, is perfectly acceptable.
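For the bulk-transfer case, throughput is what you size for. The sketch below estimates how many nodes a streaming workload needs just to satisfy aggregate bandwidth; the stream count, bitrate and per-node throughput are illustrative assumptions.

```python
import math

# Illustrative assumptions, not measured values
concurrent_streams = 20_000      # simultaneous viewers
stream_mbps = 8                  # per-stream bitrate in megabits per second
node_throughput_gbps = 10        # usable network bandwidth per node

aggregate_gbps = concurrent_streams * stream_mbps / 1000
nodes_needed = math.ceil(aggregate_gbps / node_throughput_gbps)
print(f"Aggregate demand: {aggregate_gbps:.0f} Gbps -> at least {nodes_needed} nodes for bandwidth alone")
```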

In most cases, data centers don't have a mixture of both data types, at least to the extent that both data types are absolutely critical to the business. For the rare business that requires both data types, a perfectly mixed platform doesn't yet exist on the market. While the software may be the same, the architecture layout needs to vary between these data types.

Native access to storage

The next step is to understand how data is migrated to the object storage system. There are two common ways to accomplish this. First, almost all object storage systems are accessed via a Representational State Transfer (REST) application programming interface. This capability allows applications to directly access storage via simple "get" and "put" commands. No longer does an application have to go through a gateway or file system or, worse, embed SCSI commands.
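Many object stores expose an Amazon S3-compatible flavor of this REST interface. The sketch below shows what a native "put" and "get" look like from application code, assuming an S3-compatible endpoint and the boto3 client library; the endpoint URL, bucket name, key and credentials are placeholders.

```python
import boto3

# Placeholder endpoint, bucket and credentials for a hypothetical S3-compatible object store
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# "put": the application writes an object directly to the store
with open("q4-summary.pdf", "rb") as f:
    s3.put_object(Bucket="app-data", Key="reports/2013/q4-summary.pdf", Body=f)

# "get": the application retrieves the object by bucket and key
response = s3.get_object(Bucket="app-data", Key="reports/2013/q4-summary.pdf")
data = response["Body"].read()
```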

Native access to storage via the application is an ideal capability. The application knows the state of its data better than an external software package that can only rank data by date or size. In addition, the application probably understands the value of the data and can better predict when it will be needed again. The downside to application integration is that, while the API calls are simple, it requires access to the application source code, and not every organization has ready access to that.

When the application source code is inaccessible or there's no time to modify the application, a gateway approach may be best. Many object storage systems have the ability to present a CIFS, NFS and even SCSI (Fibre Channel or iSCSI) translation layer so applications can use object storage without alteration. While this gateway approach doesn't provide the fine-grained customization of data placement, it does provide quick access to the other attributes of object storage.
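Conceptually, a file gateway's job is to map a hierarchical file path onto a flat object key and translate reads and writes into gets and puts. The function below is a purely hypothetical illustration of that mapping; real gateways add caching, locking and full protocol handling.

```python
def path_to_object_key(share: str, file_path: str) -> str:
    """Hypothetical mapping a file gateway performs: NFS/CIFS path -> flat object key."""
    return f"{share}/{file_path.lstrip('/')}"

# "/designs/2013/turbine.dwg" on the "projects" share becomes one flat key
print(path_to_object_key("projects", "/designs/2013/turbine.dwg"))
# -> projects/designs/2013/turbine.dwg
```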

A logical path for most companies is to start with a gateway approach so the application can begin using the object store immediately. Then, over time, capabilities can be natively added to the application. Gateways are available from many object storage providers, and there are a number of third-party add-ons. My suggestion would be to make separate decisions regarding your choice of object storage software and the gateway system that fits your specific needs.

File size, ingest rates are key

Regardless of how the data will be sent to the object store, it's important to understand the volume of data that will be sent to it. This is partly determined by the size of the files described above, as well as the amount of data that needs to be ingested over a given period of time. If files are relatively small and are sent to the object store at a modest rate, then the ingest rate is inconsequential. If thousands or millions of small files are sent to the object store from a variety of sources, ingest can become a challenge. And if a few very large files are ingested and the speed of that ingestion matters, that can also be a factor.

Thousands of small files can best be handled by a large node-count system with plenty of hard disk drives. The system, as mentioned above, may also benefit from being able to store metadata information on SSDs. Not all object storage systems have the ability to store metadata separately from the actual data; if this is a requirement for your specific use case, you should verify that the vendor provides it.

In the case of ingesting very large files, a high drive-count situation may not help since the data is typically received by a single node before it's distributed across the entire object store. In other words, the bandwidth per node becomes an issue. In these situations, you may be better served by having a very fast set of hard drives or even SSDs on a few targeted ingestion nodes.
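A quick way to see that per-node bottleneck is to estimate how long one large file takes to land on a single ingest node at different link speeds. The file size and link speeds below are assumptions for illustration.

```python
# Illustrative ingest-time estimate for one large file landing on a single node
file_size_gb = 500                # assumed size of a single large file
link_speeds_gbps = [1, 10, 25]    # assumed network bandwidth into the ingest node

for gbps in link_speeds_gbps:
    seconds = file_size_gb * 8 / gbps   # gigabytes -> gigabits, divided by line rate
    print(f"{file_size_gb} GB over {gbps} Gbps: about {seconds / 60:.1f} minutes")
```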

Total recall

How data is recalled also impacts the design. With replication, the least busy node is responsible for sending all the data, so you're limited to the bandwidth of a single node. With erasure coding, all the nodes that have a segment of the object will send data to the requesting application. For large bulk transfers of a low number of files, erasure coding is typically best. For the transfer of a high number of small files, both methods deliver about the same performance.
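The sketch below puts rough numbers on that difference for a single large-object read, assuming each node can contribute a fixed amount of bandwidth to the request; the per-node bandwidth and segment count are assumptions.

```python
# Illustrative recall comparison; per-node bandwidth and segment count are assumptions
node_bandwidth_gbps = 10   # bandwidth one node can devote to the request
data_segments = 10         # erasure-coded data segments spread across nodes

replication_read_gbps = node_bandwidth_gbps               # one node serves the whole object
erasure_read_gbps = node_bandwidth_gbps * data_segments   # segments are streamed in parallel

print(f"Replication read:   ~{replication_read_gbps} Gbps (single node)")
print(f"Erasure-coded read: ~{erasure_read_gbps} Gbps (parallel, until the client link becomes the cap)")
```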

The framework offered here for designing a private cloud should be shared with the various vendors offering you an object storage solution. Look for systems that are specifically designed to handle your type of data, your data protection needs, and the rate at which your data needs to be ingested and recalled. Solutions will typically vary along these lines, but there will be a few that specialize in your specific requirements.

About the author:
George Crump is a longtime contributor to TechTarget, as well as president and founder of Storage Switzerland LLC, an IT analyst firm focused on the storage and virtualization segments. Before founding Storage Switzerland, George was chief technology officer in charge of technology testing, integration and product selection at one of the largest data storage integrators in the U.S.

This was first published in December 2013
