Examining the OpenStack Swift object storage and commodity hardware behind the cloud service of the San Diego Supercomputer Center at the University of California, San Diego, provides a glimpse into the types of software and hardware that many major commercial providers are often unwilling to discuss.
At 5.5 PB of raw storage, the San Diego Supercomputer Center (SDSC) private cloud pales in size and scope next to the trillions of objects stored by major public cloud providers such as Amazon and Microsoft. Yet all of these clouds rely on horizontally scalable object-based storage running on commodity hardware stocked with hard disk drives and higher-performing solid-state drives (SSDs).
The SDSC cloud is open to educational and research users within the University of California (UC) system, as well as other universities and industry partners throughout the country.
Prior to launching OpenStack Swift object-based cloud storage in 2011, SDSC used tape to archive data for researchers at UC San Diego. But when users needed to access the data on a periodic basis, the tape systems were slow, prone to failure and labor-intensive to maintain, according to Steve Meier, SDSC's storage platform manager.
"The best use case for tape is 'write once, read never.' Our researchers archive and look at data more often," Meier said. "When you have lots of accesses coming from reading back, and then [you have to] keep up with all the writes, there are additional costs to have enough hardware resources to also validate the tapes. All of those considerations made tape an expensive technology for us."
Object storage held the prospect of a less costly distributed system, with no central controller, where data could be written to multiple disk drives spread across clusters of commodity servers. The clusters could scale to store hundreds of petabytes of data through the simple addition of nodes, which equate to servers in many cases.
"We can have lots of systems in aggregate to spread the load. It scales out nicely as we add storage or need performance," Meier said. "With some of the traditional file systems or tape devices, you have to buy some really expensive centralized hardware. With object storage, you can use relatively cheap hardware. You can spread your investment out."
Object storage forms the backbone of all the major public cloud storage services. Amazon, Google and Microsoft built their own object stores. AT&T runs EMC's Atmos technology, and IBM uses software from Nirvanix, another provider. OpenStack Swift was created when Rackspace Hosting turned over the code that powers its public cloud storage to an open source community for further development. Hewlett-Packard (HP) Co. is among the contributors to the OpenStack project and uses the Swift object store with its cloud storage service.
OpenStack Swift provides durability, data integrity
Technology pickings were slim when SDSC entered the market for object storage, and OpenStack Swift was the best fit. With a tight budget, the SDSC team preferred open source software to reduce costs, eliminate vendor lock-in, and permit code review and modification. SDSC also wanted a system that supported not only the Rackspace/Swift API but Amazon's Simple Storage Service (S3) API (the de facto standard), so users would have access to tools that work with both services, Meier said.
"My group is a group of three. We manage the infrastructure. We do some development and keep things going, but we didn't have a large team to build and support clients and run the infrastructure as well," Meier said. "If you have users that are currently using S3, and they have scripts or command-line clients or other ways to manipulate their data upload, download, search, theoretically they could now point that tool at SDSC's cloud storage and it would just work."
For durability and data integrity, SDSC liked OpenStack software's built-in provision for auditing objects. In the event of bit rot, if a file no longer matches the checksum, the system automatically quarantines the file and re-replicates a version of the object, Meier said.
"It's a pretty simple and reliable service. We're used to running clusters, so setting it up was not a problem," Meier said. "The challenges were getting a lot of our users to understand how object storage works, what you can do, what you can't do and [dealing with] the varying quality of the clients that are out in the wild. We fought through a lot of configuration issues and supporting different operating systems."
OpenStack software logic handles data replication and distribution across servers and drives, but SDSC veered from the default setting of three replicas. To maximize available space and save on storage costs for users, SDSC originally used two copies on top of RAID 6 or RAID-Z2 (a nonstandard RAID type, similar to RAID 6, that tolerates two disk failures) on most servers. The approach yielded approximately 20% more usable capacity than the triple-copy default, according to Meier.
But Matthew Kullberg, technical project and services manager at SDSC, said performance waned -- especially during RAID rebuilds -- as utilization grew. The staff is now migrating to OpenStack's default configuration of three copies. Decreasing hardware costs will help the center keep its per-terabyte cost structure low, he added.
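The capacity trade-off behind SDSC's original choice can be roughed out with simple arithmetic. The RAID group width below is an assumption for illustration (an 11-drive RAID-Z2 set with nine data drives); SDSC's actual layouts may have differed, so the exact percentage varies with group size.

```python
# Fraction of raw capacity that ends up usable under each scheme.

# Scheme A: two Swift copies, each stored on RAID 6 / RAID-Z2.
# Assumed group: 11 drives, 2 of them parity (an illustrative width).
raid_efficiency = 9 / 11                 # data drives / total drives
two_copy_usable = raid_efficiency / 2    # two full copies of the data

# Scheme B: Swift's default of three whole copies, no RAID underneath.
three_copy_usable = 1 / 3

gain = two_copy_usable / three_copy_usable - 1
print(f"{gain:.0%} more usable capacity")  # in the ballpark of the ~20% Meier cites
```

The same arithmetic explains the downside Kullberg saw: the RAID layer that buys back capacity also imposes rebuild overhead when drives fail, which triple-copy Swift avoids.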
SDSC cloud storage implementation explained
Beth Cohen, a senior cloud architect at Boston-based Cloud Technology Partners Inc., said Swift and Amazon S3 each make three copies of every object by default in three locations, whether physical or logical. That means 1 PB of data requires 3 PB of storage. OpenStack Swift users have the option to make four or more copies, but Cohen said she knows of no one who does that.
"There's no need to do RAID because you're duplicating your data protection," she said.
Another distinction between SDSC's implementation and those of the major providers is the number of data centers. Rackspace operates data centers in Chicago, Dallas and London, and HP has one in the Washington, D.C. area and another in Las Vegas. Microsoft runs four in the United States, two in Europe and two in Asia, and offers a choice of Locally Redundant Storage, with three data copies within a data center, or Geo-Redundant Storage, with an additional three copies at a data center at least 400 miles away.
By contrast, SDSC stores 98% of its data at UC San Diego and runs only a small off-site cluster at the University of California Office of the President in Oakland. SDSC's Meier said using one data center is less expensive.
"It takes extra labor to maintain multiple sites," he said. "That's just the reality of the university versus a true cloud provider."
Meier concedes that major services such as Amazon S3 may offer more granular features and controls, such as individual access control lists on objects. But SDSC's cloud storage can be less expensive to use, with no network fees for data transfers in or out of the service, and no charges for data requests. Major providers typically charge for outbound data transfer as well as put, copy, post, list, get and other requests.
SDSC's cloud service has a base price of $3.25 per month for 100 GB of storage. Beyond that, it costs $0.0325 per GB per month, which comes to $32.50 per TB per month or $390 per TB per year.
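The published rates are internally consistent, as a quick check of the arithmetic shows (using decimal terabytes, 1 TB = 1,000 GB):

```python
per_gb_month = 0.0325                # dollars per GB per month

base_100gb = per_gb_month * 100      # the $3.25 base tier covers 100 GB
per_tb_month = per_gb_month * 1000   # scale up to one terabyte
per_tb_year = per_tb_month * 12      # and to a full year

print(round(base_100gb, 2))    # 3.25
print(round(per_tb_month, 2))  # 32.5
print(round(per_tb_year, 2))   # 390.0
```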
"We never intended to directly compete [with major cloud providers]. As a non-profit, that's not our charter," Meier said. "Our competition was to try to come up with technology that gave our researchers competitive advantages to get grants and have technologies that they could use to help further their research."
Swift implementation varies with use case
Transparency is another plus for SDSC's users. Major cloud storage providers often refuse to provide specifics on the infrastructures behind their massively scalable cloud storage systems. By contrast, SDSC posts a webpage with a diagram and chart supplying details about its cloud.
SDSC acquired inexpensive hardware for its initial 5.5 PB of raw storage. "Storage node A" consists of 14 Aberdeen x539 storage servers, each with two dozen Hitachi 2 TB near-line SAS drives. About a year later, SDSC added "Storage node B" with 35 Dell R610 servers, each with a direct-attached 45-drive JBOD chassis filled with 3 TB Seagate NL SAS drives. SDSC also uses enterprise-grade multi-level cell SAS-based SSDs from Intel to boost the performance of the SQLite database that runs the account and container services.
Redundant 384-port Arista Networks 7508 switches connect the 16 Dell R610 proxy servers to the storage over a 10 Gigabit Ethernet data network.
"At the time, when we purchased the systems, they were as cheap as we could find," Meier said. "It's just plain, old, off the shelf -- nothing really special."
Cloud Technology Partners' Cohen said major providers tend to favor Dell, HP, Quanta and Super Micro servers, along with Layer 3 networking and a mesh of routers. She said every Swift implementation is different because the use case dictates the technology, including the type of objects, the ratio of proxy servers to storage servers, the number of disks, the amount of memory per server and the speed of the network connections. Some users have separate nodes for accounts, containers and objects; others run all three on any given storage node, as SDSC does, she said.
Meier said SDSC's object storage is suited more to a private or internal cloud rather than a massive public cloud. Based on his experience, he recommended Swift to organizations that need to archive or share large amounts of unstructured data or expand the capacity for backups with relative ease.
"If enterprise X needs object storage," Meier said, "going with Swift commodity storage is as cheap as I know how to do it."