HDS rolled out Version 4 of its Hitachi Content Platform (HCP) with more cloud capabilities and a caching device called the Hitachi Data Ingestor (HDI) to speed data transfers between remote sites and a central HCP.
Object storage systems that use rich metadata to keep information about context and content of data are becoming more popular as cloud storage systems, particularly in products such as EMC Atmos, HCP, DataDirect Networks Web Object Scaler (WOS), NetApp StorageGRID, and the Dell DX Object Storage platform that uses Caringo CAStor.
HDS builds cloud 'on-ramp'
The HCP is built on technology HDS acquired from Archivas in 2007. Last year, HDS turned the HCP into its cloud platform by adding features such as multi-tenancy, encryption and the ability to set access rights. For today's release, HDS added many-to-one replication, improved chargeback and the ability to scale to 40 PB in one cluster. HDS also added more granular multi-tenancy, giving customers the ability to have multiple namespaces for each tenant.
The big addition, however, is HDI. Miki Sandorfi, HDS' chief strategist for file content and the cloud, describes HDI as an "on-ramp to move NFS and CIFS data optimized for HCP onto the cloud." HDI provides file system -- CIFS and NFS -- read/write access to a local or remote HCP system, migrates data to HCP while maintaining a local link to content and presents a cache for fast retrieval of frequently accessed data. It also supports Active Directory and LDAP authentication and HCP tenant and namespace features over CIFS and NFS, and it measures storage use for chargeback and quota management.
Sandorfi said HDI could replace the need for a WAFS device for HCP customers, although the first version doesn't use deduplication or compression to reduce data before sending it over the link.
Sandorfi said HDI could sit at a private cloud customer's site or in remote/branch offices. HDI costs $23,000 for a pair.
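The gateway behavior Sandorfi describes -- accept file writes, migrate the data to the central platform while keeping a local link, and cache hot reads locally -- can be sketched as a read-through cache. This is a minimal illustration of the pattern, not HDS code; the `CentralStore` and `EdgeGateway` names and the cache policy are assumptions.

```python
class CentralStore:
    """Stand-in for a central HCP cluster reachable over the WAN."""
    def __init__(self):
        self._objects = {}

    def put(self, path, data):
        self._objects[path] = data

    def get(self, path):
        return self._objects[path]


class EdgeGateway:
    """Stand-in for an HDI node: migrates writes to the central store,
    keeps only a local stub, and caches frequently read data."""
    def __init__(self, store, cache_size=2):
        self.store = store
        self.cache_size = cache_size
        self.stubs = set()   # paths whose content now lives centrally
        self.cache = {}      # small local cache for fast retrieval

    def write(self, path, data):
        self.store.put(path, data)   # migrate the data to the center
        self.stubs.add(path)         # maintain a local link to the content

    def read(self, path):
        if path in self.cache:       # fast path: served from the edge
            return self.cache[path]
        data = self.store.get(path)  # slow path: fetch over the link
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))  # naive eviction
        self.cache[path] = data
        return data
```

The first read of a migrated file crosses the link; repeat reads are served from the edge cache, which is the latency win a branch office would see.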
Jeff Papen, CEO at Peak Web Hosting, an HDS customer, said he doesn't use HDI but sees it down the road as a possible fit for a private cloud service for companies with remote offices. He currently uses HDS Adaptable Modular Storage (AMS) midrange storage systems for a hosted SAN service and HCP Version 4 to host credit card information for his managed services customers.
He said HCP's handling of object storage lets him scale in ways that would not otherwise be possible. Peak has 125 TB of capacity for the HCP, and Papen said it's a good fit for cloud use because of its multi-tenancy, namespace support and scalability.
"For companies that have a tremendous number of objects, it gives you metadata to provide context for those objects," Papen said. "Most companies manage their data based on file name and timestamp. Good luck with that; that ain't even close to scaling."
He continued: "Because our customers have so many sub-customers under them, there are different elements to each project and different initiatives that they have, so you would need a file name that's about four pages long to account for all that data."
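Papen's contrast -- encoding customer, sub-customer, project and initiative into a file name versus attaching them as object metadata -- can be made concrete with a small sketch. The field names and object layout here are hypothetical, not an HCP schema.

```python
# Objects carry structured metadata rather than overloading the file name.
objects = [
    {"id": "obj-001", "data": b"...",
     "metadata": {"customer": "acme", "sub_customer": "retail",
                  "project": "q3-audit", "initiative": "pci"}},
    {"id": "obj-002", "data": b"...",
     "metadata": {"customer": "acme", "sub_customer": "wholesale",
                  "project": "q3-audit", "initiative": "sox"}},
]

def query(objs, **criteria):
    """Return objects whose metadata matches every given key/value pair."""
    return [o for o in objs
            if all(o["metadata"].get(k) == v for k, v in criteria.items())]

# Select on structured fields instead of parsing a four-page file name.
matches = query(objects, customer="acme", initiative="pci")
```

Each axis of context becomes a queryable field, so adding a new sub-customer or initiative doesn't require a new file-naming convention.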
Papen added that he likes HCP's compliance and versioning features. "You can show credit card companies all the dumps, show that you backed all the data up, show that the permissions are there," he said. "A credit card company may say 'I have a seven-year contract, show me it's going to be there for seven years.' You can actually demonstrate that. What's my alternative? Tape backups. Unlike tape, all this data is available in real-time."
Caringo goes from content-addressable storage to cloud
Caringo originally developed CAStor as an archiving alternative to content-addressable storage (CAS) systems such as EMC Centera, but the object-based software has evolved into a cloud enabler. Caringo got a big boost earlier this year when Dell began reselling CAStor in the Dell DX6000 object storage system, which competes with EMC Atmos and Centera, as well as with NetApp's StorageGRID, which is based on technology NetApp acquired when it bought Bycast this year.
Caringo CEO Mark Goros said the new features give users within multiple departments or divisions full control over security and authentication by creating domains for each company, department or division. Each domain can contain its own buckets: within a domain, named objects are organized into multiple storage buckets that are protected with their own security realms and access control lists (ACLs). Authentication and authorization can be set at the domain level, the bucket level or down to the object level.
Once a CAStor cluster is set up, the CAStor administrator sets up tenants using the administrative console. A tenant is a domain plus a set of security realms, and multiple domains can be set up inside one physical CAStor cluster.
"If companies want to completely isolate certain departments, we create domains for each one," Goros said. "Within each domain, a bucket can also be created, like a private cloud within a public cloud. You can set security by the bucket, domain or object level."
David Hill, an analyst at Mesabi Group LLC, said the most significant part of Caringo's announcement is the multi-tenancy function in CAStor 5.0 because it focuses on protecting the integrity of the data. "They are trying to enrich the software so they can get into the cloud storage space, like everybody else," Hill said. "All these features help them get a better edge into the cloud space. Everybody is talking about multi-tenancy, but how they do it is different."
CAStor 5.0 also includes dynamic caching for demand-based replication. The dynamic caching places replicas of content in RAM and end users can access any replicas across multiple nodes. When demand is high, caching spreads throughout the numerous nodes; when demand is low, the cache replicas are scaled down. "Dynamic caching is set up by designating a quantity of RAM in each node to be an object read cache," Caringo's Goros said. "Once that is done, CAStor will create replicas of high-demand objects in cache on multiple nodes based on demand for that object. As demands wane, the replicas are removed from the RAM cache."
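The demand-based replication Goros outlines -- replicas of hot objects spreading across node RAM caches as reads rise, then being removed as demand wanes -- can be approximated with a simple hotness threshold. The node model and threshold below are illustrative assumptions, not CAStor internals.

```python
class Node:
    """Stand-in for one CAStor node with a RAM object read cache."""
    def __init__(self, name):
        self.name = name
        self.ram_cache = {}   # object_id -> data held in this node's RAM

class Cluster:
    """Replicates hot objects into every node's RAM cache and
    removes the replicas when demand for an object falls off."""
    def __init__(self, nodes, hot_threshold=3):
        self.nodes = nodes
        self.hot_threshold = hot_threshold
        self.reads = {}       # object_id -> cumulative read count

    def read(self, object_id, data_source):
        self.reads[object_id] = self.reads.get(object_id, 0) + 1
        self._rebalance(object_id, data_source)
        for node in self.nodes:              # serve from any cached replica
            if object_id in node.ram_cache:
                return node.ram_cache[object_id]
        return data_source[object_id]        # cold path: read from disk

    def _rebalance(self, object_id, data_source):
        hot = self.reads[object_id] >= self.hot_threshold
        for node in self.nodes:
            if hot:                          # spread replicas under demand
                node.ram_cache[object_id] = data_source[object_id]
            else:                            # scale replicas down
                node.ram_cache.pop(object_id, None)
```

A real implementation would use a decaying demand measure rather than a cumulative count, but the shape is the same: replica count tracks read pressure.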
The named objects feature lets users assign their own names to objects. These names work alongside CAStor's system-assigned Universally Unique Identifiers (UUIDs) for applications that require self-generated or inherited symbolic names to store and retrieve a file or object.
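A minimal sketch of that dual-identifier scheme: every object gets a system-assigned UUID, and an application may optionally attach its own symbolic name that resolves to the same object. The store interface is illustrative, not CAStor's actual API.

```python
import uuid

class ObjectStore:
    """Objects are keyed by a system-assigned UUID; a user-assigned
    symbolic name may optionally map to the same object."""
    def __init__(self):
        self._by_uuid = {}
        self._by_name = {}   # user-assigned name -> UUID

    def put(self, data, name=None):
        object_id = str(uuid.uuid4())   # system-assigned identifier
        self._by_uuid[object_id] = data
        if name is not None:            # optional user-assigned name
            self._by_name[name] = object_id
        return object_id

    def get(self, key):
        # Accept either the UUID or the symbolic name.
        object_id = self._by_name.get(key, key)
        return self._by_uuid[object_id]
```

This lets an application that inherits its own naming scheme retrieve objects by those names, while the cluster itself continues to track everything by UUID.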