BOSTON -- EMC Corp.'s unveiling of VPlex, a set of products and an architecture for federating data storage across geographic distance, drew a good deal of interest from customers Monday.
Two of four VPlex products are available today: VPlex Local and VPlex Metro. Each product comprises appliances called engines made up of dual quad-core processors, 32 GB cache and 8 Gbps Fibre Channel (FC) connections with an InfiniBand internal network. VPlex uses technology acquired from YottaYotta approximately three years ago, though what had been a proprietary operating system in YottaYotta's storage virtualization devices has now been ported to Linux.
VPlex Local supports up to four VPlex engines and 8,000 storage volumes for non-disruptive data migrations between EMC and non-EMC arrays in the same data center. VPlex Metro can link two separate VPlex clusters within a data center or up to 100 kilometers apart. VPlex Metro can federate some or all of the storage volumes across both clusters, and any data volume can be configured for simultaneous access by applications in two locations.
EMC officials said VPlex Geo and VPlex Global will follow early next year. VPlex Geo will support asynchronous replication and storage federation over intercontinental distances. VPlex Global will support distributed concurrent data access and workload relocations across multiple global locations over both synchronous and asynchronous distances.
EMC VPlex is the "active-active storage" technology EMC first said it was developing at last year's VMworld conference to support distance VMotion. Synchronous mirroring provides the foundation for migrating workloads across distance. But Barry Burke, EMC's chief strategy officer for Symmetrix and virtualization technologies, said the "secret sauce" that allows VPlex to quickly move even large amounts of data is an asynchronous process in which members of the VPlex cluster seek out data segments for access upon request, rather than propagating every write to every system every time.
"Most implementations of distributed data with cache coherency try to make sure all cache is coherent all the time," he said. "What makes VPlex more scalable is that it doesn't treat memory as one big thing but lots of small things. The granularity of locking allows [VPlex] to move writes around without having to notify everybody. The systems know when they go to access the data where the latest copy is."
Burke said VPlex "is like FAST turned sideways"—referring to EMC's Fully Automated Storage Tiering software that automatically moves LUNs vertically between tiers of storage based on performance requirements. VPlex moves data segments from location to location on demand in a distributed configuration.
Another use case for VPlex is disaster recovery and failover, which requires higher-bandwidth pipes between data centers. If replication is interrupted, VPlex uses sets of rules similar to SRDF to either stop I/O to the secondary system until an administrator intervenes, or use a third-party "witness" at another location to monitor communication between nodes.
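The witness rules described above amount to a simple tie-breaking policy when the replication link fails. A hedged sketch of that decision logic, with illustrative names only (this is not EMC's code or SRDF's rule set):

```python
# Hypothetical sketch of a third-party "witness" breaking the tie when
# the replication link between two sites fails. Illustrative only.

def failover_decision(link_up, witness_sees_primary):
    """Decide what the secondary site should do when replication breaks."""
    if link_up:
        return "continue"            # normal operation, nothing to decide
    if witness_sees_primary:
        # The primary is alive and only the link failed: suspend I/O at
        # the secondary to avoid split-brain until an admin intervenes.
        return "suspend_io"
    # The witness cannot reach the primary either: treat it as down
    # and allow the secondary to take over.
    return "promote_secondary"
```

Without a witness, the first rule is all that is available: stop I/O and wait for an administrator, since neither site alone can tell a dead peer from a dead link.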
Users intrigued, but questions remain
EMC customers at the show said they found the vision for VPlex intriguing, but most said they were still working through the cultural and organizational changes required to virtualize their environments, and that realizing the VPlex vision will take time.
"It's exciting, and useful. However, I feel customers, [as with] other virtualization technologies, need to take this evolution slowly," said Michael Passe, storage architect at Beth Israel Deaconess Medical Center in Boston. "I am sure EMC has done a lot of work on this, but this will not be for all customers on day one. Everyone has a different appetite for this kind and level of change from tradition."
Another EMC customer thinks VMotion and VPlex are a good fit, but he's still getting comfortable with VMotion. "Functionally VPlex will work, but it will have to deal with latency, caching and integrity issues," wrote John Lamb, assistant vice president of platform management at The Hartford Financial Services Group Inc., in an email to SearchStorage.com during the show. "Related to this discussion, we are using VMotion -- and only now mastering it. [But] the promise of VMotion and VPlex could solve service continuity, as well as using an otherwise underutilized disaster recovery site."
Juergen Winkelmann, director of system services for ETH Zurich, a federal institute of technology in Switzerland, watched a demonstration of VPlex by EMC president Pat Gelsinger. "It's taken years to get people to believe virtual servers are as reliable and stable as physical servers," Winkelmann said. "With storage, people are even more anxious. One of the main concerns is how to get around that anxiousness in people."
David Grant, data center manager with a large communications company in Canada that he asked not be identified, added: "Right now those latency issues, and the impact on application performance would stop me, and I'd like to see some other players in the network space extending the layer 2 across the two locations.
"However, it is something that I'll be keeping a close eye on, and should an opportunity arise to experiment, I'd be on it in a flash," he said.
Competitors not impressed
The VPlex announcement also received attention from EMC competitors. NetApp Inc. issued a statement claiming it can already accomplish this type of stretched cluster over distance with its FlexCache product.
"We are able to use FlexCache to migrate a running virtual machine over distance non-disruptively," wrote Val Bercovici, cloud czar, NetApp's Office of the CTO. "In fact, with FlexCache today we are able to support the ability to pool and share data across longer distances than either VPLEX Local or VPLEX Metro."
"I almost don't want to honor that with a response," retorted EMC's Burke. "The most NetApp can do is split a two-node cluster and stretch it across distance."
VPlex Metro today does look similar to NetApp's stretched clusters with FlexCache, as well as the stretched clustering available with IBM's SAN Volume Controller (SVC), Burke acknowledged, but he claims "the differences will emerge as we move down the path" with multi-site support and the ability to federate across multiple high-availability engines at multiple sites.