OpenStack Object Storage and OpenStack Block Storage -- often referred to as Swift and Cinder, respectively -- are designed to work alongside the other services of the open source OpenStack cloud computing and management platform.
In this interview, Ashish Nadkarni, a research director in the storage systems practice at Framingham, Mass.-based International Data Corp., explains how Swift, Cinder and an upcoming file-based OpenStack storage service fit into the overall OpenStack plan.
Nadkarni also discussed the potential benefits and disadvantages of the OpenStack approach and how third-party storage vendors are working to integrate their products with the OpenStack platform. He cautioned that commercial vendors could put the open source project at risk with their attempts to promote their own hardware and concentrate on their own agendas.
How do OpenStack Block Storage and OpenStack Object Storage fit into the OpenStack cloud computing platform?
Ashish Nadkarni: OpenStack is a series of projects. Each project is designed to solve a particular problem within the data center. So OpenStack Compute is really meant to provide a standard compute infrastructure for the data center. OpenStack Storage is designed to provide a standard storage infrastructure for the data center. Likewise, OpenStack Networking is designed to provide standard networking infrastructure for the data center. And these three are all software-based services.
One of the ways they have designed [OpenStack] Compute is to use an open source hypervisor, which is KVM [Kernel-based Virtual Machine], to provide that standard infrastructure. That means if you are building a cloud-scale data center, you would use KVM to provide a virtualized compute infrastructure. However, they have also made sure that the APIs [application programming interfaces] are open. They are published, which then allows other hypervisor vendors like VMware, Microsoft, Oracle and Red Hat to make their hypervisors fully compatible with the compute infrastructure. So you can literally replace KVM with [VMware's] ESX or vSphere or [Microsoft's] Hyper-V or any other hypervisor and gain the same benefits of a uniform infrastructure and also be a multi-hypervisor shop.
One of the challenges with being a multi-hypervisor shop today is the fact that you cannot have compatibility between different hypervisors. If you put OpenStack Compute, which is called Nova, in the middle, then whether you are a four-hypervisor shop or a three-hypervisor shop, the presentation layer for the compute is uniform. The hypervisor becomes a commodity at that point, and it's the compute layer that essentially governs the construct within the data center.
In storage, there are two components. One of them is known as OpenStack Swift, which is basically an object-based platform designed to deliver petabyte-scale storage solutions and store data economically, using a RESTful API to access that storage. They also have block storage, which is Cinder, where you can use either internal storage or external storage to create a software-based block storage standard across the board for OpenStack. There is also an effort to create a file-based infrastructure in OpenStack, which is in a bit of a nascent stage right now. But eventually OpenStack storage will be all about file, block and object.
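To make the RESTful access model Nadkarni describes concrete, here is a minimal sketch of how Swift addresses objects: every object sits at a path of the form /v1/account/container/object, and the standard HTTP verbs (PUT, GET, DELETE) map directly to upload, download and delete. The endpoint, account and token values below are hypothetical placeholders, and the code only assembles a request rather than sending one.

```python
def swift_object_url(endpoint: str, account: str, container: str, obj: str) -> str:
    """Build the URL for an object in Swift's flat account/container/object namespace."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

def swift_request(method: str, url: str, token: str) -> dict:
    """Assemble (but do not send) an HTTP request for a Swift storage operation."""
    return {
        "method": method,  # PUT = upload, GET = download, DELETE = remove
        "url": url,
        # Swift authenticates each request with a token header, typically
        # obtained from the Keystone identity service mentioned below.
        "headers": {"X-Auth-Token": token},
    }

url = swift_object_url("https://swift.example.com", "AUTH_demo", "backups", "db.tar.gz")
req = swift_request("PUT", url, "hypothetical-token")
print(req["url"])  # https://swift.example.com/v1/AUTH_demo/backups/db.tar.gz
```

In practice a client would hand a request like this to any HTTP library; the point of the sketch is that object storage needs nothing more exotic than plain HTTP, which is what makes it cheap to consume at petabyte scale.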
[The] same thing [happens] with networking. There is [OpenStack] software-defined networking, which is OpenFlow-based, where essentially you can do away with a lot of the physical ASIC [application-specific integrated circuit]-based switching and routing and potentially do all of that in software.
They also have other services within OpenStack, like an identity service called Keystone, an image service known as Glance, and a dashboard service that is called Horizon. So there are all of these different services that are meant to tie the compute, storage and networking together in order to provide uniform, cloud-scale, software-defined networking, storage and compute. Or [you could] think about it as a software-based data center infrastructure using commodity components.
What would be the main benefit of this model?
Nadkarni: If you were to realize the vision for OpenStack, the main benefit of this model is similar to what you gained with Linux. You are no longer beholden to proprietary vendor stacks. You can drive down the cost of your infrastructure by leveraging more commodity components, and you can essentially build your infrastructure in a scalable fashion, but more importantly, in an Opex-heavy fashion. So you don't have to pay for your infrastructure up front. You can buy as you need, pay as you go and use commodity, off-the-shelf hardware to build it. It also means, for cloud-based services, that you can get these services a lot cheaper than what people are used to today.
What would be the main disadvantage of this approach?
Nadkarni: The main disadvantage is that it's a community-based, open source approach, unlike Linux, where you have Linus Torvalds, who is essentially the gatekeeper for all the changes that go into the kernel. There you have very strong rigor and discipline within the community on how things are incorporated into the project.
I think OpenStack suffers from a bit of a community approach because everybody wants to play an equal role in that organization, and there are vendors sponsoring the project who also want their own agendas to be met. So the disadvantage is that until there is a commercial variant of OpenStack -- like the one from Red Hat, say, where even though it's a community-based approach, there is a single vendor's throat to choke -- a lot of people are going to be shy about just downloading OpenStack from the Web and installing and configuring it, because if something should happen, they're on their own. And enterprises don't like the concept of DIY [do it yourself], where essentially they're on their own if something happens.
To what degree can end users expect to see third-party storage vendors working with OpenStack?
Nadkarni: To a large extent, what has played out in OpenStack today is a similar sort of movie to the one that played out with SNIA [the Storage Networking Industry Association] and SMI-S [Storage Management Initiative Specification]. The Fibre Channel industry started off as proprietary and closed, highly non-interoperable, to the point where there was a lot of in-fighting, a lot of battles being fought -- battles of dominance, if you will. Then it came to a point where vendors got to a level of interoperability because of a standards body. And, after all was done, the standards body is slowly losing its relevance because interoperability is pretty much considered to be the lay of the land.
I think OpenStack has a similar risk of vendors trying to use OpenStack as a front to push their agendas. Everybody wants to be in OpenStack, but when you look at what exactly they want to be in OpenStack for, it is really about trying to push their gear and their hardware into OpenStack. From a contribution perspective, the real contributors to OpenStack are probably folks like Rackspace and Red Hat, who don't have a hardware agenda. The risk the project runs is that vendors are really trying to work with OpenStack only to deliver more hardware into their own customer environments. It's a bit of a risk to the project in that you might see certain things very well integrated with OpenStack, say the Cinder interface, but other things like Swift will probably not be touched by the vendor community at all because they're a direct threat to their current platforms. The compute side has a similar situation, and networking has a similar situation. So vendors will work with OpenStack only as long as it helps push their agenda. But if it doesn't, then they want nothing to do with OpenStack.
This was first published in October 2013