When we talk about “hybrid cloud” there is an almost overwhelming propensity to think of a solution that spans the public cloud and the private cloud. The implication is that these are two similar technologies that can be blended into a greater whole, something better than either public or private alone. This, however, is not an accurate way to evaluate hybrid strategies and their impact on the enterprise.
Rather, it should be approached like blending “premium” gasoline with regular. While we can agree that premium fuel costs more than regular, we can argue about whether it delivers superior results in the vehicle in which it is used. And blending premium fuel with lower-octane regular certainly won’t increase performance.
Is hybrid cloud a cost-saving option?
Any number of studies will identify cost savings as a primary driver for enterprise cloud adoption. But the Application Service Provider (ASP) model of a dozen years ago proved that this is not always the case. More akin to the private cloud than anything else, the ASP model actually added cost as it moved servers off premises and into a provider’s data center. Customers still had to pay for their servers, plus the communication between the two sites (far more expensive back then), as well as maintenance and other service fees. With slow wide-area carrier circuits, ASPs delivered inferior performance at increased cost. No wonder the model died an ugly death.
A more appropriate approach, then, is to identify all three players in the mix: public cloud, private cloud, and on-premise technology. The next step is to define them and analyze the financial impact of these options on your enterprise, rather than the technological impact.
We begin with on-premise technology not only because it is the oldest of the models, but also because it is the most expensive. The enterprise retains all of the costs of operating the data center. These costs include the capital expense of equipment, the operating expenses of housing, powering, cooling, and maintaining all the hardware and supporting the software. No economies of scale are realized, adding to the high cost.
Some observers thought of the private cloud as a remote server dedicated to a single customer, but the location of the actual server hardware was rendered inconsequential on September 4, 2012, when Microsoft introduced Windows Server 2012. It was marketed as a “Cloud OS” that enterprises could use to build their own private clouds. Location was no longer the key difference between public cloud and private cloud. Rather, it was whether or not the solution adhered to the cloud definition provided by the National Institute of Standards and Technology (NIST). That adherence qualifies Windows Server 2012 as a “cloud,” while the fact that each instance serves only a single enterprise makes it “private.”
The public cloud is where we leverage the economies of scale delivered by server virtualization. As opposed to the earliest ASP model — where each enterprise customer had to pay for its own servers either outright or as a function of fees — the public cloud enables dozens of customers to share server hardware, splitting the cost of that hardware and its maintenance. This reduces the cost to each customer to a fraction of what they would pay for their own private servers.
Some observers express concern about the safety of data in this multi-tenant server arrangement. What stops data in one server session from “leaking” into another, or being accessed by another? The effective answer is that we protect ourselves the same way we protect any storage: we encrypt the data at rest as well as in transit, and we do not make the key available to the cloud service provider. That way, even if data leakage does happen, the leaked data is unreadable. Likewise, if a government agency attempts to subpoena the data without the owner’s knowledge, it obtains only unreadable ciphertext; to make use of the data, the agency must obtain the key, which requires appropriate notification to the data owner.
Everything is hybrid cloud by definition
When we talk about putting information technology to work in our enterprise, we often refer to it as network or systems integration. This is a very accurate description. We integrate, or combine, hardware, software, and service elements from various producers to create the best, highest-performing computing solution possible.
Almost invariably this arrangement includes on-premise components such as workstations, routers, switches, and security devices. More and more, we’re seeing specific functions migrated to remote data centers or private cloud servers that enable user self-service and elastic resource scaling. Finally, most data centers use public cloud services for various functions, including Customer Relationship Management (CRM), order processing, supply chain management, productivity software, and communications and collaboration tools.
Hybrid cloud agility
The ability to select and combine elements from on-premise, private cloud, and public cloud provides flexibility and business agility to those responsible for delivering superior IT solutions. We should expect that more and more solutions (ultimately all) will be a hybrid of on-premise, private, and public cloud. It is only by combining them that we can fully realize the greatest reduction in cost while optimizing performance.
About the Author
Howard M. Cohen is a contributor to EnterpriseEfficiency.com, a UBM Tech community.