Hyperscale and the Challenge of Internet Data Centers

The “cloud” sounds light, fluffy, and ethereal, but in reality the cloud is one or more huge data centers filled with computers and challenges. Hyperscale challenges include maximizing density to reduce expensive floor space while still keeping the systems cool, managing enormous energy expenses, and securing enough bandwidth.

Juggling system density and heat moderation may be the largest issue, or at least the one most amenable to new innovations. The good news is that reducing heat in the systems reduces the air conditioning load, which reduces energy expenses. But packing more systems into a smaller space while keeping them cool takes enormous effort and attention to the smallest details.

Servers, the building blocks of clouds, have changed, thanks to Google, Amazon, and other hyperscale pioneers. Where earlier data center designs put redundant parts, including power supplies and network connections, in every server, newer designs focus on the applications, not the servers. Why spend money on redundant power supplies in a server when your application is spread across many servers and, if one goes offline, simply continues on the other servers hosting it?
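The sketch below illustrates that shift in Python. The replica addresses and the send_request stub are hypothetical, not any real deployment: the point is simply that redundancy lives in the application logic, which skips a dead server rather than depending on redundant parts inside it.

```python
import random

# Hypothetical replica pool: the application is spread across many plain
# servers instead of a few servers with redundant power and network parts.
REPLICAS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]

def send_request(host, payload):
    """Stand-in for a real RPC; randomly fails to simulate an offline server."""
    if random.random() < 0.25:
        raise ConnectionError(f"{host} is offline")
    return f"{host} handled {payload!r}"

def call_with_failover(payload):
    """Try replicas in shuffled order; the first healthy one serves the request."""
    for host in random.sample(REPLICAS, len(REPLICAS)):
        try:
            return send_request(host, payload)
        except ConnectionError:
            continue  # a failed server is skipped, not repaired in place
    raise RuntimeError("no replica available")

if __name__ == "__main__":
    print(call_with_failover("GET /index"))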

That said, server heat generation still matters. One way to reduce the thermal load is to rely on lower-power server processors, such as Intel Atom processors. A traditional small high-availability setup runs two servers with multiple virtual machines; each server needs enough horsepower to handle its own load plus the other server’s virtual machines as a redundant backup. That means a workhorse processor, but you can’t put 48 of those in one rack and handle the heat load. Sixty-four lower-powered servers with cooler processors, however, mitigate heat buildup much better. Air handling is still necessary, of course, and some companies even run servers without cases to improve the efficacy of natural airflow patterns through the data center.
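A rough back-of-the-envelope comparison makes the point. The per-server wattages below are assumptions for illustration, not measurements of any particular product; since nearly every watt a server draws leaves the rack as heat, electrical load is a reasonable proxy for thermal load.

```python
# Illustrative numbers only: both wattages are assumptions, not vendor specs.
HIGH_POWER_SERVER_W = 350   # assumed draw of a dual-socket "workhorse" server
LOW_POWER_SERVER_W = 65     # assumed draw of an Atom-class microserver

racks = {
    "48 workhorse servers": 48 * HIGH_POWER_SERVER_W,
    "64 low-power servers": 64 * LOW_POWER_SERVER_W,
}

for name, watts in racks.items():
    # Power drawn is effectively heat the cooling system must remove.
    print(f"{name}: {watts / 1000:.1f} kW of power and heat per rack")
```

Under those assumptions the denser low-power rack still dissipates only about a quarter of the heat of the workhorse rack, which is the trade the hyperscalers are making.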

Less powerful servers need less electricity, which is a bonus. Less powerful servers create less heat, meaning less air conditioning, another bonus. Other tricks for handling the heat load include intelligent fan algorithms for servers, racks, and entire data centers, each able to moderate airflow as needed while leaning toward less fan usage to keep power draw low. Quad-rank DIMMs pack more memory onto each module, reducing the heat and power consumption of RAM and saving energy on two fronts.
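As a minimal sketch of what such a fan algorithm does, the controller below idles the fans at a low duty cycle until the inlet temperature passes a target, then ramps linearly. The thresholds and the linear ramp are illustrative assumptions, not any vendor’s actual algorithm.

```python
def fan_duty_cycle(inlet_c, target_c=27.0, max_delta_c=10.0,
                   min_duty=0.2, max_duty=1.0):
    """Map inlet temperature to a fan duty cycle, biased toward low speeds.

    Below target_c the fans idle at min_duty; above it the duty cycle ramps
    linearly, reaching max_duty once the inlet runs max_delta_c degrees hot.
    All thresholds are illustrative values.
    """
    if inlet_c <= target_c:
        return min_duty
    excess = min(inlet_c - target_c, max_delta_c)
    return min_duty + (max_duty - min_duty) * (excess / max_delta_c)

for temp in (24, 28, 32, 37):
    print(f"{temp} C inlet -> {fan_duty_cycle(temp):.0%} fan speed")
```

Because fan power rises roughly with the cube of fan speed, even a controller this simple saves real energy by spending most of its time near the minimum duty cycle.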

Designing data center airflow up front uses less energy than adding more fans to move air through a badly designed environment. Venting exhaust from facing racks into a common area (the hot aisle) keeps the rest of the room cooler, and the concentrated hot air rises more reliably and can be contained or captured depending on the environment. Keep airflow pathways smooth by eliminating breaks in the raised floor and drop-ceiling panels.
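The amount of air that has to move follows from the sensible-heat relation Q = P / (ρ · c_p · ΔT). The constants below are standard values for air; the 12 kW rack and 12 °C temperature rise are assumed figures for illustration only.

```python
# Required cooling airflow for a given heat load: Q = P / (rho * c_p * dT).
RHO_AIR = 1.2      # kg/m^3, density of air near sea level
CP_AIR = 1005.0    # J/(kg*K), specific heat of air

def required_airflow_m3s(heat_w, delta_t_c):
    """Cubic meters of air per second needed to carry heat_w at a delta_t_c rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_c)

rack_heat_w = 12_000   # assumed 12 kW rack
rise_c = 12            # assumed inlet-to-exhaust temperature rise
flow = required_airflow_m3s(rack_heat_w, rise_c)
print(f"{rack_heat_w / 1000:.0f} kW rack, {rise_c} C rise: "
      f"{flow:.2f} m^3/s (about {flow * 2118.88:.0f} CFM)")
```

The same formula shows why containment pays off: allowing a larger inlet-to-exhaust temperature rise cuts the airflow, and therefore the fan energy, needed to remove the same heat.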

And no matter how much bandwidth a data center has, its managers wish they had more. A minimum of three Tier 1 connections should terminate in the data center, preferably approaching from three different directions. Data centers that seem oddly sited given the local climate and energy costs are usually there because that’s where three or more Tier 1 networks intersect.

Data center design remains complicated for all and mastered by few. However, when you control the heat created by dense, efficient servers, keep energy costs in check, and have enough bandwidth (if that’s possible), you are well on the way to a successful data center.

James Gaskin writes books, articles, and jokes about technology, and consults for those who don’t read his books and articles. Email him at [email protected].

Tags: Cloud Computing, Data Center, Technology, Uncategorized