Building a green data center

Diversity in the Ecological Soup by jurvetson (cc) (from Flickr)

At NetApp’s Analyst Days last week, David Robbins, CTO of Information Technology, reported on a new, highly efficient Global Dynamic Lab (GDL) data center that NetApp built in Raleigh, North Carolina. NetApp predicts this new data center will have a power usage effectiveness (PUE) ratio of 1.2. Most data centers today do well if they can attain a PUE of 2.0.

Recall that PUE is the ratio of all power required by the data center (IT power plus chillers, fans, UPS, transformers, humidifiers, lights, etc.) to IT power alone (racks, storage, servers, and networking gear). A PUE of 2 says that as much power goes to powering and cooling the rest of the data center as goes to the IT equipment itself. An EPA report on server and data center efficiency said that data centers could reach a PUE of 1.4 if they used the state-of-the-art techniques outlined in the report. A PUE of 1.2 is a dramatic improvement in data center power efficiency: it cuts non-IT power to a fifth of what a PUE 2.0 facility uses, and to half of even the EPA’s state-of-the-art 1.4 figure.
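
To make the arithmetic concrete, here is a minimal Python sketch of the overhead each PUE implies. The 1 MW IT load is a made-up illustration, not a GDL figure.

```python
def non_it_power(it_power_kw: float, pue: float) -> float:
    """Non-IT power (cooling, UPS, lights, etc.) implied by a given PUE.

    PUE = total facility power / IT power, so overhead = IT * (PUE - 1).
    """
    return it_power_kw * (pue - 1.0)

it_load_kw = 1000.0  # hypothetical 1 MW IT load, for illustration only
for pue in (2.0, 1.4, 1.2):
    print(f"PUE {pue}: {non_it_power(it_load_kw, pue):.0f} kW of overhead")
# PUE 2.0: 1000 kW, PUE 1.4: 400 kW, PUE 1.2: 200 kW
```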

NetApp used many innovations to reach this power effectiveness at GDL. The most important were:

  • Cooling at higher temperatures, which allowed for the use of ambient air
  • A cold-room/warm-aisle layout, which allowed finer control over cooling delivery to the racks
  • Top-down cooling, which used physics to reduce fan load.

GDL was designed to accommodate the higher rack power densities coming from today’s technology. It supports an average of 12kW per rack and can handle a peak load of 42kW per rack. In addition, GDL uses 52U-tall racks, which helps reduce data center footprint. Such high-power, high-density racks require rethinking data center cooling.
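
A rough Python sketch of why that density matters for footprint; the 1.2 MW deployment and the 4 kW/rack "typical" comparison are my assumptions, not NetApp figures.

```python
import math

def racks_needed(total_it_kw: float, avg_kw_per_rack: float) -> int:
    """Racks required to host a given IT load at an average draw per rack."""
    return math.ceil(total_it_kw / avg_kw_per_rack)

# Hypothetical 1.2 MW IT deployment at GDL's quoted 12 kW/rack average:
print(racks_needed(1200, 12))  # 100 racks
# The same load at an assumed more typical 4 kW/rack triples the rack count
# (and the floor space and aisles that go with it):
print(racks_needed(1200, 4))   # 300 racks
```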

Cooling at higher temperatures

Probably the most significant factor behind the improved PUE was planning for much warmer air temperatures. By using warmer air (70-80F/21.1-26.7C), much of the cooling can be based on ambient air rather than chilled air. NetApp estimates that GDL can use ambient air 75% of the year in Raleigh, a fairly warm and humid location. As a result, chiller use is reduced significantly, which generates substantial energy savings from the number 2 power consumer in most data centers.

Also, NetApp is able to use ambient air for partial cooling during much of the rest of the year by running it in conjunction with the chillers. The air handlers purchased for GDL can run on outside air, on the chillers, or on a combination of the two. The chillers also operate more efficiently at the higher temperatures, reducing power requirements yet again.
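
A minimal sketch of how such an air handler might pick its mode, in Python. The setpoints and the mixing margin are illustrative assumptions, not NetApp’s actual control parameters.

```python
def cooling_mode(outside_temp_c: float,
                 supply_setpoint_c: float = 24.0,
                 mix_margin_c: float = 6.0) -> str:
    """Pick a mode for an air handler that can blend outside and chilled air.

    Assumed logic: outside air at or below the supply setpoint means free
    cooling only; moderately above it, outside air assists the chillers;
    otherwise the chillers carry the whole load.
    """
    if outside_temp_c <= supply_setpoint_c:
        return "ambient"
    if outside_temp_c <= supply_setpoint_c + mix_margin_c:
        return "mixed"
    return "chiller"

for t_c in (18.0, 27.0, 35.0):
    print(f"{t_c}C outside -> {cooling_mode(t_c)}")
```

Note that raising supply_setpoint_c (running the cold rooms warmer) widens both the "ambient" and "mixed" bands, which is exactly why warmer operation cuts chiller hours.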

Given the ~20-25F/11.1-13.9C temperature rise across typical IT equipment, one potential problem is that the warm aisles can exceed 100F/37.8C, which is about the upper limit for human comfort. Fortunately, by detecting lighting use in the hot aisles, GDL can increase cold-room cooling to bring temperatures in adjacent hot aisles down to a more comfortable level when people are present.
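
A sketch of that occupancy interlock in Python; the lights-on signal matches the description above, but the setpoints and the linear adjustment are my assumptions.

```python
OCCUPIED_TARGET_C = 32.0  # assumed comfort target while the aisle is occupied

def cold_room_setpoint(base_setpoint_c: float,
                       hot_aisle_temp_c: float,
                       lights_on: bool) -> float:
    """Lower the cold-room supply setpoint while the hot aisle is occupied.

    Lights switching on in a hot aisle serve as the occupancy signal; supply
    air is cooled further so the aisle drifts toward the occupied target,
    then returns to the efficient setpoint once the lights go off.
    """
    if lights_on and hot_aisle_temp_c > OCCUPIED_TARGET_C:
        return base_setpoint_c - (hot_aisle_temp_c - OCCUPIED_TARGET_C)
    return base_setpoint_c

print(cold_room_setpoint(24.0, hot_aisle_temp_c=37.0, lights_on=True))   # 19.0
print(cold_room_setpoint(24.0, hot_aisle_temp_c=37.0, lights_on=False))  # 24.0
```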

One other significant advantage of the warmer temperatures is that warmer air is easier to move than colder air. This provides savings by allowing lower-powered fans to cool the data center.

Cold rooms, warm aisles

GDL built sealed cold rooms on the front side of the racks and a relatively open warm aisle on the other side. Such a design provides uniform cooling from the top to the bottom of a rack. In a more open design, hot air often accumulates and is trapped at the top of the rack, which requires more cooling to compensate. By sealing the cold room, GDL ensures more even cooling of the rack and thus a more efficient use of cooling.

Another advantage of cold rooms and warm aisles is that cooling can be regulated by the pressure differential between the two sides rather than by flow control or spot temperature sensors. This regulation is effective enough that GDL can reduce the air supply to match rack requirements, avoiding the excess cooling that more open designs with flow or temperature sensors require.
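
In control terms, this is something like a simple proportional loop on the cold-room/warm-aisle pressure differential. The Python sketch below is a guess at the idea; the setpoint, gain, and speed limits are all illustrative.

```python
def next_fan_speed(current_pct: float,
                   dp_pa: float,
                   dp_setpoint_pa: float = 5.0,
                   gain: float = 2.0) -> float:
    """One step of proportional control on cold-room overpressure.

    dp_pa is the measured pressure differential (Pa) between the sealed
    cold room and the warm aisle.  If it sags below the setpoint, the racks
    are pulling more air than the fans supply, so speed up; if it rises
    above, back off.  Setpoint and gain are illustrative guesses.
    """
    error = dp_setpoint_pa - dp_pa
    return max(20.0, min(100.0, current_pct + gain * error))

print(next_fan_speed(60.0, dp_pa=3.0))  # 64.0: under-pressurized, speed up
print(next_fan_speed(60.0, dp_pa=7.0))  # 56.0: over-pressurized, slow down
```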

Top-down cooling

I run into this every day at my office: cool air is dense and flows downward; hot air is light and flows upward. NetApp designed GDL with the air handlers on top of the computer room rather than elsewhere. This eliminates much of the ductwork, which often reduces airflow efficiency and requires increased fan power to compensate. Also, by piping the cooling in from above, physics helps get that cold air to the racked equipment that needs it. As for the hot aisles, warm air naturally rises to the air return above them and can then be vented to the outside, mixed with outside ambient air, or chilled before it is returned to the cold room.

In normal data centers cooled from below, fan power must be increased to move the cool air up to the top of the rack. GDL’s top-down cooling reduces fan power requirements substantially compared with below-the-floor cooling.

----

There were other approaches that helped GDL reduce power use, such as using hot-aisle air for office heating, but these seemed to be the main ones. Much of this was presented at NetApp’s Analyst Days last week. Robbins has written a white paper that goes into much more detail on GDL’s PUE savings and the other benefits that accrued to NetApp when they built this data center.

One nice surprise was the capital cost savings generated by GDL’s power-efficient data center design. This was also detailed in the white paper, but at the time this post was published the paper was not yet available.

Now that summer’s here in the north, I think I want a cold room-warm aisle for my office…