Data Center Journal

VOLUME 42 | FEBRUARY 2016


This paradox is at the heart of why the industry is not making significant efficiency gains. Let's take a closer look at where the industry stands on efficiency progress. Since the conversation began in earnest with the EPA Report to Congress on Server and Data Center Efficiency in August 2007, there have been efficiency gains, at least as measured by PUE at the facility level.

The work of The Green Grid, efficiency improvements from infrastructure suppliers, and innovation in data center design and operation have yielded considerable savings. Most notably, the shift from Tier IV to Tier III with reserve bus, as well as from cooling systems with high electrical loads (such as DX and chilled-water systems) to those using vast amounts of water in direct or indirect evaporative systems (and somewhat gaming the PUE metric), has helped bring the average PUE in North America from the 2.4–2.8 range down to a weighted average trending toward 1.5–1.6. It's worth noting that some hyperscale industry leaders are achieving electrical-only PUEs around 1.1, albeit in data-center-friendly climates and with unusual tier architectures. Even without engaging in heroic measures, the use of new power systems, thermal-management techniques and DCIM enables a high-availability, mission-critical, Tier IV data center brought online today to easily achieve a PUE in the 1.4–1.6 range without sacrificing reliability or performance.

But PUE is only part of the picture, because it doesn't adequately "reward" savings made through IT-equipment optimizations. It addresses the ratio between IT and its enabling equipment (how the pie is divided) but stops short of addressing the size of a facility's energy-use pie and the resulting IT productivity per watt. Frankly, PUE is akin to seeing a building with all the lights on and not knowing whether anyone is home to use them. So let's look at how to make that pie, your total energy use, smaller. Here are six strategies that will deliver savings.

1. Increase Server Utilization

Raising server utilization is a major part of enabling server power supplies to operate at maximum efficiency. Some chip manufacturers will tell you that the internal paths from the processor to memory and so on can only handle so much traffic; most say 80 or 85 percent of processor and/or I/O capacity (or the server's SPECpower rating) should be the limit. Today, the estimated average utilization is between 8 and 12 percent (up 50 percent from circa 2007). That percentage could double or quadruple without coming close to the recommended headroom. There are very few instances in business where gains this big are possible for so little effort.

Of course, increasing server utilization improves efficiency, but the biggest benefit comes in the build-out it could delay. Tapping into four times more server capacity could delay your need for additional servers (and possibly more space) by a factor of four.

Beyond the benefits one organization can achieve, reporting IT-asset utilization effectiveness (IUE) could go a long way toward changing this entrenched low-utilization modus operandi. Perhaps IT-asset utilization rates should be reported much like PUE, WUE and CUE, giving us a more complete picture of efficiency in the data center. Given that framework, the first data center operator to start reporting an average IT utilization rate much over 20 percent is bound to be the industry leader.

2. Sleep Deep

In fairness to CIOs everywhere, the fact that servers are on 24x7 when "production" workloads operate more like 9x5 or 10x5 is the largest obstacle to improving average server utilization rates. In Emerson's Energy Logic strategies (released in 2007, updated in 2012 and still relevant), we said that placing servers into a sleep state during known extended periods of nonuse, such as nights and weekends, would go a long way toward improving overall data center efficiency. That advice is even more relevant today.

If you are afraid to pull the trigger, start only with equipment that's under warranty, with SLAs that would put a service technician on site within a guaranteed time frame should a problem occur. When your idle time frame is drawing to an end, power the equipment back up with enough lead time for the technician to arrive according to the SLA, plus two hours of buffer. Powering down your servers has the potential to cut your total data center energy use by 9 percent, so it may be worth the extra effort.

In the future, creative methods will likely emerge as alternatives, such as taking a page from cloud providers and letting servers work the night shift. Could people in Singapore use servers sitting in NYC data centers at night, Eastern time? In general, the answer is yes. For latency reasons you might not want to go as far as halfway around the globe, but you get the picture. This process would require resources to be pooled and aggregated through an intermediary and, like everything, would need some legal indemnifications, so we're clearly not there yet. The scenario is on the near horizon, though.
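To make the wake-up arithmetic concrete, here is a minimal sketch of the scheduling logic described above: power down for a known idle window, then power back up early enough to cover the technician's SLA response time plus the two-hour buffer. The host names, SLA value, credential environment variables and the use of ipmitool for out-of-band power control are illustrative assumptions, not a prescription from the article.

```python
"""Minimal sketch: power servers down for a known idle window and bring
them back early enough to honor the service SLA plus a buffer.

Assumptions (not from the article): hosts are reachable over IPMI, and the
idle window, SLA response time and credentials below are examples only.
"""
import os
import subprocess
from datetime import datetime, timedelta

SLA_RESPONSE = timedelta(hours=4)   # guaranteed technician response time (example)
BUFFER = timedelta(hours=2)         # the two-hour margin suggested above
HOSTS = ["app-01.example.com", "app-02.example.com"]  # hypothetical hosts


def ipmi_power(host: str, action: str) -> None:
    """Issue a chassis power command ('soft' for graceful off, 'on' to restore)."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", os.environ["IPMI_USER"], "-P", os.environ["IPMI_PASS"],
         "chassis", "power", action],
        check=True,
    )


def wake_time(window_end: datetime) -> datetime:
    """Latest safe time to power back up: SLA response plus the buffer."""
    return window_end - SLA_RESPONSE - BUFFER


if __name__ == "__main__":
    production_resumes = datetime(2016, 2, 8, 8, 0)  # e.g., Monday 8:00 a.m.
    print("Power hosts back on no later than", wake_time(production_resumes))
    # A scheduler (cron or similar) would call ipmi_power(host, "soft") at the
    # start of the idle window and ipmi_power(host, "on") at the computed time.
```

The point is the arithmetic rather than the tooling: whatever mechanism you use to power equipment up and down, schedule the wake-up against the SLA, not against the start of the workday.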
3. Move to Newer Servers

In a typical data center, more than half of the servers are "old," consuming approximately 65 percent of the energy while producing 4 percent of the output. This number comes from the e-book Energy Efficient Servers: Blueprints for Data Center Optimization (written by Corey Gough, Ian Steiner and Winston Saunders of Intel). In most enterprise data centers, you can probably shut off all servers four or more years old after you move their workloads, via VMs, to your newer hardware. In addition to the straight energy savings, this consolidation will free up space, power and cooling for your new applications.

Another significant benefit of switching to newer servers is the improvement in idle power. Servers manufactured today enjoy a 30–50 percent reduction in idle energy compared with those deployed four or more years ago. You can use Emerson CUPS (see Figure 1) or SPECpower to visualize the year-over-year computational throughput-per-watt improvements available from moving loads from older to newer equipment.

4. Identify and Decommission Comatose Servers

A recent report by Dr. Jonathan Koomey of Stanford and the Anthesis Group asserts that up to 30 percent of servers are comatose: using electricity but delivering no useful information services. Obviously, these servers would be the perfect place to start the decommissioning process.

Identifying servers that aren't being utilized takes more than measuring CPU and memory usage if you want to be completely sure you aren't turning off something that is still doing useful work.
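Utilization history is still a reasonable first screen for narrowing the candidate list. Below is a minimal sketch, assuming you already collect per-host CPU averages into a CSV file; the file name, column names, 5 percent threshold and 90-sample window are illustrative assumptions, not figures from the article. It simply flags hosts that never rose above a very low utilization level over the observation period.

```python
"""Minimal sketch: flag candidate comatose servers from utilization history.

Assumes a CSV of per-host samples ("host,timestamp,cpu_pct") exported by
whatever monitoring you already run; the threshold and sample-count values
are examples only.
"""
import csv
from collections import defaultdict

CPU_THRESHOLD_PCT = 5.0  # a host never exceeding this is a candidate
MIN_SAMPLES = 90         # e.g., one sample per day for roughly 90 days


def comatose_candidates(path: str) -> list[str]:
    samples = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["host"]].append(float(row["cpu_pct"]))
    candidates = []
    for host, cpu in samples.items():
        # Require a full observation window and no spike above the threshold.
        if len(cpu) >= MIN_SAMPLES and max(cpu) < CPU_THRESHOLD_PCT:
            candidates.append(host)
    return sorted(candidates)


if __name__ == "__main__":
    for host in comatose_candidates("cpu_history.csv"):
        print("review before decommissioning:", host)
```

Even a host that passes this screen may be doing something useful only occasionally, so treat the output as a review list for application owners rather than an automatic shutdown list, which is exactly the caution raised above.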
