Data Center Journal

VOLUME 42 | FEBRUARY 2016


Three Main Contributors to Data Center Inefficiency

To improve data center efficiency, it's necessary to understand what consumes energy in the first place. Although a number of elements contribute to high energy costs, wasteful energy consumption is in many cases attributable to three main factors: 1) oversizing, 2) mechanical systems and 3) air-flow management.

Historically, data center design and operations have focused on reliability and capacity. This focus has led to the unfortunate situation where data centers and network rooms are typically oversized by five times the initial capacity and more than one and a half times the average operating capacity. Oversizing drives excessive capital, maintenance and energy expenses, on the order of 30 percent in some cases.2 Some of this oversizing may be intentional as a safety margin for growth or for redundancy, and some is accidental because of the difficulty in forecasting future loads, often owing to virtualization and new server technology. The unused capacity of data centers and network rooms is not only an avoidable capital cost but an avoidable operating cost when factoring in the regular maintenance and energy of unused systems.

2 http://it-resource.schneider-electric.com/i/483043-wp-37-avoiding-costs-from-oversizing-data-center-and-network-room-infrastructure/2

Power-equipment inefficiencies with UPS systems, transformers, transfer switches and wiring also do their part in driving up energy costs. Efficiency is drastically reduced when equipment is doubled for redundancy or operated well below its rated power. Additionally, the heat generated by power equipment must be removed from the facility, in turn raising energy and maintenance costs even more. One of the biggest sources of inefficiency is the long-held mindset that data centers need more cooling than is actually necessary.
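The cost of that unused capacity can be put in rough numbers. The sketch below is a hypothetical back-of-envelope model, not figures from the article: the provisioned load, average load, PUE, overhead fraction and electricity rate are all illustrative assumptions.

```python
def annual_energy_cost(it_load_kw, pue=1.8, rate_per_kwh=0.10):
    """Annual electricity cost for a given IT load, scaled by the
    facility's PUE (power usage effectiveness). All defaults are
    assumptions for illustration."""
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * rate_per_kwh

designed_kw = 500   # provisioned capacity (assumed)
average_kw = 200    # actual average operating load (assumed)

# Fixed losses (UPS, transformers, fans) scale partly with provisioned
# capacity rather than actual load, so this crude model charges an
# assumed 15% of the idle headroom as standing overhead.
idle_overhead_kw = 0.15 * (designed_kw - average_kw)

print(f"Cost of useful load:   ${annual_energy_cost(average_kw):,.0f}/yr")
print(f"Cost of idle overhead: ${annual_energy_cost(idle_overhead_kw):,.0f}/yr")
```

Even with these rough assumptions, the standing overhead of the unused headroom is a meaningful fraction of the useful-load bill, which is the "avoidable operating cost" the paragraph above describes.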
Traditionally, a general rule of thumb was to keep the data center between 68°F and 70°F to prevent servers from overheating. Although at one time this concept made sense, it's no longer applicable with current IT hardware technology. Also, as with power equipment, when cooling equipment is doubled for redundancy or operated well below its rated power, efficiency falls dramatically. Today, most data centers can operate at ambient room temperature (or even hotter) without jeopardizing IT hardware or uptime.

Steps to Improving Data Center Efficiency and Reducing Energy Costs

Any data center operator, large or small, can dramatically reduce energy consumption through appropriate design of the physical infrastructure and IT architecture. Frequently overlooked yet straightforward steps to improving energy efficiency can bring huge energy savings. As a first step, it's important to conduct a thorough audit and evaluation of the data center infrastructure to get a clear understanding of where energy is being consumed. This process will allow data center managers to know where the problem exists and accurately allocate budget dollars to fix it. From here, data center managers can choose to implement a number of cost-effective methods that will help maximize energy efficiency and reduce the overall TCO.

Power down unused equipment. This is perhaps the cheapest way to reduce energy usage, and facility managers often fail to acknowledge how much electricity unused (or grossly underused) equipment is wasting simply by being powered "on." Just as we're taught to turn off the lights or television in a room we're not currently occupying, it's important to make sure only the IT equipment being used is on. Virtualization makes this task especially easy because it enables server consolidation, which means the data center will need to use and maintain less physical hardware.
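The payoff from an audit that finds idle equipment is easy to estimate. The sketch below uses assumed values throughout, the server count, idle draw, PUE and electricity rate are illustrative, not from the article:

```python
idle_servers = 40      # servers the audit found idle (assumed)
idle_draw_w = 150      # watts each draws while sitting idle (assumed)
rate_per_kwh = 0.10    # electricity price in $/kWh (assumed)
pue = 1.8              # facility overhead multiplier (assumed)

# Annual energy the idle fleet consumes, including the cooling and
# power-distribution overhead captured by PUE.
kwh_per_year = idle_servers * idle_draw_w / 1000 * 8760 * pue
savings = kwh_per_year * rate_per_kwh

print(f"Powering them down saves roughly {kwh_per_year:,.0f} kWh "
      f"(about ${savings:,.0f}) per year")
```

Even a modest fleet of forgotten machines adds up to tens of thousands of kilowatt-hours a year under these assumptions, which is why the audit step comes first.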
With the appropriate DCIM tools in place, managers can also shift whole IT loads around according to time of day and shut down entire lots of servers for hours at a time.
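The time-of-day idea above can be sketched as a simple schedule. This is a hypothetical illustration only; the lot names, hours and always-on flags are invented, and a real DCIM product would drive the shutdowns through its own API rather than a function like this.

```python
BUSINESS_HOURS = range(8, 20)  # 08:00-19:59, assumed peak window

# Illustrative server lots; "always_on" marks workloads that can
# never be shut down regardless of the hour.
server_lots = {
    "batch-analytics": {"always_on": False},
    "web-frontend":    {"always_on": True},
    "dev-sandbox":     {"always_on": False},
}

def lots_to_power_off(hour):
    """Return the lots that can be shut down at the given hour."""
    if hour in BUSINESS_HOURS:
        return []  # everything stays up during business hours
    return [name for name, cfg in server_lots.items()
            if not cfg["always_on"]]

print(lots_to_power_off(3))   # off-peak: non-critical lots can sleep
print(lots_to_power_off(10))  # business hours: nothing is shut down
```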
