Data Center Journal

Volume 28 | August 2013

factor on operating UPSs. Use of multiple smaller units can provide the same level of redundancy while still maintaining higher load factors, where UPS systems operate most efficiently." Furthermore, engineers may overlook operational savings from higher-efficiency UPSs in the name of immediate capital savings. Although a higher-efficiency model may cost more up front, the savings over time can quickly eclipse the initial cost premium. As energy prices rise, the savings grow commensurately.

In addition to load and efficiency, the power-conditioning capabilities of the data center's UPS deployment also affect efficiency and thus cost. Over-engineering the power-supply system—for instance, by using double-conversion UPSs instead of line-interactive UPSs—means higher operating expenses for questionable returns in power conditioning. Here, as with any design decision, weighing the cost of downtime against the initial and ongoing expenses of a design upgrade is critical to finding the right balance of uptime and affordability.

MECHANICAL

A recent Digital Realty Trust survey of large companies in North America found that the average data center power usage effectiveness (PUE) is 2.9. In other words, on average, for every watt consumed by the IT infrastructure, almost two watts go to cooling, power-distribution inefficiency and other unrelated power drains. Much of this extra power consumption in a typical data center results from cooling: companies try to maintain sufficiently cool conditions to avoid stressing sensitive equipment, but the costs of this cooling effort—both monetary and in terms of public image—can be burdensome.

One mistake that can exacerbate this situation is a failure to take airflow into consideration. Kevin Lemke, product line manager of row and small systems cooling for Schneider Electric's IT business, notes that "it is very important to understand the airflow characteristics of the IT equipment. A majority of the IT equipment is front-to-back airflow, but there is also equipment that requires side-to-side, front-to-top, and bottom-to-top airflow." If these airflow characteristics are contrary to the expectations of the design, the result can be mixing of hot and cold air, decreasing the cooling efficiency.

Lemke also identifies the dynamic operation of the IT equipment as something that can be overlooked. IT loads typically vary according to time of day, time of year and so on, and cooling systems should be able to adjust accordingly. Designing the cooling system for a single, static power dissipation in the data center can result in hot spots, overcooling (and thus wasted energy and money) or both. "During peak operation the equipment is going to produce a higher heat load that the cooling solution needs to be designed to handle, but during the off-peak times the cooling solution needs to be able to scale back to make it as efficient as possible," said Lemke.

When designing in a raised floor, engineers should ensure that it leaves sufficient space for the necessary cabling as well as the requisite airflow. As cables accumulate under the raised floor, they can begin to obstruct airflow, potentially leading to dangerous hot spots. At a minimum, obstructions reduce cooling efficiency, raising power consumption and operating costs.
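As a rough illustration of the UPS efficiency trade-off described in the electrical section above, the back-of-the-envelope sketch below compares a lower-efficiency unit with a pricier, higher-efficiency one. All of the figures (load, efficiencies, electricity tariff, cost premium) are hypothetical and serve only to show the arithmetic; they are not vendor data.

```python
# Back-of-the-envelope payback estimate for a higher-efficiency UPS.
# All figures (load, efficiencies, tariff, cost premium) are hypothetical.

HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(it_load_kw: float, efficiency: float) -> float:
    """Energy lost in the UPS per year: input power minus the IT load it serves."""
    input_kw = it_load_kw / efficiency
    return (input_kw - it_load_kw) * HOURS_PER_YEAR

def payback_years(it_load_kw: float, eff_base: float, eff_premium: float,
                  cost_premium_usd: float, tariff_usd_per_kwh: float = 0.10) -> float:
    """Years for the energy savings to recover the extra purchase cost."""
    saved_kwh = (annual_ups_loss_kwh(it_load_kw, eff_base)
                 - annual_ups_loss_kwh(it_load_kw, eff_premium))
    return cost_premium_usd / (saved_kwh * tariff_usd_per_kwh)

# Example: a 400 kW IT load carried by a 92%-efficient unit versus a
# 96%-efficient unit that costs $40,000 more.
print(f"{payback_years(400, 0.92, 0.96, 40_000):.1f} years to break even")
```

With these assumed numbers the premium pays for itself in roughly two and a half years; the efficiencies plugged in should, of course, be read from the vendor's efficiency curve at the load factor the unit will actually see.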
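The PUE figure cited above likewise reduces to a one-line calculation; the short snippet below simply restates that arithmetic, with illustrative function names and a made-up facility as the example.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

def overhead_per_it_watt(pue_value: float) -> float:
    """Watts of cooling, distribution losses and other overhead per IT watt."""
    return pue_value - 1.0

# A hypothetical facility drawing 1,450 kW in total to support a 500 kW IT
# load sits at the survey's average PUE of 2.9, i.e. 1.9 watts of overhead
# for every watt that reaches the IT equipment.
print(pue(1450, 500))             # 2.9
print(overhead_per_it_watt(2.9))  # 1.9
```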
In addition to airflow, designers should remain conscious of how raised-floor space is used: deploying equipment (like battery-based UPS systems) that should be kept elsewhere wastes valuable floor space and can hinder scalability.

PLUMBING

For data centers that rely on a steady supply of water for cooling, designers should remember that, absent some backup system, the utility supplying the water is a single point of failure. Water "outages" may be rare, but they can shut down a data center just as readily as a power outage. Onsite water storage, however, is an added cost and can keep the facility running for only a limited time while waiting for the utility to restore service. Designers should therefore consider the probability of a service disruption when deciding whether to use a water-based cooling system.

In data centers that do use a water-based cooling system, one design mistake to avoid is a failure to take into account variations in piping sizes. Schneider Electric's Kevin Lemke points out that "engineers should ensure they refer to the exact cooling solution's product specifications when designing the piping system that connects between the indoor unit and the outdoor heat exchanger. On all systems, the various piping lengths and diameters change the pressure in the piping systems. The change in pressure will result in a degradation in cooling capacity or will force changes in other piping components to compensate for the added resistance." (A rough illustration of how pipe length and diameter drive pressure drop appears in the sketch at the end of this article.)

Furthermore, Brocade's Victor Garcia cites pumping systems as one area where engineers may go wrong. "Most designers are still designing primary/secondary pumping systems versus primary-only systems. Where primary pumping systems are applicable (constant or steady loads), primary-only systems could help save on initial capital costs and provide more-energy-efficient operations through the life of the system. Designers should ensure that their pumping system is designed with maintenance in mind, having strainers and either isolation valves or bypasses where you might need to perform future changes or where maintenance can help save time and money down the road."

CONCLUSIONS

The increasing complexity of data center facilities makes the engineer's job of designing them to be robust and efficient that much tougher. Naturally, being human, a designer can easily overlook subtle (or not-so-subtle) issues, resulting in a greater potential for downtime, reduced efficiency, higher costs or some combination of the above. Given both the capital and operational expenses of running a data center, these mistakes—although understandable in one sense—can be extremely costly.

The design considerations above for electrical, mechanical, plumbing and overall data center development outline just a few of the pitfalls that engineers should be careful to avoid. Because each data center is unique, however, some of these potential problems will be easier to dodge than others. And, of course, a limited budget will invariably mean companies won't get everything they want out of their facility—although the number of over-budget projects indicates that many are willing (or forced) to shell out more than they planned for the capabilities they need. For engineers designing data centers on behalf of companies, avoiding these common mistakes can greatly ease the process and spare them the wrath of a dissatisfied customer.
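As a footnote to the plumbing discussion, the sketch below gives a minimal sense of why Lemke's warning about piping lengths and diameters matters: it estimates the friction pressure drop for a straight run of chilled-water pipe using the Darcy-Weisbach equation with the Swamee-Jain friction-factor approximation. The pipe sizes, run length and flow rate are hypothetical, and an actual design should follow the cooling manufacturer's piping specifications, as Lemke advises.

```python
import math

def reynolds(velocity_m_s: float, diameter_m: float,
             kinematic_viscosity: float = 1.0e-6) -> float:
    """Reynolds number; default viscosity is water at roughly 20 C."""
    return velocity_m_s * diameter_m / kinematic_viscosity

def friction_factor(re: float, diameter_m: float,
                    roughness_m: float = 1.5e-6) -> float:
    """Swamee-Jain approximation to the turbulent Darcy friction factor
    (default roughness is smooth drawn tubing)."""
    return 0.25 / math.log10(roughness_m / (3.7 * diameter_m)
                             + 5.74 / re ** 0.9) ** 2

def pressure_drop_kpa(length_m: float, diameter_m: float, flow_m3_s: float,
                      density_kg_m3: float = 998.0) -> float:
    """Darcy-Weisbach friction loss over a straight run of pipe, in kPa."""
    area = math.pi * diameter_m ** 2 / 4
    velocity = flow_m3_s / area
    f = friction_factor(reynolds(velocity, diameter_m), diameter_m)
    return f * (length_m / diameter_m) * density_kg_m3 * velocity ** 2 / 2 / 1000

# Hypothetical comparison: the same 5 L/s of chilled water pushed through
# 30 m of 50 mm pipe versus 80 mm pipe.
for d in (0.050, 0.080):
    print(f"{d * 1000:.0f} mm pipe: {pressure_drop_kpa(30, d, 0.005):.1f} kPa")
```

With these assumed numbers, moving from 50 mm to 80 mm pipe cuts the friction loss on the run by roughly a factor of ten, the kind of difference that either erodes cooling capacity or forces larger pumps and fittings if it goes unaccounted for.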
