Data Center Journal

Volume 28 | August 2013

Building a data center can cost millions of dollars, or much more, in capital expenses. Companies therefore have a tremendous incentive to ensure that the process avoids overrunning the budget and is completed on schedule—two feats that are challenging enough by themselves. But even once the facility is operational, mistakes made during the design phase can reduce efficiency or cause downtime, leading to greater costs through lost business, decreased productivity and repair expenses.

Data centers, however, are as different as the companies that operate them. What works well in one facility may not work well in another, so engineers are constantly required to make design decisions on the basis of changing factors, like company needs, available resources (time and money), regulations, industry best practices and so on. Balancing all these considerations in the context of a unique project can be challenging, and engineers are bound to make mistakes. By looking at the shortcomings of past designs, however, companies and engineers can work together to make their individual data center projects as fault-free as possible, given the various constraints.

Of course, owing to the complexity of a data center, no design will be perfect. The commissioning (or "shakedown") phase will likely uncover some deviations from the ideal setup, but careful attention during the design phase—with some wisdom garnered from the errors of others—can minimize the impact of these deviations. Here's a look at some of the most common data center design mistakes that engineers (and companies overseeing the design process) should be on the lookout for.

GENERAL PRINCIPLES

Before considering three major areas of data center design—electrical, mechanical and plumbing—a look at some overall design principles is beneficial. Whatever the component, device or feature of a system, it has a finite probability of failure: wait long enough, and it will eventually fail. Not every kind of failure will cause downtime in a data center, but many will. For this reason, redundancy is critical to maintaining uptime.

An important design principle is thus avoiding single points of failure. This principle may be fairly obvious, but applying it in practice—given the complexity of the data center—can be difficult. The critical question for each component of a data center design is what happens should it fail: will the facility continue functioning (either because of built-in redundancy or appropriate isolation of that single point of failure), or will the failure greatly reduce efficiency or even cause downtime? In the latter case, redundancy or better isolation of the component from the rest of the system can help.

In trying to avoid single points of failure, engineers can also go overboard in the opposite direction. The purpose of redundancy is to have a backup, should a particular component fail. It's easy to take that logic too far, however: just because dual or triple redundancy improves reliability doesn't mean that a hundred-fold redundancy is even better. Because redundancy typically adds some form of "switching" or other infrastructure to enable use of the backup following a failure, it adds more components that can also fail. As it turns out, beyond a certain level, adding more redundancy can actually decrease reliability—not to mention that going overboard can cost a company more than the incremental reliability gains are worth. So, in avoiding single points of failure, engineers must be sure not to go to the other extreme.
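To make the tradeoff concrete, here is a minimal sketch (not from the article; both failure probabilities are assumed values for illustration) of how the switchgear that each backup requires can erode the gains from redundancy:

```python
# Minimal sketch: reliability of an n-way redundant component where each
# additional backup also adds failover switchgear that can itself fail.
# Both probabilities below are illustrative assumptions, not measured data.

P_COMPONENT = 0.05  # assumed annual failure probability of one component
P_SWITCH = 0.01     # assumed failure probability added per switching layer

def failure_probability(n: int) -> float:
    """Probability the whole subsystem fails in a year, given n components."""
    all_components_fail = P_COMPONENT ** n             # redundant group lost
    any_switch_fails = 1 - (1 - P_SWITCH) ** (n - 1)   # switchgear lost
    # Treat the two failure modes as independent.
    return 1 - (1 - all_components_fail) * (1 - any_switch_fails)

for n in range(1, 6):
    print(f"{n} component(s): P(failure) = {failure_probability(n):.4f}")
```

With these assumed numbers, the failure probability drops sharply going from one component to two, then creeps back up as each further backup adds switchgear: past that point, extra redundancy buys nothing and actually costs reliability.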
A related concept in data center design (or the design of any system) is simplicity: the more complex a system, the more prone it is to failure resulting from a design flaw or faulty component. A simpler data center approach, where practical, generally enables easier troubleshooting, keeps costs down and improves uptime. The decision to add complexity to the design should take into account the return on "investment" as well as the effects on reliability.

Another high-level design mistake is a failure to enable scalability. Many companies are struggling with a dearth of available capital, making "right-sizing" and modularity critical to affordably meeting customer demands. Instead of shelling out extra capital at the start to add capacity that is unneeded at the time, in hopes that the business will grow into it, some companies aim instead to build what they need and add capacity as they go. Victor Garcia, Director of Facilities at Brocade, notes, "Most data centers are still being overly sized from both an electrical and mechanical standpoint, and many data center operators don't have a handle on their current and future density requirements. This leads to oversized and inefficient electrical-distribution systems. From oversizing systems and poor load phase balancing at the rack level, operators will find themselves with poor power factors." During the design phase, engineers should keep this goal of scalability in mind when laying out and selecting equipment for the data center.

ELECTRICAL

The electrical and mechanical (cooling) aspects of data center design have significant overlap; one part of the data center design that can fail to garner requisite attention is power density. As companies try to cram more compute power into smaller volumes (for instance, more servers per rack), the result can be higher operating costs due to hot spots, which hamper cooling efficiency. These hot spots require greater output from cooling equipment (and thus more energy consumption), typically leading to unnecessarily low temperatures in other parts of the data center. Thus, spreading the power load as much as possible to avoid high-density zones—and thus hot spots—can aid cooling efficiency and reduce cooling costs.
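As a rough illustration (the rack loads below are hypothetical), the same total IT load produces a much lower peak density, and hence milder hot spots, when it is spread across more racks:

```python
# Illustrative sketch: identical 60 kW IT loads, concentrated vs. spread.
# Cooling must be provisioned for the hottest rack, so peak density matters.
concentrated = [12, 12, 12, 12, 12, 0, 0, 0, 0, 0]  # kW per rack
spread = [6] * 10                                    # kW per rack

for label, racks in (("concentrated", concentrated), ("spread", spread)):
    print(f"{label:>12}: total = {sum(racks)} kW, peak rack = {max(racks)} kW")
```

Halving the peak per-rack load in this toy layout means the cooling plant no longer has to chase a 12 kW hot spot while overcooling the rest of the room.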
Another mistake in electrical infrastructure is designing for small loads on the uninterruptible power supplies (UPSs), as well as failure to take into account the longer-term benefits of higher efficiency. According to Lawrence Berkeley National Laboratory, "When using battery-based UPSs, design the system to maximize the load
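The LBNL guidance reflects the fact that battery-based UPS efficiency typically falls off at low load fractions. A back-of-the-envelope sketch (the efficiency curve, IT load and electricity price below are assumptions for illustration; real curves come from the vendor's data sheet) shows why an oversized, lightly loaded UPS is expensive to run:

```python
# Hypothetical efficiency curve for a battery-based (double-conversion) UPS:
# efficiency by fraction of rated load. Values are illustrative only.
UPS_EFFICIENCY = {0.10: 0.80, 0.25: 0.89, 0.50: 0.93, 0.75: 0.94, 1.00: 0.95}

IT_LOAD_KW = 200.0     # critical load actually served (assumed)
PRICE_PER_KWH = 0.10   # assumed electricity price, USD
HOURS_PER_YEAR = 8760

for load_fraction, eff in sorted(UPS_EFFICIENCY.items()):
    # An oversized UPS serves the same 200 kW at a low load fraction,
    # and therefore at a lower efficiency and higher conversion loss.
    losses_kw = IT_LOAD_KW / eff - IT_LOAD_KW
    annual_cost = losses_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"running at {load_fraction:4.0%} of rating: "
          f"eff {eff:.0%}, losses {losses_kw:5.1f} kW, ~${annual_cost:,.0f}/yr")
```

Under these assumptions, running at 10% of rating wastes roughly $35,000 more per year in conversion losses than running near full load, which is why the design target is to maximize UPS loading.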