Data Center Journal

Volume 29 | November 2013


power. With all that power being consumed, cooling the data center consumes even more power. Some companies are building data centers in traditionally "cold" locations (such as Duluth, MN, and Finland) so they can use the cold outside air to cool the data center more economically.

The wiring for a data center can be a work of art or a bird's nest. Most data centers have at least three different types of wiring: copper, fiber and power. In addition, to provide increased resiliency, data centers run two sets of each type of wire, referred to as the A side and the B side. This allows the machines housed in the data center to be connected to redundant power, networks and so on by being connected to both the A-side and B-side wiring. Keeping separation between the copper wires and the power cords is important because the power cords can cause interference that disrupts the copper signals. Fiber, of course, is not affected by that type of interference. Two options exist for running the wiring: under the floor and above the rack. Some data centers use a combination of the two, but the general trend is to run the cables through troughs mounted above the racks, which makes the wire much easier to run, locate and maintain.

Current Cost Considerations for Size and Space

With all the power, cooling and wiring requirements, data centers are expensive. Current studies indicate that building a data center can cost $1,500-$2,000 per square foot of data center space, not including costs for racks, servers and other equipment. Renting square footage within a data center (normally referred to as colocation) can run from $30-$50 per square foot per month, depending on the type of data center.

The Uptime Institute has developed a classification system for data centers based on the redundancy and resiliency built into the facility. Four tiers categorize data centers from least resilient (Tier 1) to most resilient (Tier 4). Additional resiliency is accomplished via redundant hardware and cross connections, and the more resilient the data center, the more expensive it will be.

With the increased use of the internet and customer demands for systems that are available 24x7, companies are beginning to move their applications from traditional, in-house Tier 1 (or at best Tier 2) data centers to professionally managed Tier 3 or Tier 4 colocation facilities. These facilities cost more to use, so companies are spending considerable amounts of money trying to reduce their footprint and the amount of wasted space (aisles and the like). Technologies such as blade servers and virtualization have allowed companies to shrink the footprint, but the amount of wasted space remains high because current racks require access aisles on both sides: one to reach the front of the server, switch or other equipment, and the other to reach the cabling on the back. Traditional designs using traditional equipment, which call for a 3-4 foot wide aisle on each side of the racks, waste a significant amount of space. New fiber management technology and cassette connections allow the elimination of the back-side aisle because all of the functionality and cables can be accessed from the front.
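To make the space savings concrete, here is a minimal back-of-the-envelope sketch in Python. The rack dimensions and the 3.5-foot aisle are illustrative assumptions drawn from the figures above, not measurements from any particular facility.

    # Floor space charged to one rack position in a traditional two-aisle
    # layout versus a front-access layout that eliminates the rear aisle.
    # All dimensions are illustrative assumptions, not measured values.

    RACK_WIDTH_FT = 2.0    # assumed 24-inch-wide rack
    RACK_DEPTH_FT = 2.5    # assumed 30-inch-deep rack
    AISLE_FT = 3.5         # midpoint of the 3-4 foot aisle cited above

    def sqft_per_rack(aisles_per_row):
        # An aisle is shared by the two rows it separates, so each row
        # is charged half the depth of every adjacent aisle.
        charged_depth = RACK_DEPTH_FT + aisles_per_row * (AISLE_FT / 2)
        return charged_depth * RACK_WIDTH_FT

    traditional = sqft_per_rack(aisles_per_row=2)   # front and rear aisles
    front_access = sqft_per_rack(aisles_per_row=1)  # front aisle only

    print(f"Traditional:  {traditional:.1f} sq ft per rack position")   # 12.0
    print(f"Front-access: {front_access:.1f} sq ft per rack position")  # 8.5
    print(f"Space saved:  {1 - front_access / traditional:.0%}")        # 29%

Under these assumptions, eliminating the rear aisle reclaims roughly 29% of the floor space charged to each rack position.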
Scalability is Crucial to Cost-Effective Growth

Traditional fiber management was overkill: there were simply too many needless components driving up costs. Today's fiber management must be designed from conception for high-density environments, flexible and configurable so that it can scale to fit a variety of environments.

In addition, today's economic environment is forcing all data center providers to maximize the alignment between capital equipment and data center utilization rates. As a result, it makes economic sense to take fiber management down to the lowest common denominator that aligns with the fiber constructions of the day: twelve fibers at a time. Fiber management that scales in 12-port increments allows the data center to upgrade port count as demand grows, but it must do so without loss of space at full configuration.

Careful attention must be paid to every element of the fiber. One area of concern is the protection of buffer tubes; most traditional solutions are inadequate because the buffer tube is stored in a common route path and area. Solutions that address this challenge with in-device buffer tube storage not only reduce the footprint of the overall device but also enhance the protection of the fiber sub-unit.

Superior Density Reduces Real Estate Costs

Fiber management design has long promoted the need for density. We push to increase the number of ports per RU of rack space while balancing the need for fiber access. Regardless of whether you own your central office, rent, or choose to colocate, real estate is a cost of doing business.

To attach an independent cost metric to this savings of space, we use the cost per square foot to rent space in a colocation environment ("a cage") from a third party. While some locations will be significantly higher, we use a figure of $30 per square foot per month.

                                   Traditional frame    High Density frame
    # of Ports                     1728                 2016
    Footprint                      24" x 30"            18" x 36"
    Square ft per rack             5                    4.5
    $ per square ft per month      $30                  $30
    Cost of footprint per month    $150                 $135
    Price per port per month       $0.0868              $0.067
      (if maximized)
    Cost of footprint per year     $1,800               $1,620
    Cost per port per year         $1.04                $0.80

Assuming the port count is fully maximized, dividing the yearly cost of the footprint by the number of ports on the frame puts the cost per port for the high-density solution at $0.80 per port per year. This is $0.24 per port per year less than the $1.04 per-port cost of the traditional solution, a cost savings of 23%.
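The table's per-port figures follow from straightforward arithmetic; the short Python sketch below reproduces them using only values that appear in the table.

    # Reproduce the per-port cost arithmetic from the table above.
    RENT_PER_SQFT_PER_MONTH = 30.0   # colocation rate used in the comparison

    def cost_per_port_per_year(ports, sqft_per_rack):
        # Assumes the frame's port count is fully maximized.
        yearly_footprint_cost = sqft_per_rack * RENT_PER_SQFT_PER_MONTH * 12
        return yearly_footprint_cost / ports

    traditional = cost_per_port_per_year(ports=1728, sqft_per_rack=5.0)
    high_density = cost_per_port_per_year(ports=2016, sqft_per_rack=4.5)

    print(f"Traditional frame:  ${traditional:.2f}")         # $1.04
    print(f"High density frame: ${high_density:.2f}")        # $0.80
    print(f"Savings: {1 - high_density / traditional:.0%}")  # 23%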
