Data Center Journal

VOLUME 53 | DECEMBER 2017

Issue link: http://cp.revolio.com/i/915954

… functions compared with x86 platforms. Regardless of CPU preference, however, each architecture and platform uses high-speed data buses to transfer data between the CPUs, memory, storage devices and add-in cards. PCI Express (PCIe) is the dominant data bus in servers because of its low implementation cost, high bandwidth, and availability in most CPUs, FPGAs, SoCs and ASICs. The PCI Special Interest Group (PCI-SIG) recently introduced its fourth-generation PCIe specification, which increases the data rate from 8 Gbps to 16 Gbps. In addition to appearing in server motherboards, PCIe is seeing wide adoption in data center storage as solid-state drives (SSDs) become favored over hard-disk media. The expanded use of the PCIe data bus is driving the need for more, and higher-precision, PCIe reference clocks throughout the entire rack, from the server CPU all the way down to each SSD.

Solid-state storage uses the NVM Express (NVMe) protocol rather than the SAS and SATA serial protocols that serve in legacy hard-disk storage designs. An NVMe-based SSD connects to a storage system over a standard PCIe connector, meaning PCIe reference clocks are necessary for every NVMe-based SSD. It's also common for flash-array storage systems to use FPGAs or custom controller ASICs to manage the traffic between the servers and SSDs, each of which needs its own high-performance reference clocks. Although hard disks are expected to remain the dominant data center storage media for the next several years, flash-array deployment is growing rapidly. Industry analysts anticipate a steep ramp in flash-array-storage adoption in 2018–2020, driven primarily by web-service data centers.

Accelerator Cards

The design cycle for new data center equipment is typically two years. To accommodate new software and web-services product launches on a faster schedule, data center architects have started developing specialized processor add-in cards that provide alternative types of processing power optimized for web search, artificial intelligence or machine learning. Add-in cards plug into a standard server motherboard over a PCIe connector, immediately providing expanded capabilities to an existing server. The design cycles for add-in cards can be as short as six months, giving operators and web companies added capabilities in a data center without rearchitecting or reoutfitting an entire facility with new servers. Many types of add-in cards based on FPGAs, graphics processing units (GPUs) and custom ASICs have been deployed in data center servers over the past few years. This trend is expected to accelerate with the arrival of new GPU, FPGA and SoC products optimized for specific applications.

In addition to PCIe, adoption of new alternative protocols is beginning. PCIe Gen4, CCIX, NVLink, OpenCAPI and Gen-Z are enabling faster data transfer between CPUs, memories and accelerator cards, achieving data rates of 16–32 Gbps. Each data-bus link requires a reference clock for the SerDes in both the transmitter and receiver ICs. Given these high data rates, the reference clocks must be extremely precise to ensure robust signal integrity and minimize the bit-error rate. As more add-in cards are deployed, the number of reference clocks in a server or storage design also increases. Jitter performance requirements, board-space constraints and power-consumption budgets are all primary factors in server and storage design, making clock generators the ideal solution.
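To put those numbers in perspective, the unit interval (the time allotted to a single bit) shrinks to 62.5 ps at 16 Gbps and 31.25 ps at 32 Gbps, so even a few hundred femtoseconds of reference-clock jitter consumes a measurable slice of the timing budget. The short Python sketch below works through that arithmetic; the 300 fs RMS jitter figure is an illustrative assumption, not a specification value.

```python
# Unit-interval arithmetic for the serdes rates discussed above.
# Data rates (8, 16, 32 Gbps) come from the article; the 300 fs RMS
# reference-clock jitter figure is an illustrative assumption only.

LINE_RATES_GBPS = [8, 16, 32]
ASSUMED_RMS_JITTER_FS = 300.0  # hypothetical ultra-low-jitter clock, in femtoseconds

for rate_gbps in LINE_RATES_GBPS:
    unit_interval_ps = 1e12 / (rate_gbps * 1e9)   # duration of one bit, in picoseconds
    jitter_fraction = (ASSUMED_RMS_JITTER_FS / 1000.0) / unit_interval_ps
    print(f"{rate_gbps:>2} Gbps: UI = {unit_interval_ps:6.2f} ps, "
          f"{ASSUMED_RMS_JITTER_FS:.0f} fs RMS jitter = {jitter_fraction:.2%} of one UI")
```

At 16 Gbps, for example, 300 fs of RMS jitter already represents roughly half a percent of one unit interval before any other impairments are considered.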
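The same clock-per-link arithmetic applies on the storage side: because every NVMe SSD is itself a PCIe endpoint, each drive terminates its own reference-clocked link. The sketch below is a minimal illustration, assuming a Linux host where NVMe controllers are exposed under /sys/class/nvme with the standard PCI sysfs attributes; it simply lists each drive alongside its negotiated link speed and width.

```python
# List NVMe SSDs with their negotiated PCIe link parameters (Linux sysfs).
# Assumes the standard layout under /sys/class/nvme; current_link_speed and
# current_link_width are the stock PCI sysfs attributes of the backing device.
from pathlib import Path

NVME_CLASS = Path("/sys/class/nvme")

def read_attr(path: Path) -> str:
    try:
        return path.read_text().strip()
    except OSError:
        return "unknown"

if NVME_CLASS.is_dir():
    for ctrl in sorted(NVME_CLASS.glob("nvme*")):
        pci_dev = ctrl / "device"  # the PCIe function backing this NVMe controller
        model = read_attr(ctrl / "model")
        speed = read_attr(pci_dev / "current_link_speed")  # e.g. "16.0 GT/s PCIe" for Gen4
        width = read_attr(pci_dev / "current_link_width")  # e.g. "4" for a x4 drive
        print(f"{ctrl.name}: {model} (link {speed}, x{width})")
else:
    print("No NVMe controllers found (or not a Linux host).")
```

Each entry in that list represents another PCIe link, and therefore another reference clock, that the storage design has to supply.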
Summary

Data centers are increasingly important in many aspects of our lives, enabling information storage on a vast scale as well as cloud services and emerging artificial-intelligence systems. To continue supporting the rapid pace of new innovations and applications running in the cloud, architects and hardware designers must continue to increase bandwidth in the servers, storage equipment and switching networks of data centers. The move to 100GbE in data center interconnects and leaf/spine switches, PCIe Gen4 in servers and add-in cards, and NVMe in solid-state storage all exemplify the adoption of new technologies to address the need for higher bandwidth. To ensure that these technologies reach their maximum potential, system designers must place greater importance on clock-tree design and use ultra-low-jitter reference clocks throughout the data center.

About the Author: Kyle Beckmeyer serves as product marketing manager for Silicon Labs' timing products, where he is responsible for product strategy, new-product definition and business development in the data center, communications and industrial markets. Kyle joined the company in 2013, bringing eight years of timing experience and market knowledge. Previously, he worked in the timing divisions at Integrated Device Technology (IDT) and Integrated Circuit Systems (ICS). Kyle holds a Bachelor of Science degree in electrical engineering from the University of California, Davis, and a master's degree in business administration from Santa Clara University.

Figure 1. Leaf-spine network architecture.
