Data Center Journal

VOLUME 53 | DECEMBER 2017

Issue link: https://cp.revolio.com/i/915954


Data Center Trends Driving the Next Wave in Timing Innovation
By Kyle Beckmeyer

The rapid proliferation of streaming video, IoT, social media and cloud-based enterprise software, as well as the upcoming adoption of 5G wireless, is collectively driving the need for higher-bandwidth data centers optimized to run a multitude of complex tasks and applications.

The rollout of new software and service offerings has traditionally depended on new hardware being deployed in data centers. Until recently, new software and service offerings had to align with the introduction of new servers, storage and switches, often on a two-year refresh cycle. The rate of new services being introduced for cloud computing, software as a service (SaaS) and web services is now outpacing this fixed-hardware upgrade cycle, presenting challenges for data center operators and web-services companies.

To meet demand, service providers, data center operators and web-services companies are rapidly moving to a software-defined networking (SDN) model that abstracts software and services away from the underlying computing, switching and storage hardware. Service providers and data center operators are adopting new hardware technology that supports the industry transition to SDN while simultaneously increasing the speed and bandwidth between and within data centers. Servers, storage systems, spine/leaf switches, aggregation routers and optical transponders are all going through a seismic technology shift. They're adopting new 100/200/400Gbps optical transmission technologies, higher-speed PCIe Gen4 and cache-coherent interconnects (CCIX) for accelerator data buses, NVM Express–based solid-state storage, specialized processing technology optimized for machine learning and artificial intelligence, and new memory technologies to meet the ever-increasing demand for higher-bandwidth networks.
A common thread throughout this data center bandwidth upgrade is that reference-clock timing requirements are growing more stringent. Now more than ever, system architects must pay close attention to timing and clock-tree design during hardware design.

DATA CENTER INTERCONNECTS

Data centers connect to each other and the underlying core and aggregation telecom network through high-speed optical-fiber connections. Coherent optics is the latest technology seeing implementation in data center aggregation switches and optical transponders, providing the ability to transfer more information across a fiber-optic cable at speeds of 100Gbps today and up to 600Gbps in the near future. At a high level, coherent optics combines advanced high-speed digital signal processing and high-speed data converters to modulate both the amplitude and phase of the light being transmitted between each transmitter and receiver, enabling more data to travel over existing fiber networks.

The data converters in both the transmitter and receiver require very low-jitter, high-frequency reference clocks, often in excess of 1.7GHz. In addition, reference timing is necessary to support digital signal processing. Initial 100Gbps coherent optical-line-card and module designs have used multiple timing ICs and oscillators to satisfy these requirements, necessitating a considerable amount of board space and cost. To address these design challenges, new monolithic high-frequency jitter-attenuating clock ICs have been introduced, consolidating all reference clocks of a coherent-optics design into a single chip while achieving ultra-low jitter performance of less than 100fs (RMS).

SPINE AND LEAF SWITCHES

Spine and leaf switches create a network of connections between racks of servers and storage equipment, evenly distributing traffic throughout the data center.
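Femtosecond RMS jitter figures like those quoted above are defined by integrating a clock's phase-noise profile over a specified offset-frequency band and converting the result to seconds at the carrier frequency. The following is a minimal Python sketch of that computation; the 156.25MHz carrier and the piecewise phase-noise points are illustrative assumptions, not data from any particular device.

```python
import math

def rms_jitter_fs(carrier_hz, profile):
    """Integrate a phase-noise profile given as (offset_hz, dBc/Hz) points
    and convert the result to RMS jitter in femtoseconds."""
    area = 0.0  # integrated single-sideband noise power (linear, rad^2)
    for (f1, l1), (f2, l2) in zip(profile, profile[1:]):
        # trapezoidal integration after converting dBc/Hz to linear power
        p1 = 10 ** (l1 / 10)
        p2 = 10 ** (l2 / 10)
        area += 0.5 * (p1 + p2) * (f2 - f1)
    # RMS phase deviation (both sidebands) scaled by the carrier period
    return math.sqrt(2 * area) / (2 * math.pi * carrier_hz) * 1e15

# hypothetical phase-noise profile for a 156.25MHz reference clock,
# sampled across a 12kHz-20MHz integration mask
profile = [(12e3, -130.0), (100e3, -140.0), (1e6, -150.0), (20e6, -155.0)]
print(f"{rms_jitter_fs(156.25e6, profile):.0f} fs RMS")  # prints "215 fs RMS"
```

The same integration explains why the mask boundaries matter: widening the upper limit of the band pulls in more broadband noise floor, which is why serdes jitter specifications always name the mask alongside the femtosecond number.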
As Figure 1 shows, leaf switches sit atop each rack, providing downstream connections to the servers and upstream connections to each of the spine switches in the network. Next-generation spine- and leaf-switch designs are adopting switch SoCs that include both 28Gbps and 56Gbps serializers/deserializers (serdes) to support downstream port-bandwidth migration from 10GbE to 25/40GbE and upstream port migration to 100GbE. These increased speeds require significant advances in reference-clock jitter performance, with maximum specifications as low as 150fs (RMS) across a 12kHz–20MHz mask for the 56Gbps serdes. Additional system clocks are also required for FPGAs, CPUs, memory, CPLDs and board-management controllers (BMCs) in these designs. Satisfying these stringent timing requirements in 100GbE has become increasingly difficult using traditional oscillators and clock-buffer solutions, making high-performance multioutput clock generators or jitter-attenuating clocks the preferred solutions.

SERVERS AND STORAGE

Most server and storage processors in today's data centers are based on the Intel x86 architecture. Increasingly, new products are using Power (IBM) and ARM architectures. Power- and ARM-based platforms generally require additional clocks for the processors and other I/O
