Data Center Journal

Volume 27 | May 2013


family, are being designed with the capabilities required for deterministic performance, linear scalability and programmability. Many of these processors also have advanced resource-slicing paradigms built into the architecture.

Making SDN Network Elements Smart and Fast

The role each network element must play in forwarding, state distribution, traffic engineering and access-policy management on a per-service-slice basis has the potential to adversely affect performance in an SDN. Because companies like LSI work closely with all major vendors of networking equipment, they have particular insight into how these vendors are designing next-generation solutions. And without exception, these vendors see the need for more versatility and greater intelligence in the integrated circuits being used.

The programmability requirement for network elements in an SDN is met with the use of multicore processors that natively support advanced device-level interconnect and hardware-based virtualization. In addition to multicore processors, intelligent SDN silicon needs to offer deterministic performance, and that can only be achieved using function-specific hardware accelerators interconnected with the CPU cores. Advances in silicon fabrication now make it possible to place more of these service-slice-aware, function-specific acceleration engines (along with more processor cores) on a single system-on-chip (SoC) integrated circuit. Examples of acceleration engines include deep packet inspection, packet classification and processing, encryption/decryption, digital signal processing, transcoding and traffic management. And unlike in the past, it is now possible to have these engines accelerating service flows simultaneously, rather than in a serial fashion, which significantly reduces latency as traffic traverses a network element.

SDN protocols like OpenFlow require advanced flow-level processing silicon that can perform multiple, iterative lookups per packet according to each packet's header fields (a simplified sketch of this kind of multi-table lookup follows the conclusion below). For a given network-element line card, the line-facing port switching devices are generally pipelined and thus need to rely on intelligent packet-processing extensions to offload the newer and more advanced protocols like OpenFlow. In distributed architectures like SDN, the offload can similarly be distributed among servers and storage systems, in effect turning them into fully capable network elements. A pivotal function in an SDN line card is a proxy that essentially translates commands from a central controller, which may also require extending the capabilities of the line card with newer protocols and other functions.

Conclusion

The data deluge is forcing IT managers to be more creative when scaling data centers. Changes like server virtualization, SAN/NAS convergence and greater use of the cloud are causing data center networks to become both more dynamic and more services oriented. These changes are, in turn, driving a paradigm shift in the way
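To make the flow-level processing mentioned above more concrete, here is a minimal software sketch of the kind of multi-table, match-action lookup an OpenFlow-style pipeline performs on each packet's header fields. It is purely illustrative: the table, field and action names are hypothetical, and it models only the general lookup flow, not the full OpenFlow specification or any vendor's silicon.

```python
# Illustrative sketch of an OpenFlow-style multi-table lookup pipeline.
# All names (fields, actions, tables) are hypothetical examples.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowEntry:
    match: dict                       # header fields this entry matches on
    priority: int
    actions: list                     # e.g. ["set_vlan:100", "output:3"]
    goto_table: Optional[int] = None  # continue lookups in a later table

@dataclass
class FlowTable:
    entries: list = field(default_factory=list)

    def lookup(self, pkt: dict) -> Optional[FlowEntry]:
        # Highest-priority entry whose match fields all agree with the packet.
        candidates = [e for e in self.entries
                      if all(pkt.get(k) == v for k, v in e.match.items())]
        return max(candidates, key=lambda e: e.priority, default=None)

def process_packet(pipeline: list, pkt: dict) -> list:
    """Walk the tables in order, accumulating actions: the iterative
    per-packet lookups an OpenFlow-style pipeline performs."""
    actions, table_id = [], 0
    while table_id is not None and table_id < len(pipeline):
        entry = pipeline[table_id].lookup(pkt)
        if entry is None:             # table miss: drop here (real pipelines make this configurable)
            return ["drop"]
        actions.extend(entry.actions)
        table_id = entry.goto_table   # None ends the pipeline
    return actions

# Example: classify on ingress port, then forward on destination MAC.
pipeline = [
    FlowTable([FlowEntry({"in_port": 1}, 10, ["set_vlan:100"], goto_table=1)]),
    FlowTable([FlowEntry({"eth_dst": "aa:bb:cc:dd:ee:ff"}, 5, ["output:3"])]),
]
print(process_packet(pipeline, {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}))
# -> ['set_vlan:100', 'output:3']
```

In hardware, each of these tables maps to TCAM or accelerator resources and the lookups run at line rate, which is why the article argues that pipelined switching silicon needs intelligent packet-processing extensions to support such protocols.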
Figure 3: The service orchestration layer will need to interact in a closed feedback loop with the software-defined network and the services-aware solutions in the services-oriented networking architecture. (Diagram: the service orchestration layer exchanges SDN abstractions and service metadata, that is, data, control and management function requirements, with SDN- and services-aware solutions; the silicon implications and infrastructure requirements include programmable data and control planes, resource virtualization/slicing, forwarding/state distribution and resource reservation per slice, traffic engineering and access policies per slice, slice-based functional pipelines, and intelligent abstractions.)
