Data Center Journal

VOLUME 46 | OCTOBER 2016



But that was eons ago in terms of the data center and the changes it has experienced. Internet traffic, for example, averaged 11,200 terabytes a month in 1998 compared to more than 88 million terabytes a month in 2016. There were far fewer servers, rack densities were below 1 kW, and virtualization was years away from making its presence felt. Data center management is so dramatically different now than it was in 1998 that it is remarkable IPMI has endured this long.

Outgrowing IPMI

The specification has, of course, received multiple upgrades and enhancements since its launch, but as any software developer will attest, there is a limit to how much you can improve something that's fundamentally broken. IPMI, as a byte-code protocol, isn't user friendly. Worse still, IPMI wasn't architected for security, massive scale, or clean support for vendor extensions. In short, it isn't well suited to a future that is open, automated and software defined.

Facilities vs. IT Protocols

At the same time IPMI was being stretched to accommodate the growing challenges in IT, a parallel evolution was occurring on the operational technology (OT) front. Once somewhat independent of the systems they protected, with limited inputs and intelligence, data center power and cooling systems were becoming more integrated with the environments in which they operate. They were not only moving physically closer to IT systems but, driven by the demand for more dynamic management and rising downtime and energy costs, were increasingly expected to operate with greater efficiency and provide remote visibility into data center operations. UPS and battery systems were configured with remote monitoring capabilities, in-rack power monitoring became central to energy management, and intelligent thermal controls networked room, row and rack cooling systems to deliver greater precision and enhanced adaptability.
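The point that IPMI is a byte-code protocol is easiest to see in how even a trivial request is framed. The sketch below assembles a raw IPMB-style "Get Device ID" request; the frame layout and two's-complement checksum follow the published IPMI conventions, but the slave addresses and sequence number here are arbitrary illustration values.

```python
def ipmb_checksum(data: bytes) -> int:
    """Two's-complement checksum: the bytes plus the checksum
    must sum to zero modulo 256, per the IPMI/IPMB framing rules."""
    return (-sum(data)) & 0xFF

def build_ipmb_request(rs_sa: int, netfn: int, rq_sa: int,
                       seq: int, cmd: int, data: bytes = b"") -> bytes:
    """Frame a raw IPMB request; every field is an opaque byte."""
    header = bytes([rs_sa, netfn << 2])          # responder addr, netfn/LUN 0
    chk1 = ipmb_checksum(header)
    body = bytes([rq_sa, seq << 2, cmd]) + data  # requester addr, seq/LUN, command
    chk2 = ipmb_checksum(body)
    return header + bytes([chk1]) + body + bytes([chk2])

# "Get Device ID": NetFn App (0x06), command 0x01 -- just bytes on the wire.
msg = build_ipmb_request(rs_sa=0x20, netfn=0x06, rq_sa=0x81, seq=0x05, cmd=0x01)
print(msg.hex())  # prints "2018c88114016a"
```

Nothing in that hex string is self-describing; an operator needs the specification in hand to decode any of it, which is precisely the usability problem the article describes.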
However, just as IPMI was limited by the timing of its initial development, infrastructure systems retained the "facility" legacy they carried into the data center. They relied primarily on BACnet or Modbus — common among building systems but a foreign language to the IT personnel responsible for server management — to communicate with each other and with management systems.

Now there is an enormous amount of data being generated by data center systems, but no consistency in the protocols used to communicate it. This complicates the challenge of managing the data center as a single system rather than a collection of interdependent systems, let alone automating responses to changing conditions across systems. Data Center Infrastructure Management (DCIM) has enhanced visibility, which has proven extremely valuable, but as we move from visibility to control, the lack of a common language becomes more problematic.

Enter Redfish

In 2013, Emerson Network Power and other industry leaders initiated an effort to develop the next-generation specification for out-of-band server management. By that time, the limitations of IPMI were so well recognized that the effort quickly gained widespread support across the industry, with Intel, Dell and HP all collaborating to define and develop the new specification. The original development focused on addressing the limitations of IPMI in the areas of security, functionality and scalability. Redfish was eventually transferred to the independent Distributed Management Task Force (DMTF), which released DMTF Redfish in August of 2015.

By all accounts, Redfish has realized the goals of its developers. It addresses the limitations of IPMI through a purposeful representational state transfer (REST) and JavaScript Object Notation (JSON)-based design that is lightweight, easily maintainable and scalable. Redfish is immediately familiar to today's developers, who are used to working with Web APIs.
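The contrast with byte-code framing is stark: Redfish exposes each managed resource as a URL that answers an HTTP GET with JSON. The snippet below parses an abbreviated ComputerSystem payload of the kind a request to /redfish/v1/Systems/1 might return; the payload is a hand-written illustration modeled on the DMTF schema, not output from a real service.

```python
import json

# Abbreviated, illustrative ComputerSystem resource (consult the published
# DMTF Redfish schema for the authoritative property set).
sample = """
{
  "@odata.id": "/redfish/v1/Systems/1",
  "Id": "1",
  "Name": "WebServer-01",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"}
}
"""

system = json.loads(sample)
# Human-readable keys and values: no spec lookup needed to interpret them.
print(system["PowerState"])         # prints "On"
print(system["Status"]["Health"])   # prints "OK"
```

Because the payload is ordinary JSON over HTTP, any developer familiar with Web APIs can read, script and test against it with standard tooling.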
We are now seeing server products entering the market with Redfish support, and you can expect the availability of Redfish-enabled servers to ramp up throughout this year and next. The fact that the Open Compute Project has embraced Redfish should also accelerate adoption. That acceptance is gratifying for those involved in the conception of the specification; even more exciting, however, is the expanded role Redfish can play in the future of data center management. It is now becoming clear that Redfish's simplicity, versatility and functionality will enable it to extend beyond server management to other data center systems. It is poised to become the common data center language we have all been seeking.

Moving Toward a Common Language

The challenge for data center operators remains how to transition from tools using older protocols to Redfish. It will admittedly take time for Redfish-enabled servers to displace the current generation. The timeline for infrastructure systems, which can have a 10- to 15-year lifecycle, will be even longer. However, the transition can be managed with minimal disruption, and that may happen sooner than many expect.

In August 2016, at the Intel Developer Forum, Emerson Network Power, Lenovo and OSIsoft demonstrated an integrated single-rack data center system that used Redfish to simplify management and enable closed-loop control across devices. At the heart of this integrated system is a Connectivity Engine that converts IPMI, SNMP, Modbus, BACnet and other specifications to Redfish within the device. The Connectivity Engine is packaged as a software development kit (SDK) that delivers Redfish as a service. The SDK is integrated into management systems to enable them with Redfish APIs. Just as importantly, it translates Redfish commands and data into legacy protocols through a plugin interface. This plugin approach greatly simplifies application development and accelerates support for legacy and third-party protocols and devices.
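The translation idea behind such a plugin interface can be sketched in a few lines. Every name below is an illustrative stand-in, not the actual SDK's API: each plugin maps Redfish-style resource paths onto a legacy-protocol read, and a small dispatcher routes requests by path prefix so callers only ever see Redfish paths.

```python
class ModbusPlugin:
    """Illustrative stand-in: maps a Redfish path to a Modbus register read."""
    def read(self, path: str):
        registers = {"/redfish/v1/Chassis/rack1/Power": 4200}  # watts, stub data
        return registers[path]

class SnmpPlugin:
    """Illustrative stand-in: maps a Redfish path to an SNMP OID query."""
    def read(self, path: str):
        oids = {"/redfish/v1/Chassis/ups1/Status": "Enabled"}  # stub data
        return oids[path]

class ConnectivityEngine:
    """Toy dispatcher: presents one Redfish-style surface over many protocols."""
    def __init__(self):
        self.plugins = {}
    def register(self, prefix: str, plugin):
        self.plugins[prefix] = plugin
    def get(self, path: str):
        # Route on the resource prefix; the legacy protocol stays hidden.
        for prefix, plugin in self.plugins.items():
            if path.startswith(prefix):
                return plugin.read(path)
        raise KeyError(path)

engine = ConnectivityEngine()
engine.register("/redfish/v1/Chassis/rack1", ModbusPlugin())
engine.register("/redfish/v1/Chassis/ups1", SnmpPlugin())
print(engine.get("/redfish/v1/Chassis/rack1/Power"))  # prints 4200
print(engine.get("/redfish/v1/Chassis/ups1/Status"))  # prints Enabled
```

Adding support for another device or protocol then means writing one new plugin rather than touching every management application, which is the development-simplification the article attributes to the plugin approach.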
