Data Center News

Edge will drive change in the data center space in 2019: Vertiv

Vertiv experts anticipate self-sufficient, self-healing edge in service of IoT, 5G

The edge of the network continues to be the epicenter of innovation in the data center space as the calendar turns to 2019, with activity focusing on increased intelligence designed to simplify operations, enable remote management and service, and bridge a widening skills gap. This increasing sophistication of the edge is among the data center trends to watch in 2019 as identified by Vertiv experts from around the globe.

“Today’s edge plays a critical role in data center and network operation and in the delivery of important consumer services,” said Vertiv CEO Rob Johnson. “This is a dramatic and fundamental change to the way we think about computing and data management. It should come as no surprise that activity in the data center space in 2019 will be focused squarely on innovation at the edge.”

  1. Simplifying the Edge: A smarter, simpler, more self-sufficient edge of the network is converging with broader industry and consumer trends, including the Internet of Things (IoT) and the looming rollout of 5G networks, to drive powerful, low-latency computing closer to the end-user.

For many businesses, the edge has become the most mission critical part of their digital ecosystem. Intelligent infrastructure systems with machine learning capabilities working in tandem with cloud-based analytics are fundamentally changing the way we think about edge computing and edge services. The result will be a more robust, efficient edge of the network with enhanced visibility and self-healing capabilities requiring limited active management.

Sharing views on the edge trend, Sunil Khanna, president and managing director at Vertiv India, said, “Most industries in India are recognizing the limitations of supporting users and emerging technologies through centralized IT infrastructures and are pushing storage and computing closer to users and devices. That shift is becoming necessary because of the increased connectivity of devices and people and the huge volumes of data they generate and consume. We believe this will require profound changes in the compute and storage infrastructure to support the smart and connected future, particularly at the local level.”

  2. Workforce Revolution: A workforce aging into retirement and training programs lagging behind the data center and edge evolution are creating staffing challenges for data centers around the globe. This will trigger parallel actions in 2019. First, organizations will begin to change the way they hire data center personnel, moving away from traditional training programs toward more agile, job-specific instruction with an eye toward the edge. More training will happen in-house. And second, businesses will turn to intelligent systems and machine learning to simplify operations, preserve institutional knowledge, and enable more predictive and efficient service and maintenance.
  3. Smarter, More Efficient UPS Systems: New battery alternatives will present opportunities for the broad adoption of UPS systems capable of more elegant interactions with the grid. In the short term, this will manifest in load management and peak shaving features (a simple sketch of the peak-shaving logic follows this list). Eventually, we will see organizations using some of the stored energy in their UPS systems to help the utility operate the electric grid. The static storage of all of that energy has long been seen as a revenue-generator waiting to happen. We are moving closer to mainstream applications.
  4. Pursuing Normalization: The data center, even in the age of modular and prefabricated design, remains far too complex to expect full-fledged standardization of equipment. However, there is interest on two fronts: standardization of equipment components and normalization across data center builds. The latter is manifesting in the use of consistent architectures and equipment types, with allowance for regional differences, to keep systems simple and costs down. In both cases, the goal is to reduce equipment costs, shorten delivery and deployment timelines, and simplify service and maintenance.

  5. High-Power Processors and Advanced Cooling: As processor utilization rates increase to run advanced applications such as facial recognition or advanced data analytics, high-power processors create a need for innovative approaches to thermal management. Direct liquid cooling at the chip – meaning the processor or other components are partially or fully immersed in a liquid for heat dissipation – is becoming a viable solution. Although most commonly used in high-performance computing configurations, the benefits – including better server performance, improved efficiency at high densities, and reduced cooling costs – justify additional consideration. Another area of innovation in thermal management is extreme water-free cooling, an increasingly popular alternative to traditional chilled-water systems.
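As a rough illustration of the peak-shaving behavior described in trend 3 above, the sketch below shows the basic decision logic: when facility demand exceeds a contracted peak threshold, the UPS battery discharges to cover the excess, subject to a reserve floor that protects its backup duty. This is a minimal conceptual sketch; the `UpsBattery` class, thresholds, and parameter names are hypothetical and do not represent any Vertiv product or API.

```python
from dataclasses import dataclass

@dataclass
class UpsBattery:
    """Hypothetical model of a UPS battery made available for peak shaving."""
    capacity_kwh: float      # total usable energy
    stored_kwh: float        # energy currently stored
    reserve_fraction: float  # share of capacity held back for backup duty

    def available_for_shaving(self) -> float:
        """Energy that may be discharged without touching the backup reserve."""
        reserve_kwh = self.capacity_kwh * self.reserve_fraction
        return max(0.0, self.stored_kwh - reserve_kwh)


def peak_shave(demand_kw: float, peak_limit_kw: float,
               battery: UpsBattery, interval_h: float = 0.25) -> float:
    """Return the grid draw (kW) for one interval after peak shaving.

    If demand exceeds the contracted peak limit, discharge the battery to
    cover as much of the excess as the reserve policy allows.
    """
    excess_kw = max(0.0, demand_kw - peak_limit_kw)
    if excess_kw == 0.0:
        return demand_kw  # below the peak limit; no battery support needed

    # Energy needed to cover the excess for this interval, capped by the
    # energy the battery can spare.
    needed_kwh = excess_kw * interval_h
    dischargeable_kwh = min(needed_kwh, battery.available_for_shaving())
    battery.stored_kwh -= dischargeable_kwh

    # The grid supplies whatever the battery could not cover.
    shaved_kw = dischargeable_kwh / interval_h
    return demand_kw - shaved_kw


# Example: a 500 kW site with a 450 kW peak target and a 200 kWh UPS battery.
battery = UpsBattery(capacity_kwh=200.0, stored_kwh=180.0, reserve_fraction=0.5)
print(peak_shave(demand_kw=500.0, peak_limit_kw=450.0, battery=battery))  # 450.0
```

In practice, such decisions also depend on utility tariffs, battery chemistry and cycle life, and warranty terms; the sketch only captures the threshold logic.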
