Electrical and hybrid data center

Razieh Heidarian
3 min read · Feb 25, 2021

Data center networks have been evolving rapidly in recent years as the nature of their workloads has changed. Unlike traditional enterprise client-server workloads, the modern data center's workloads are dominated by server-to-server traffic. Servers are the smallest physical unit of the data center and hold CPU, memory, and storage. A full data center is built by interconnecting hundreds of thousands of servers and storage systems in a complex, high-speed topology that begins inside the rack. This means that the total amount of computing and memory capacity depends on how those resources are proportioned across the individual servers. Far more data travels east-west than north-south in the data center: most traffic flows between servers and storage inside the data center rather than as inbound and outbound traffic. To increase the number of hosts, we need to increase the number of switching stages. This paper describes how we can accommodate a large number of hosts while supporting about 13 Pb/s of bisection bandwidth.
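To make the 13 Pb/s figure concrete, here is a rough back-of-the-envelope sketch (the link rates and the non-blocking-fabric formula are my own assumptions, not taken from the paper): for a non-blocking fabric, full bisection bandwidth is roughly half the aggregate host bandwidth, so a bisection target tells us roughly how many hosts the fabric can carry at a given link rate.

```python
# Back-of-the-envelope sketch: how many hosts does a 13 Pb/s bisection
# bandwidth target imply? (Link rates below are assumed, not from the paper.)

def hosts_for_bisection(bisection_bps: float, link_rate_bps: float) -> int:
    """For a non-blocking fabric, bisection ~= num_hosts * link_rate / 2."""
    return int(2 * bisection_bps / link_rate_bps)

target_bisection = 13e15  # 13 Pb/s, the figure quoted above

for rate_bps, label in [(100e9, "100G"), (400e9, "400G")]:
    print(f"{label} host links -> ~{hosts_for_bisection(target_bisection, rate_bps):,} hosts")
# 100G host links -> ~260,000 hosts
# 400G host links -> ~65,000 hosts
```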

Traditional data center

A Clos network is a multistage circuit-switching network that represents a theoretical idealization of practical multistage switching systems. The Clos topology is used to build leaf-spine networks, in which leaf switches (the data center access switches, or ToR switches) are interconnected through spine switches. It is designed so that every leaf switch is attached directly to every spine switch.
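As a minimal illustration of that wiring rule (the switch counts here are hypothetical), the sketch below simply enumerates the full mesh of leaf-to-spine links:

```python
# Minimal sketch of the leaf-spine wiring rule: every leaf switch gets one
# direct link to every spine switch. Sizes are arbitrary examples.
from itertools import product

def leaf_spine_links(num_leaves: int, num_spines: int):
    """Return the full-mesh link list between the leaf and spine layers."""
    return [(f"leaf{leaf}", f"spine{spine}")
            for leaf, spine in product(range(num_leaves), range(num_spines))]

links = leaf_spine_links(num_leaves=4, num_spines=2)
print(len(links), "links")  # 8 links: each of 4 leaves uplinks to both spines
print(links[:3])            # [('leaf0', 'spine0'), ('leaf0', 'spine1'), ('leaf1', 'spine0')]
```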

The spine-leaf topology introduced by Cisco, as presented in the figure, is a folded Clos full-mesh topology in which each leaf switch is connected to every spine switch. The leaf switches are the switches that are directly attached to the servers. The one special case is that traffic has to pass through only one switch when the source and destination servers are attached to the same leaf switch. Thus, this topology is a specific case of the Clos topology in which both the ingress and egress stages are the leaf switches and the spine switches act as the middle stage.
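That path rule can be stated in a couple of lines of code (this tiny helper is mine, purely for illustration): traffic between servers on the same leaf crosses one switch, while traffic between different leaves goes leaf-spine-leaf.

```python
# Illustration of path lengths in a two-tier leaf-spine fabric
# (hypothetical helper, not from the article).

def switches_on_path(src_leaf: int, dst_leaf: int) -> int:
    """Switches traversed: 1 on the same leaf, otherwise leaf + spine + leaf."""
    return 1 if src_leaf == dst_leaf else 3

print(switches_on_path(0, 0))  # 1  (same leaf switch)
print(switches_on_path(0, 3))  # 3  (leaf -> spine -> leaf)
```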

Modern data center

As traffic keeps increasing, optical technology sustains enormous bandwidth, far larger than global IP traffic. Optical switches play a crucial role in the data center because they are transparent to the bit rate of the traffic they carry, which is one big difference between a hybrid data center and an electrical one. In addition, the power consumption per bit (W/bit) of a hybrid data center is much lower than that of an electrical system. In this way, a hybrid data center gives us high bandwidth with lower power consumption. Introducing optical switches into hyper-scale data centers is expected to break the power and bandwidth barriers imposed by electrical technologies in the future. Optical networks provide huge bandwidth and reduce cabling complexity and power consumption compared to electrical networks.

A significant advantage of optical networks is their ability to dynamically reconfigure optical routes between any electrical switches joined through an optical switch. We can use this ability to address one major challenge in data centers: the VM (virtual machine) placement problem. Because optical paths between edge switches (the top-of-rack switches to which the server machines are connected) can be created on demand, there is more flexibility in placing the VMs of a request than in an electrical data center network. On the other hand, building an all-optical data center that provides simultaneous connectivity between every pair of edge switches is costly and impractical for large data centers hosting tens of thousands of servers. This, along with the fact that the electrical network is better suited to short and bursty traffic, makes a hybrid optical-electrical network architecture the best option for future data centers. A hybrid data center provides the flexibility to connect edge switches with heavy connection demands dynamically using the optical network, while handling bursty traffic between edge switches using the electrical network.
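The steering idea in that last sentence can be sketched as a simple classifier (the thresholds, names, and elephant-flow heuristic below are my own assumptions, not the paper's mechanism): send large, long-lived flows over reconfigurable optical circuits and keep short, bursty traffic on the electrical packet network.

```python
# Toy sketch of hybrid traffic steering: elephant flows go to the optical
# circuit fabric when a circuit is available, everything else stays electrical.
# Thresholds and field names are assumed for illustration only.
from dataclasses import dataclass

@dataclass
class Flow:
    src_rack: str
    dst_rack: str
    bytes_sent: int     # volume observed so far
    duration_s: float   # how long the flow has been active

ELEPHANT_BYTES = 100 * 1024 * 1024   # assumed threshold: 100 MB
ELEPHANT_SECONDS = 1.0               # assumed threshold: 1 s

def choose_fabric(flow: Flow, free_optical_ports: int) -> str:
    """Return 'optical' for big, stable flows when a circuit can be set up."""
    is_elephant = (flow.bytes_sent >= ELEPHANT_BYTES
                   and flow.duration_s >= ELEPHANT_SECONDS)
    if is_elephant and free_optical_ports > 0:
        return "optical"      # reconfigure a circuit between the two racks
    return "electrical"       # short or bursty traffic stays on packet switches

print(choose_fabric(Flow("rack1", "rack7", 5 * 1024**3, 12.0), free_optical_ports=4))  # optical
print(choose_fabric(Flow("rack1", "rack2", 64 * 1024, 0.02), free_optical_ports=4))    # electrical
```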
A key attribute of such optical circuit switches is the available port count: if we can use large-port-count optical switches, flat, single-stage optical switching becomes possible, eliminating the complicated traffic congestion control needed in multi-stage (leaf-spine) networks. Indeed, today's mega data centers often have latency levels of tens of microseconds. The paper analyzes how optical-circuit/electrical-packet hybrid switching will affect future data centers and high-performance computing (HPC) networking. It also discusses how we can build optimal large-port-count electrical data centers and how they connect with optical data centers.
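A quick way to see why port count is the limiting attribute is the following sanity check (the rack and uplink counts are invented for illustration): a single flat optical circuit switch can replace the multi-stage fabric only if it has enough ports to terminate every ToR uplink directly.

```python
# Sanity check for flat, single-stage optical switching: every ToR uplink must
# terminate on the one optical switch. Sizes below are assumed examples.

def flat_single_stage_ok(num_racks: int, uplinks_per_rack: int, optical_ports: int) -> bool:
    """True if one optical circuit switch can terminate all ToR uplinks."""
    return num_racks * uplinks_per_rack <= optical_ports

# 300 racks with 2 optical uplinks each need a >= 600-port optical switch.
print(flat_single_stage_ok(num_racks=300, uplinks_per_rack=2, optical_ports=512))   # False
print(flat_single_stage_ok(num_racks=300, uplinks_per_rack=2, optical_ports=1024))  # True
```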
