OIDA Data Center Workshop Blog

After arriving at LAX airport, I went straight to the OIDA workshop, hopes high that the industry would enlighten my research with its views on optical interconnects. I have been exploring this field for over ten years, and I was really hoping that things would finally come together, that an optical switch would have been deployed, and that optical packets would fly from one server to another. I missed the first panel in the morning, but the afternoon panels echoed that cost remains the primary metric in the industry. My hopes of seeing some of the research demonstrations deployed suddenly darkened. Worse, customers have no interest in knowing what constitutes their large servers or data centers (DC), copper or optics, as long as it meets their requirements and their aggressive budgets.

Panel 2 (Ovum, Finisar, APIC):
By the time I entered the workshop room, I had already missed the presentation by Ovum. Steffen Koehler from Finisar was giving a nice overview of the interconnects that rely on optical technology. While there is no optical switch, a lot of active optical cables are certainly in use. Intra-data center traffic has increased (70% of Internet traffic stays within the DC), shifting the flow of data from a North-South direction to an East-West one, where each server is connected to other servers within the DC. This creates a large mesh network that relies on the small diameter of fibers to maintain open airflow, avoiding the blocking walls that copper-based cables would form.

Koehler presented interesting information about the lengths of the fibers that make up DCs nowadays. I was surprised to learn that as much as 88% of the fiber links in DCs are less than 100 meters. However, the active optical cables used are power hungry, and while they may enable a reach of 100 meters or even beyond, they are often deployed as 3-meter links. The other theme Koehler brought up is the proliferation of standards and how this is actually good for applications, even though it is causing headaches within the industry. This was reassuring in some ways. As a researcher, I also find it very difficult to see through the technological comparisons and claimed advantages made in IEEE working group presentations. After some headaches of my own, I would often come to realize that the push for one technology is not necessarily driven purely by its technological advantages. In fact, Koehler commented that key decisions are not made based on metrics. There are a lot of good ideas out there, but often a few key players drive the field.

He ended his presentation by mentioning that we should also aim for the right level of integration. Too much integration may not make sense in terms of cost and reliability, and hybrid integration remains a valid solution for the industry. That was really good to hear, because I think that while some ideas are really interesting to explore in an academic setting, they just don't make sense in practice. In fact, I realize that for the industry the driving motivation for Silicon Photonics is not to integrate photonics with the transistors but solely to make use of the low-cost fabrication process. But then I wonder how this is really going to go smoothly if IC fabrication is a 22 nm process while Silicon Photonics does not scale down to that level.

Madeleine Glick from APIC followed with a presentation on system-level considerations. She made an important point: the optical physical layer cannot be considered in isolation from the other layers. Understanding the application requirements has become essential in defining the physical layer. For example, how much faster will an application run on an optically interconnected rack of servers compared to an electrically interconnected rack? The answer is not obvious and is very application dependent. Light in fiber or polymer is not that much faster than electrons travelling in copper. Of course, optics enables a much greater bandwidth-length product, but can the rest of the system handle 100 Gb/s of data per channel? Madeleine also presented the roadmap for optical switch port count (1000 by 1000) by 2022, as well as for optical switching speed (100 ps). This should motivate research and development in optical switches. That made my day!
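
To make her point about raw signal speed concrete, here is a minimal back-of-envelope sketch in Python. All the numbers are my own illustrative assumptions (a 100 m link, a refractive index of 1.5 for glass, a velocity factor of roughly 0.7c for copper), not figures from the talk:

    # Back-of-envelope latency comparison (illustrative numbers, not from the talk).
    C = 3e8  # speed of light in vacuum, m/s

    def propagation_delay_ns(length_m, velocity_factor):
        """Time for the signal to traverse the link, in nanoseconds."""
        return length_m / (C * velocity_factor) * 1e9

    def serialization_delay_ns(packet_bytes, rate_gbps):
        """Time to clock the packet onto the wire: bits / (Gb/s) gives ns."""
        return packet_bytes * 8 / rate_gbps

    LINK_M = 100  # an intra-DC link at the edge of the 88% bucket
    print(f"fiber  (c/1.5): {propagation_delay_ns(LINK_M, 1 / 1.5):.0f} ns over {LINK_M} m")
    print(f"copper (0.7c) : {propagation_delay_ns(LINK_M, 0.7):.0f} ns over {LINK_M} m")
    print(f"1500 B packet at  10 Gb/s: {serialization_delay_ns(1500, 10):.0f} ns")
    print(f"1500 B packet at 100 Gb/s: {serialization_delay_ns(1500, 100):.0f} ns")

Propagation over 100 m comes out slightly slower in fiber (about 500 ns) than over copper (about 476 ns), while serializing a 1500-byte packet takes 1200 ns at 10 Gb/s but only 120 ns at 100 Gb/s. The win from optics is the bandwidth-length product, not raw signal speed, which is exactly the distinction Glick was drawing.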

Panel 3 (Dell, PhoxTroT, IBM):
In the last panel of the workshop, Brad Booth from Dell discussed important factors that affect how optics is used in the DC. Power dissipation is important to Dell; interestingly, their containers are not cooled. Servers sitting on top of buildings in the state of Arizona have a normal operating temperature of 85°F! The other point mentioned is the relatively large footprint of transceivers: on a server motherboard, an SFP+ takes as much space as two RJ45 jacks. Booth stressed that this larger footprint is an obstacle to the deployment of optics in the DC.

Casimer DeCusatis from IBM ended the workshop presentations by reinforcing the fact that cost and volume are coupled to each other. For optics to be more widely deployed inside the DC, cost parity must be achieved. The open discussion following the presentation steered toward the different metrics. While the metric of latency remains important, Booth and DeCusatis agreed that it is application dependent. In fact, most customers do not care about latency for their private DC. The metric of energy per bit generated an interesting point. Booth said that Dell does not look at this metric, as it is always ambiguous what is included in it. However, DeCusatis pointed out that we need to balance the metrics. In other words, we cannot pick 3 out of 4 metrics and totally ignore the fourth one.
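
Booth's objection about ambiguity is easy to illustrate with a toy calculation of my own (the wattages below are assumptions for the sake of the example, not figures from the panel): energy per bit is just power divided by line rate, so the number swings widely depending on which components you choose to count.

    # Energy per bit is power / bit rate; the answer depends entirely on scope.
    # All wattages are illustrative assumptions, not figures from the panel.
    RATE_GBPS = 100  # line rate of the link

    components_w = {
        "laser + modulator": 0.5,
        "driver + TIA":      0.7,
        "SerDes":            1.3,
        "share of cooling":  1.0,
    }

    included_w = 0.0
    for name, watts in components_w.items():
        included_w += watts
        pj_per_bit = included_w / RATE_GBPS * 1000  # W/(Gb/s) = nJ/b -> pJ/b
        print(f"counting up through {name!r}: {pj_per_bit:.0f} pJ/bit")

Depending on where you draw the boundary, the very same 100 Gb/s link can legitimately be quoted anywhere from 5 to 35 pJ/bit, which makes vendor-to-vendor comparisons of the metric nearly meaningless.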

The workshop was overall interesting for getting the industry perspective. But as a researcher, these types of discussions are somewhat depressing, because the industry cannot afford to think outside the box while this is exactly what I should be doing as a researcher. That said, I should still know what that box looks like.

Blog post authored by Dr. Odile Liboiron-Ladouceur of McGill University.