Fiber’s Uniqueness Inside the Data Center
Delivering near-unlimited speeds with easy upgrades, low power draw and minimal environmental impact, fiber is the only technology option for reliable, resilient communications in today’s digital economy. The needs and requirements for fiber within the data center have steadily evolved over the past decade as the industry has moved from baseline volume server warehousing, to high-performance hyperscaling, to today’s monster AI facilities focused on extreme compute density and massive power requirements. At the same time, the need for ultra-low-latency, higher-capacity data center interconnections is driving more fiber deployment and the introduction of hollow core fiber.

AI data centers with densely packed GPUs use anywhere from 10-to-20-fold more fiber than traditional server configurations. Source: Adobe
For high-end operations, such as GPU providers brokering time to AI developers on demand, fiber is essential. Aethir operates over 400,000 GPU containers in over 200 locations across 93 countries, generating over $150 million in revenue per year.
“Fiber is the second-most important component in the overall service, the first being the GPUs themselves,” said Kyle Okamoto, Chief Technology Officer, Aethir. “We have a decentralized physical infrastructure network, so we get all our GPUs contributed to us by cloud hosts. The community monitors and regulates and polices and rewards that network of GPU providers, so it’s completely hands-off. We’re facilitating a demand-driven, market-driven ecosystem where we reward high-performance, enterprise-grade, SLA-backed service to enterprise clients running AI applications.”
Okamoto does extensive due diligence to make sure that both the data center network and the GPUs are able to operate at high performance, looking for the newest facilities. It’s a long way from the early days of data center construction when fiber was a novelty rather than a necessity.
“Before the cloud, banks were building the biggest data centers and generally they were three or four halls of 400 racks,” said Keith Sullivan, Director of Strategic Innovation, AFL. “A thousand racks was a very, very large financial data center. Networking would be 50-50 copper and fiber. Cloud came along; they had copper and multi-mode fiber, very much along the lines of the enterprise data centers. As it grew, the rest of the copper disappeared, then multi-mode disappeared. It’s all single mode fiber today inside of all the hyperscalers. All of the workloads and all the traffic is all on single mode fiber and has been for a long while.”
Today’s data centers fall into three categories: traditional cloud facilities, AI data centers purpose-built for that workload, and hybrid facilities, meaning traditional sites that incorporate a GPU cluster into the established infrastructure. The growth of fiber within the walls of the data center building to support AI is massive.
“Inside the walls, it’s anything between a 10-to-20-fold increase in fiber,” said Sullivan. “Quite simply it [starts at] the server rack. Once you put the server racks in, you have to put network racks in because the servers need to communicate. Outside of the network racks, you need to put infrastructure or fiber cable racks.”
Connecting everything requires fiber, but the push for more GPU compute density means more communications density to connect servers, racks, and rows. A traditional data center configuration might have 30 racks in a row with a network rack and an infrastructure rack in the middle.
“For that middle-of-row network rack, you might run anything between 288 and 576 fibers to that middle,” said Sullivan. “Then you would run between 30 to 40 fibers out to each server rack. Coming out of an NVIDIA server rack, you have just over 1,000 fibers coming out. You would only have eight to 12 racks in a row because of the power density. Let’s go in the middle and say 10 racks. That means you have over 10,000 fibers coming into that middle-of-row network rack. For one data center building, you have 80 to 100 rows; 80 rows times 10,000 fibers to each row, it builds up very quickly. That’s why AI is driving fiber inside the data center.”
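Taken at face value, those figures compound quickly. Below is a minimal back-of-envelope sketch in Python, using the per-rack and per-row numbers from the quote as assumptions rather than design rules:

```python
# Back-of-envelope fiber count for an AI row and building, using the figures
# Sullivan cites. The constants below are illustrative assumptions taken from
# the quote, not an engineering rule.

FIBERS_PER_GPU_RACK = 1_000   # "just over 1,000 fibers" out of an NVIDIA server rack
RACKS_PER_ROW = 10            # 8-12 racks per row, limited by power density
ROWS_PER_BUILDING = 80        # "80 to 100 rows" per building (low end)

fibers_per_row = FIBERS_PER_GPU_RACK * RACKS_PER_ROW      # ~10,000 into the middle-of-row rack
fibers_per_building = fibers_per_row * ROWS_PER_BUILDING  # ~800,000 inside one building

print(f"Fibers into each middle-of-row network rack: {fibers_per_row:,}")
print(f"Fibers across one building:                  {fibers_per_building:,}")
```

Even at the low end of 80 rows, that works out to roughly 800,000 fibers inside a single building, before any campus or interconnect cabling is counted.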

Data center fiber continues to increase due to the phasing out of copper and the explosive growth of AI. Source: Adobe
Optical transmission within the walls of the data center is different from standard telco carrier transmission, using DR4 transceivers operating at 1310 nanometers instead of 1550. Inside the data center, while fiber attenuation is higher at 1310 nanometers, chromatic dispersion is essentially zero. For the short distances within the data center, the 1310 nanometer optics don’t require the layer of digital signal processing that longer telco links need, making transmission much cheaper.
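As a rough illustration of that trade-off, the sketch below compares the two windows over standard G.652 single-mode fiber; the attenuation and dispersion figures are typical datasheet values used as assumptions, not specifications for any particular cable:

```python
# Rough comparison of the 1310 nm and 1550 nm windows over standard G.652
# single-mode fiber. Values are typical datasheet figures (assumptions).

WINDOWS = {
    1310: {"attenuation_db_per_km": 0.35, "dispersion_ps_nm_km": 0.0},   # near the zero-dispersion wavelength
    1550: {"attenuation_db_per_km": 0.20, "dispersion_ps_nm_km": 17.0},  # lowest loss, but dispersive
}

link_km = 0.5  # a typical intra-data-center DR4-class span

for wavelength, props in WINDOWS.items():
    loss = props["attenuation_db_per_km"] * link_km
    print(f"{wavelength} nm: ~{loss:.2f} dB fiber loss over {link_km} km, "
          f"dispersion {props['dispersion_ps_nm_km']} ps/nm/km")
```

Over a sub-kilometer span the extra loss at 1310 nanometers is negligible, while the near-zero dispersion is what spares the optics the dispersion-handling signal processing that long 1550 nanometer links carry.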
“The transceiver count inside of a data center is huge,” stated Sullivan. “You’ve got thousands and thousands of transceivers, so you need to have them as cheap as possible.”
The fiber strand count continues to scale when you move to a multi-building campus, where connectivity is needed to link everything together. “What we’re seeing is multiple 6912-fiber cables going between buildings on a campus in an AI environment, as compared to 3456- or 6912-fiber-count cables between traditional cloud buildings,” said Sullivan.
Moving within and between campuses adds another layer of complexity depending on the size of the overall facility: campus-level connectivity is defined as up to 4 kilometers, metro-area connectivity sits in the 15-to-20-kilometer range, and regional connectivity reaches up to 80 kilometers, a limit set by latency. Connecting buildings and campuses typically starts by leveraging existing dark or leasable service provider fiber before moving to the more expensive project of building dedicated fiber routes.
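To make those tiers concrete, here is a hypothetical helper that maps link distance to the categories above, with a rough one-way latency estimate of about 5 microseconds per kilometer in solid-core glass fiber; the function and its thresholds are illustrative only:

```python
# Illustrative mapping of link distance to the connectivity tiers described
# above, plus a rough one-way latency estimate in standard glass fiber.
# The tier boundaries follow the article; the helper itself is a sketch.

def classify_link(distance_km: float) -> str:
    latency_us = distance_km * 5.0  # ~5 microseconds per km, one-way, in solid-core single-mode fiber
    if distance_km <= 4:
        tier = "campus"
    elif distance_km <= 20:
        tier = "metro"
    elif distance_km <= 80:
        tier = "regional"
    else:
        tier = "beyond the typical latency budget"
    return f"{distance_km} km -> {tier} (~{latency_us:.0f} us one-way)"

for d in (2, 18, 75, 120):
    print(classify_link(d))
```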
Hollow core fiber comes into play when longer distances are involved and latency becomes a significant operating factor in the network, a medium AFL has worked with for one of its higher-end clients. “Hollow core is multiple glass tubes that create a space in the middle that has to be maintained in a vacuum,” Sullivan said. “The speed of light in glass is 200,000 kilometers per second and the speed of light in a vacuum is 300,000 kilometers per second. The light travels 50% faster.”
For applications that have a latency-constrained link, such as synchronization of two data centers, the additional 50 percent substantially increases flexibility. “If the distance you can successfully synchronize two data centers is 80 kilometers on a single mode fiber, and you replace that glass fiber with a hollow core fiber, you can now go up to around 120 kilometers.”
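The reach math behind that example is simple to sketch, using the round-number speeds quoted above as assumptions for illustration rather than a deployment calculation:

```python
# Sketch of the reach math behind Sullivan's example: when a synchronization
# link is latency-limited rather than loss-limited, reach scales with signal
# speed. Speeds are the round numbers from the quote.

C_GLASS_KM_S = 200_000    # speed of light in solid-core glass fiber
C_VACUUM_KM_S = 300_000   # speed of light in the hollow (vacuum) core

latency_limited_reach_km = 80  # solid-core distance for synchronous operation
speedup = C_VACUUM_KM_S / C_GLASS_KM_S                     # 1.5x faster propagation
hollow_core_reach_km = latency_limited_reach_km * speedup  # ~120 km at the same latency

print(f"Speedup: {speedup:.1f}x, hollow-core reach: ~{hollow_core_reach_km:.0f} km")
```

An 80-kilometer latency budget in glass buys roughly 120 kilometers in a hollow core, which is where the site-selection flexibility comes from.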
The additional operational distance hollow core provides substantially expands the options for regional data center site selection and networking, giving service providers more latitude in finding suitable regional locations with sufficient power. Hollow core is less likely to show up within the data center in the near future because it is currently more demanding to splice and terminate with connectors, and its bend radius is quite large, said Sullivan.
