It’s no secret in the submarine cable network industry that Data Center Interconnect (DCI) is driving many new cable builds in numerous regions around the world. DCI is also responsible for increased capacity being turned up on existing cables owned by Internet Content Providers, as they continue expanding Global Content Network (GCN) connectivity between their data center assets. Traditional wholesale submarine cable operators are also providing user-to-content (access) and content-to-content (DCI) connectivity. Submarine cable operators are key contributors to humanity’s largest construction project by distance – the global internet, which interconnects us and puts the knowledge of mankind in the palms of our hands.

Land. Sea. Cloud. Networks unite.

Submarine DCI networks carry much the same kinds of data as their terrestrial counterparts: database synchronization, database backup and restore, dynamic load sharing, content caching optimization, access to content, and other less publicized internal use-cases. As consumers increasingly hunger for video-centric content and businesses increasingly adopt cloud-based services, DCI traffic, overland and undersea, will continue to proliferate.

It’ll grow over the air as well, via a number of new satellite constellations being launched and our increasing affinity for accessing content over wireless mobile networks. This makes it imperative that we continue to unite land, sea, and cloud networks for seamless access to content.

As submarine networks evolve, so do data centers

The traditional strategy for building data centers is to balance numerous interrelated factors: relative proximity to end-users (humans), low real-estate costs, access to cheap and abundant energy for power and cooling, historical and ongoing political stability, distance from regions prone to natural disasters (floods, earthquakes), government incentives, and more. A common theme was to make data centers big – really big – allowing data center operators to achieve optimized economies of scale.

Traditional cloud-based services are provided by connecting to, and between, mammoth-sized centralized data centers, which are often far from end-users, incurring a latency that is simply too high for many newer use-cases. You’ve already heard of many of these use-cases: IoT, AR/VR, self-driving vehicles, telesurgery, immersive learning, industrial automation, gaming, and more. As the latency requirements of these new and exciting use-cases become increasingly stringent, the locations of the data centers that enable them will continue to evolve. This is precisely why edge data centers are on the rise, globally.

The speed of light is limited

The speed of light isn’t infinite, although in most cases it seems to be, so we don’t notice its limitations. For example, when you turn on a light, it appears to illuminate the room instantly, because light is fast and the distance from the lightbulb to the walls is very short. In an extreme case, if you wanted to travel to the nearest star, Proxima Centauri, and found a way that broke the laws of physics (kudos!) to travel at the speed of light, it would still take you over four years to reach your destination.

Although data centers are obviously much closer to us, it still takes time for requested content to be received, typically tens to hundreds of milliseconds. Light travels at approximately 300,000 kilometers per second (km/s) in a vacuum, but only about two-thirds as fast in optical fiber because of the fiber’s index of refraction. For glass, the index of refraction is about 1.5, so the speed of light drops to around 200,000 km/s – still fast, but noticeably slower.
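As a rough illustration of that arithmetic (a minimal sketch; the 1.5 refractive index is simply the approximate value for glass cited above):

```python
# Approximate speed of light in optical fiber, derived from the index of refraction.
C_VACUUM_KM_S = 300_000       # speed of light in a vacuum, km/s (approximate)
INDEX_OF_REFRACTION = 1.5     # approximate value for a glass fiber core

speed_in_fiber_km_s = C_VACUUM_KM_S / INDEX_OF_REFRACTION
print(f"Speed of light in fiber: ~{speed_in_fiber_km_s:,.0f} km/s")
# Speed of light in fiber: ~200,000 km/s
```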

Low latency as the enabler of new use-cases

For ease of calculation, content travelling in a fiber-optic network takes about 5 microseconds (µs) to cover a kilometer (km). So, if your content is in a data center 300km away, it takes about 1500µs, or about 1.5 milliseconds (ms), to reach you. If your data center is 1000km away, it’ll take about 5ms. If your content is hosted in a data center on a distant continent, meaning it has to traverse a submarine cable, it takes even longer because of the vast distances involved. Over a 6000km transatlantic cable or a 10000km transpacific cable, your content would take approximately 30ms or 50ms, respectively, in one direction.
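If you’d like to plug in your own distances, here is a minimal sketch of the same back-of-the-envelope math, using the ~5 µs per km figure above (the labels and cable lengths are illustrative examples, not measurements of any specific system):

```python
# One-way fiber propagation delay, using ~5 microseconds per kilometer.
US_PER_KM = 5  # approximate one-way delay per km of fiber, in microseconds

def one_way_delay_ms(distance_km: float) -> float:
    """Return the approximate one-way fiber propagation delay in milliseconds."""
    return distance_km * US_PER_KM / 1000  # microseconds -> milliseconds

for label, km in [("Metro (300 km)", 300),
                  ("Regional (1000 km)", 1000),
                  ("Transatlantic (6000 km)", 6000),
                  ("Transpacific (10000 km)", 10000)]:
    print(f"{label}: ~{one_way_delay_ms(km):.1f} ms one way")
# Prints ~1.5, ~5.0, ~30.0, and ~50.0 ms respectively.
```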

Of course, these rough estimates only account for the approximate speed of light in a fiber-optic core over example distances. We also have to account for optical-electrical-optical (OEO) transitions and processing times across various network elements, such as optical switches, packet switches, routers, firewalls, servers, end-user devices, and so on, which together dictate the overall latency experienced by end-users, both humans and machines. They also dictate which applications and use-cases are supported and, most importantly, where data centers should be physically located.

Getting “edgy” about data center locations

Many of the more interesting use-cases mentioned above require very low latency, often guaranteed end-to-end. This means a data center’s storage and compute resources must be physically located much closer to the network edge and to the end-users where content is actually created and consumed by humans and machines (IoT).

This also means far more data centers must be designed and deployed than the relatively small number of large centralized data centers. Beyond lower latency, there are other reasons to situate data centers towards the network edge, such as regulatory compliance governing where local user data is stored and lower long-haul network transport costs.

How will Edge Cloud affect submarine networks?

Will the increased popularity and deployment of edge data centers mean the end of centralized data centers and the submarine cables that interconnect them? Will submarine DCI networks be replaced by terrestrial DCI networks? Will they work in harmony? Are they complementary? People want (and need) to know!

Watch our webinar for insights into the relationships between edge data centers, centralized data centers, and the terrestrial and submarine networks that interconnect them.
