12 Questions Answered about Submarine Data Center Interconnect
With private traffic from major Internet Content Providers (ICPs) like Facebook and Google now surpassing traditional voice and Internet traffic, data center and submarine network operators alike find themselves having to rethink how they manage traffic flows between data centers located on different continents. As the recent news that Microsoft and Facebook plan to build a subsea cable across the Atlantic demonstrates, the need to optimize intercontinental data center connections is more important than ever.
This was the topic of a recent live event hosted by TeleGeography Senior Analyst Paul Brodsky and Ciena’s submarine networks expert Brian Lavallée. The two covered a wide range of topics that included the growing influence of ICPs in submarine cable builds, the ongoing move to open network architectures, and how agile, mesh networking technologies can keep both submarine and data center operators ahead of the curve when managing their traffic flows.
You can access the entire session on-demand here.
At the end of the event, Brian and Paul engaged in a lively Q&A covering a variety of questions from the audience. I’ve collected some of the best questions and answers from the discussion below. They cover topics such as the separation of the Submarine Line Terminal Equipment (SLTE) from the submerged wet plant, the merging of ROADM architectures into submarine networks, and ongoing efforts to simplify end-to-end network management through new technologies like SDN.
Q: Doesn’t the same vendor supply both the Submarine Line Terminal Equipment (SLTE) and the Power Feed Equipment (PFE)? How does this impact the design?
Brian: Yes, in the past you’d purchase a turnkey system, so the SLTE, PFE, and wet plant, which includes the repeaters and branching units, were all bought from the same vendor. But five or six years ago, Ciena lit the first 40 Gbps coherent channel over a third-party wet plant, and the industry never looked back.
Everybody today is providing coherent-based modems, and the SLTE is often decoupled from the wet plant, allowing cable operators to buy best-in-breed technology. When I say “wet plant,” I’m including the PFE, because it’s used to power the submerged repeaters.
So, today you don’t have to buy SLTE from the same vendor who supplied your PFE. Submarine cable operators can and do choose SLTE independent of the PFE.
Q: It seems like the enabling technology for submarine DCI is the ROADM. How is this related to DWDM? Is there any reason to NOT select a ROADM-based solution?
Brian: There are actually two enabling technologies, both integral parts of our GeoMesh solution. One is the ROADM itself, which gives you the capability to switch paths in the optical domain. But the bigger enabler of all of this is coherent optics. Once the wavelength lands on the other side of the cable and hits the beach, there is typically sufficient link budget margin left to go hundreds of kilometers further inland without regeneration. This means you can connect data centers on different continents to each other using two rather than six 100G coherent transponders, which saves on cost, power, latency, and complexity.
The longer reach supported by the latest coherent optics allows you to go PoP-to-PoP or data center-to-data center, all the while remaining within the optical domain. People are asking about being able to go from Chicago optically all the way to Japan. I’m not sure we’re there today, but it’s something people are asking about because it offers a vastly simplified, lower cost network.
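To put the transponder arithmetic in this answer in concrete terms, here’s a minimal Python sketch. It’s purely illustrative: the segment count and per-termination transponder figures follow the example above, not any specific deployment.

```python
# Illustrative sketch: transponder counts for a data center-to-data center
# path built from three segments (terrestrial - submarine - terrestrial).

SEGMENTS = 3                      # terrestrial + submarine + terrestrial
TRANSPONDERS_PER_TERMINATION = 2  # one at each end of a terminated segment

def regenerated_path() -> int:
    """Each segment is terminated (OEO) at its boundaries: 3 x 2 = 6."""
    return SEGMENTS * TRANSPONDERS_PER_TERMINATION

def express_path() -> int:
    """Coherent reach lets the wavelength pass through both cable landing
    stations optically, so only the two end transponders remain."""
    return TRANSPONDERS_PER_TERMINATION

regen, express = regenerated_path(), express_path()
print(f"Regenerated: {regen} x 100G transponders")    # 6
print(f"Express:     {express} x 100G transponders")  # 2
```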
Q: Is a ROADM always required at the cable landing station (CLS)?
Brian: For strictly wavelength switching, the answer is no. If you want the ability to reroute or remotely switch different wavelengths arriving at a CLS onto different terrestrial backhaul routes, then you’d need a ROADM. ROADMs, or in particular the Wavelength Selective Switches (WSSs) sitting within a ROADM node, are still typically used in the cable landing station to perform optical power management into the wet plant. So no, you don’t always need a ROADM for wavelength switching, but it does provide increased flexibility.
Q: How does ROADM technology help reduce latency?
Brian: Because you remain in the optical domain as you traverse the CLS, you don’t have to go back into the electrical domain to do a bunch of processing and then take those rearranged bits and put them back into the optical domain. Because you’ve eliminated the optical-electrical-optical (OEO) stages in the cable landing station, you achieve lower latency, but at the expense of losing sub-lambda (e.g., DS3, STMx) switching and grooming granularity. With a ROADM, you cannot perform sub-wavelength switching or grooming because you’re switching the entire wavelength between optical ports.
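As a rough illustration of where the time goes, the sketch below models one-way latency as fiber propagation (about 4.9 microseconds per kilometer, a standard figure) plus a per-OEO processing penalty. The penalty value is an assumed placeholder, since real figures vary by equipment and framing; the point is simply that every OEO stage removed takes its delay off the total.

```python
# Rough one-way latency model: fiber propagation plus OEO processing.
US_PER_KM = 4.9        # fiber propagation delay, microseconds per km
OEO_PENALTY_US = 10.0  # assumed per-regeneration penalty (placeholder)

def one_way_latency_ms(km: float, oeo_stages: int) -> float:
    """One-way latency in milliseconds over `km` of fiber with
    `oeo_stages` optical-electrical-optical regenerations en route."""
    return (km * US_PER_KM + oeo_stages * OEO_PENALTY_US) / 1000

distance_km = 6_000  # e.g., a transatlantic route plus backhaul
print(one_way_latency_ms(distance_km, oeo_stages=2))  # regenerated at both CLSs
print(one_way_latency_ms(distance_km, oeo_stages=0))  # optical express path
```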
Q: Is the latency saved by removing the OEO stages the most important savings incurred with GeoMesh?
Brian: One of them. Depending on your application, lower latency is either a good thing or a vital thing (think high-speed financial trading). Some people will pay a lot of money for it, but to others it’s simply nice to have. Another big benefit is far less complexity, as you have far less equipment in your network and far fewer cards as well. That means you have fewer cards to manage, power, spare, and learn how to use. It’s also much easier and faster to turn up end-to-end services. Less complexity simply makes the whole network a lot easier to operate.
Q: What are the developments on the control plane / end-to-end orchestration / SDN front?
Brian: That has a lot to do with how you perceive your network. If you look at your end-to-end network as three separate segments - two terrestrial links connected by a submarine link - and you manage it that way, then SDN and control plane orchestration are probably going to be harder to do.
If you want to look at your network and virtualize the whole thing - whether your fiber plant is wet or dry - then you can use the same concepts being developed for pure terrestrial networks to virtualize the entire end-to-end connection. Ciena is working with some data center operators who want to use submarine cable connectivity to connect not only enterprises and cloud providers within the same data center, but also geographically dispersed data centers connected by submarine networks. That was a concept we first discussed three or four years ago as an opportunity enabled by SDN and multi-domain orchestration across wet and dry plants, regardless of the network vendors, and now it’s starting to become a serious discussion point with data center operators.
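The “virtualize the whole thing” idea can be sketched with a toy model: treat each domain, wet or dry, as an object behind a common provisioning interface so one orchestrator can stitch the end-to-end service. This is a generic illustration of the concept, not Blue Planet’s actual API; all names here are hypothetical.

```python
# Toy model: one orchestrator stitching a service across wet and dry domains.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    plant: str  # "dry" (terrestrial) or "wet" (submarine)

    def provision(self, service_id: str) -> str:
        # A real controller would call the domain's own NMS/SDN interface;
        # here we just report what would happen.
        return f"{self.name} ({self.plant}): segment up for {service_id}"

def provision_end_to_end(service_id: str, domains: list[Domain]) -> None:
    """Provision one service across every domain, wet or dry alike."""
    for domain in domains:
        print(domain.provision(service_id))

provision_end_to_end("DC-to-DC-100G-0001", [
    Domain("US terrestrial backhaul", "dry"),
    Domain("Transatlantic cable", "wet"),
    Domain("EU terrestrial backhaul", "dry"),
])
```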
Q: Will there need to be two NMSs? One for undersea and one for the third-party SLTE vendor?
Brian: Very good question, and a lot of that will depend on the business model you have with terrestrial backhaul partners on either end of the submarine cable. There are submarine cable operators that have the terrestrial backhaul under their control on one end but partner with someone on the other end. So depending on your business partnership, you may be able to integrate into their NMS to have a single network view across the whole end-to-end network.
We have a solution called Blue Planet, which offers multi-domain service orchestration and network management, and it’s that multi-domain part that allows a single NMS to manage the end-to-end network, whether you control all three segments - two terrestrial and one submarine - or just the submarine cable. So depending on your business model, you may be able to get away with one NMS, making it much simpler to own and operate the transoceanic network.
Q: How do subsea networks integrate (architecture-wise) with 5G backhaul wired and wireless infrastructure?
Brian: I see 5G as opening the floodgates of access to the global Internet, which will ultimately lead to more of the accessed content being transported over submarine cables. One thing to keep in mind is that nearly 100% of all intercontinental traffic is carried by submarine cables – nearly 100%! So if you’re accessing any piece of content not stored in a nearby data center, you could be jumping across an ocean, and that’s why you sometimes see increased delay before a video you download is served up; it’s probably sitting in a data center in another country or on another continent. 5G will give people the ability to download a lot more information, so I think it’s going to impact submarine networks as well, and you’re going to start to see a lot more traffic flowing between data centers under the sea. So I think 5G is a good thing for everybody in our industry.
Q: How many current submarine cable systems support 100G wavelength connectivity?
Brian: I’d say essentially all of them. The real question is how many wavelengths in total they can reliably support. One thing to keep in mind is that the actual usable optical spectrum of a submarine cable depends on its age. So, you’ll have a cable out there whose spectrum has, say, 20 nanometers of usable bandwidth, compared to something newer that has closer to 40 nanometers, similar to a terrestrial network. The unique personality of the submarine cable will ultimately dictate the total number of supported wavelengths, and thus its total information-carrying capacity, given the latest available technology. As SLTE technology advances, so does the total capacity of the submarine cable. This is why, over the past few years, fewer new cables have had to be deployed simply to increase capacity between continents; upgrading the SLTE on an existing cable often does the job.
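The spectrum-to-capacity relationship Brian describes is easy to work through. Near 1550 nm, one nanometer of optical spectrum is roughly 125 GHz; the sketch below assumes a 50 GHz channel grid and 100G per wave, both illustrative figures, to compare the 20 nm and 40 nm cases.

```python
# Back-of-the-envelope: usable spectrum -> wave count -> per-pair capacity.
C = 3.0e8            # speed of light, m/s
LAMBDA = 1550e-9     # center wavelength, m (C-band)
GRID_GHZ = 50.0      # assumed channel spacing
GBPS_PER_WAVE = 100  # assumed per-wavelength rate

def spectrum_nm_to_ghz(nm: float) -> float:
    """Convert optical bandwidth in nm to GHz near 1550 nm."""
    return C * (nm * 1e-9) / LAMBDA**2 / 1e9

for usable_nm in (20, 40):  # older vs. newer cable, per the answer above
    waves = int(spectrum_nm_to_ghz(usable_nm) // GRID_GHZ)
    tbps = waves * GBPS_PER_WAVE / 1000
    print(f"{usable_nm} nm -> {waves} waves -> {tbps:.1f} Tbps per fiber pair")
```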
Q: Do you see satellite links or terrestrial fiber lines increasing their pressure on submarine cables in the mid-term?
Paul: Satellite links, no, I don’t think so. The amount of bandwidth submarine cables can provide is orders of magnitude more than a satellite link. Which is not to say that satellites can’t address any market; that’s not true at all. Certainly, companies like O3b Networks have done a pretty good job of marketing themselves to certain applications and certain types of customers, but in terms of total throughput and price per megabit, submarine cables, generally speaking, are going to be a much more effective solution.
Q: Can one claim that deploying subsea fiber cables may be more efficient (cost, maintenance, etc.) for connecting two distant locations when compared to terrestrial networks?
Brian: That’s a question I get a lot, and the answer depends on what part of the world you’re operating in. You can connect two data centers across a terrestrial link, and a lot of people are doing that today. In the United States, for example, you have data centers on the West Coast in San Jose and in Ashburn, VA, and it’s terrestrial links that connect them. But when you want to connect continents, you typically have to cross an ocean, especially when you’re connecting North America to either APAC (Asia Pacific) or Europe. We have to remember, the top 5 ICPs are all based in the United States, so connecting to US-based data centers is a common requirement.
For some applications, like from Asia to Europe, you may be able to use the long terrestrial fiber route between the two, and some people are doing that as well, but politics and security concerns often come into play when traversing multiple countries along the way. It’s actually easier to drop a cable in international waters over 6,000 kilometers than to run a 6,000 kilometer cable through five, six, seven, or eight countries, get regulatory approvals, resolve security concerns, and so on. A cable sitting on the bottom of an ocean 3 to 5 kilometers down is very hard to get to, which actually makes it safer than a cable crossing multiple countries, some of which may have political strife; a cable on land is much easier to access, and therefore far less safe.
Q: What’s the estimated bandwidth for these new cable systems?
Paul: We have some figures on this. For example, the Faster cable, which is one that Google has invested in, has six fiber pairs. Let’s assume these are running 100G waves, and I think the latest announcement is that they can run over 100 of these 100G waves per fiber pair. Multiply that by six fiber pairs and we’re talking about 60 terabits per second of potential capacity on the cable, based on the number of fiber pairs and the current technology.
But going forward, who knows? Brian can talk to this, but there’s 400G on the horizon at some point and the sky’s the limit. But the Faster cable right now, for example, we believe has potential capacity of 60 terabits today, and as technology improves, that may, in fact, increase.
Brian: Yeah, and you have some announcements talking about 160 terabit cables across the Atlantic Ocean using 8 fiber pairs, so like Paul says, every time we think we’ve hit the limit, we find a new way to push more bits down that strand of fiber under the ocean. So the ultimate information-carrying capacity on a cable is really a snapshot in time with the technology available at the time of the question.
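The capacity figures quoted in this exchange are straightforward multiplication; a one-line helper makes the arithmetic explicit. The 200-waves-per-pair figure for the 8-pair example is inferred from the 160-terabit total, not an announced specification.

```python
def cable_capacity_tbps(fiber_pairs: int, waves_per_pair: int,
                        gbps_per_wave: int) -> float:
    """Potential capacity = pairs x waves x rate, in Tbps."""
    return fiber_pairs * waves_per_pair * gbps_per_wave / 1000

# Faster, per Paul's numbers: 6 pairs x 100 waves x 100G
print(cable_capacity_tbps(6, 100, 100))  # 60.0 Tbps
# An 8-pair transatlantic design at 160 Tbps implies ~200 x 100G per pair
print(cable_capacity_tbps(8, 200, 100))  # 160.0 Tbps
```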