Mitch Simcoe is Director of Global Consulting at Ciena. He leads the consulting practice for DC Interconnect services and multi-tenant datacenter operators.


There’s a major transformation occurring in the IT space as enterprises shift more of their IT spend from Private Cloud towards Hybrid and Public Cloud services. This shift brings a number of challenges, including how to migrate enormous amounts of content (typically N x 10TB) to set up an initial instance in the cloud. Traditional IP networks connecting an enterprise to a cloud datacenter (DC) typically run at 1Gbps or less. At this rate, uploading N x 10TB of content can take weeks.
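To put that in perspective, here is a quick back-of-envelope calculation. The 80% efficiency factor is an assumption to account for protocol overhead; real throughput varies with protocol and link conditions.

```python
# Rough transfer-time estimate for bulk uploads over an IP link.
# The 80% efficiency factor is an assumed allowance for protocol overhead.

def transfer_days(size_tb: float, line_rate_gbps: float, efficiency: float = 0.8) -> float:
    """Terabytes -> terabits -> seconds at the effective rate -> days."""
    seconds = size_tb * 8 * 1000 / (line_rate_gbps * efficiency)
    return seconds / 86400

for size_tb in (10, 50, 100):
    print(f"{size_tb}TB at 1Gbps: ~{transfer_days(size_tb, 1):.1f} days")
# 10TB at 1Gbps:  ~1.2 days
# 50TB at 1Gbps:  ~5.8 days
# 100TB at 1Gbps: ~11.6 days -- multiples of 10TB quickly stretch into weeks
```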

Take Amazon Web Services, for example. As one of the market leaders in cloud services and storage, Amazon found moving huge datasets such an important need for customers that it took a page from the retail side of its business and introduced a hard-drive-based Cloud DC Migration service. An enterprise manually transfers 50TB of content onto one or more hard-drive appliances, then ships them by truck to the Cloud Provider, which uploads the content into its DC.

Despite the reduced transport time, the enterprise still spends several weeks manually copying the content onto the hard drives and verifying that the transfer is secure and reliable. So in reality, this approach solves only part of the problem: it reduces the transport time from enterprise to cloud DC, but in doing so it shifts much of the heavy lifting onto the enterprise.

[Figure: AWS Import/Export Snowball]

Out-of-Date on Arrival

One of the key issues with datacenter-to-datacenter backup is that the enterprise’s data keeps changing while the backup runs, so the copy is already out of date by the time it completes. For example, if an enterprise in DC 1 wants to back up 3TB of content (the typical daily backup for a large enterprise) to a regional DC in a second metro, the task would take over 8 hours at sub-1Gbps speeds, as the quick calculation below shows.
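The arithmetic behind that 8-hour figure is straightforward, again assuming roughly 80% effective throughput on the link:

```python
# 3TB daily backup over a sub-1Gbps IP link (assumes ~80% effective throughput).
size_tb, line_rate_gbps, efficiency = 3, 1, 0.8
seconds = size_tb * 8 * 1000 / (line_rate_gbps * efficiency)  # 30,000 s
print(f"Backup window: {seconds / 3600:.1f} hours")  # ~8.3 hours
```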

Sure, the enterprise could upgrade its base connection between datacenters to reduce the backup time, but that extra bandwidth would likely sit idle for the rest of the day, making it an inefficient use of valuable IT budget.

[Figure: IP/MPLS network between enterprise DCs]

So how can we address the limitations of today’s IP-based datacenter interconnect (DCI) networks? By using a DC Connect Fabric service optimized for the high-performance applications of DCI.

A DC Connect Fabric service is defined by the following attributes:

  • Any-to-any connectivity between DCs on the fabric, whether within the same metro or across metros;
  • Direct connectivity to multiple public cloud services such as Google, Microsoft, or Amazon;
  • Rapid service turn-up, in days rather than weeks;
  • On-demand bandwidth at 1, 10, 40, or 100Gbps (see the sketch after this list); and
  • Low latency and in-flight encryption to meet performance and security concerns.
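To make the on-demand attribute concrete, here is a minimal sketch of what requesting a temporary bandwidth boost from such a fabric might look like. The endpoint URL, payload fields, and port identifiers are all hypothetical; actual fabric operators expose their own portals and APIs.

```python
import requests

# Minimal sketch of an on-demand bandwidth request to a DC Connect Fabric.
# Endpoint, field names, and port IDs are hypothetical, for illustration only.
FABRIC_API = "https://fabric.example.net/api/v1/connections"

order = {
    "a_end_port": "DC1-METRO-EAST-01",   # enterprise DC port (hypothetical)
    "z_end_port": "DC2-METRO-WEST-07",   # regional backup DC port (hypothetical)
    "bandwidth_gbps": 10,                # boost for the nightly backup window
    "start_time": "2016-06-01T01:00:00Z",
    "duration_hours": 1,                 # ~45-minute backup plus margin
    "encryption": "in-flight",           # wire-speed encryption, per the list above
}

response = requests.post(FABRIC_API, json=order, timeout=30)
response.raise_for_status()
print("Order accepted:", response.json().get("order_id"))
```

The point is the workflow: bandwidth is requested for a window, used, and released, rather than provisioned permanently.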

[Figure: DC Connect Fabric]

Let’s revisit the previous example, where an enterprise needs to back up 3TB of data daily. Introducing a DC Connect Fabric service with an on-demand 10Gbps connection allows the backup to complete in about 45 minutes, a 91% reduction from the 8-hour window the IP network required. It also limits exposure: should there be a failure, only 45 minutes of changes would be affected rather than 8 hours’ worth.
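Re-running the same arithmetic at 10Gbps shows where the 45-minute window and ~91% reduction come from. The exact minutes depend on the assumed protocol efficiency, but the ratio does not:

```python
# Same 3TB backup, comparing the 1Gbps IP link with a 10Gbps fabric connection.
def backup_minutes(size_tb, line_rate_gbps, efficiency=0.8):
    return size_tb * 8 * 1000 / (line_rate_gbps * efficiency) / 60

ip_link = backup_minutes(3, 1)    # ~500 min (over 8 hours)
fabric = backup_minutes(3, 10)    # ~50 min (roughly the 45-minute window)
print(f"Reduction: {100 * (1 - fabric / ip_link):.0f}%")  # ~90%
```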

[Figure: DC Connect Fabric backup, 8 hours vs. 45 minutes]

Lead Applications for DC Connect Fabric Services

A number of DCI applications (e.g., content distribution, load balancing, storage area networking) demand the high-bandwidth, low-latency performance that a DC Connect Fabric service makes possible. Lead applications include:

Offsite Data Backup: Recurring, off-hours backups require high bandwidth, as does active-active storage replication traffic.

Disaster Recovery: Following a disaster, an enterprise needs to transfer its entire content library from the backup location back to its production environment, which requires a high-bandwidth, high-performance DCI network.

Data Migration: As enterprises shift more of their IT resources to cloud DCs, they need to migrate their full dataset to establish their presence in the cloud. An on-demand connection of 10Gbps or more via the DC Connect Fabric cuts the migration time from weeks to hours, or even under an hour at 100Gbps.

Live Virtual Machine Migrations: As enterprises shift their applications into the cloud, performance must adjust in real time to meet user and customer needs. The on-demand nature of the DC Connect Fabric provides the bandwidth needed for real-time access to these additional virtual resources.


Revenue Ramp Up

Revenue acceleration is a key benefit for DC operators using this approach. Their customers’ connectivity needs can be addressed in days, which aligns with the operators’ business objectives for selling DC space, power, and cross-connects. On-demand bandwidth also enables new agile DC services that need only periodic DCI capacity, such as business continuity, disaster recovery, or the Amazon data migration example given earlier.

For enterprises, the benefit is flexibility: connectivity into a DC Connect Fabric service lets them reach the largest number of multi-tenant and public cloud datacenters, so they can move IT resources to whichever DCs offer the most economical terms for space, power, and cross-connects.

The enterprise shift of IT resources to multi-tenant and public clouds is only at the beginning of its adoption curve. As this trend accelerates, DCI connectivity from enterprises to cloud DCs, and between cloud DCs, will grow accordingly. To meet these requirements, managed service providers need to offer a DCI service that delivers maximum flexibility in reaching cloud DCs, rapid provisioning in days instead of weeks, and bandwidth on an on-demand basis.