Ciena’s Pradap Rajagopal explores why Australia’s rise as an AI infrastructure hub depends on more than land, energy, and compute. As AI factories scale nationwide, high-capacity optical networks will be critical to moving data, connecting GPUs, and enabling distributed AI services.

Australian data centre capacity has grown tremendously over the past decade. Despite a population of just under 28 million, Australia has emerged with one of the largest data centre footprints in the world on a per capita basis: approximately 1,400 MW of IT load is operational today, projected to grow to around 3,200 MW by 2030.

So, why has Australia become such a disproportionate data centre heavyweight? There are three key factors:

  • Vast geographic space: The sheer landmass required to build sprawling, multi-building hyperscale campuses and their necessary power substations is readily available.
  • Energy availability and stability: AI is power-hungry. Australia offers a well-regulated and politically stable environment that underpins energy security, a critical factor when compute downtime costs millions. Furthermore, as hyperscalers aggressively pursue net-zero emissions, Australia's massive potential for utility-scale renewable energy (solar and wind) makes it an ideal location to sustainably power these operations.
  • Cooling innovation: As the driest inhabited continent, Australia faces significant water constraints. This has forced local operators to become global pioneers in high-efficiency cooling, moving away from heavy water consumption and adopting closed-loop systems, greywater recycling, and direct-to-chip liquid cooling.

A critical but less frequently discussed factor is network connectivity into Australia. The past decade has seen a frenzy of submarine cable builds, enabling the seamless ingress and egress of cloud workloads. This positions Australia as a digital safe haven, supporting data sovereignty and availability zones with optimal latency.

To date, data centres in Australia have been built for “general purpose” compute: racks of CPUs providing cloud computing for enterprise (including government) and consumer applications.

However, we now see the deployment of large-scale data centres designed for AI workloads in the form of “AI factories”: massive GPU deployments that will straddle the entire continent. This will provide critical sovereign AI capabilities and (when combined with the favourable factors outlined above) drive the need for robust, high-capacity connectivity to support AI workloads operating at national and regional scale.

The three pillars of AI network demand

Land, energy, and water go hand-in-hand with any discussion of AI factory deployments. Throwing GPUs into the mix only adds to the complexity. However, what’s not always apparent, but just as critical, is network connectivity to, from, and between AI factories.

We are seeing three primary use cases driving the massive need for network infrastructure to support AI workloads:

1. Data ingestion – An AI model must be trained before it can do anything. This requires petabytes of raw, unstructured data. Whether it is a government agency uploading classified national archives, a massive enterprise pooling decades of telemetry data, or a hyperscaler looking to train on Australian consumer-specific content, moving this payload from data lakes into the AI factory requires dedicated high-capacity connectivity. This is only possible through high-speed optical networking. The network must handle sustained, terabit-scale bandwidth bursts to ensure expensive GPU clusters aren't sitting idle waiting for data to arrive.
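To get a feel for why terabit-scale links matter here, a back-of-the-envelope calculation (with purely illustrative dataset sizes and line rates, not figures from any specific deployment) shows how long a bulk data-lake transfer would take at different speeds:

```python
def transfer_time_hours(dataset_petabytes: float, link_terabits_per_s: float) -> float:
    """Ideal transfer time over a dedicated link, ignoring protocol overhead."""
    bits = dataset_petabytes * 1e15 * 8          # 1 PB = 1e15 bytes
    seconds = bits / (link_terabits_per_s * 1e12)
    return seconds / 3600

# Moving a hypothetical 10 PB training set:
print(round(transfer_time_hours(10, 0.1), 1))    # 100 Gb/s link -> 222.2 hours (over nine days)
print(round(transfer_time_hours(10, 1.6), 1))    # 1.6 Tb/s optical capacity -> 13.9 hours
```

Even this idealised arithmetic makes the point: at sub-terabit rates, expensive GPU clusters would sit idle for days waiting on a single ingest job.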

2. Scale across (distributed compute) – As detailed in Brodie Gage’s blog, training the next generation of AI models increasingly requires more GPUs than can be practically powered within a single data centre. As a result, GPU clusters are being distributed across multiple facilities, often separated by hundreds or thousands of kilometres. For effective AI training, these distributed GPUs must synchronise their parameters in real-time. This is a departure from standard “Data Centre Interconnect” and requires orders of magnitude higher bandwidth and loss-intolerant, synchronous connectivity.
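Distance itself imposes a hard floor on synchronisation delay: light in fibre travels at roughly c divided by the fibre's refractive index, or about 5 microseconds per kilometre one way. A minimal sketch (assuming a typical single-mode refractive index of ~1.468 and an illustrative 900 km fibre route) estimates the round-trip propagation delay between two facilities:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
FIBRE_REFRACTIVE_INDEX = 1.468   # typical value for standard single-mode fibre

def fibre_rtt_ms(route_km: float) -> float:
    """Round-trip propagation delay over fibre, ignoring equipment and routing overhead."""
    one_way_s = route_km / (SPEED_OF_LIGHT_KM_S / FIBRE_REFRACTIVE_INDEX)
    return 2 * one_way_s * 1000

# A hypothetical ~900 km inter-city fibre route:
print(round(fibre_rtt_ms(900), 2))   # ~8.81 ms round trip, before any switching delay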

3. Inference – Historically, AI infrastructure requirements were dominated by training workloads, with inference deployed in a localised and infrastructure-light manner. That balance is now shifting.

AI systems are evolving from simple prompt-and-response interactions into more advanced workflows that involve reasoning, planning, and multistep decision-making. These capabilities rely on large pools of GPUs operating continuously at runtime, rather than on short, isolated inference events.

As inference becomes a sustained, always-on workload, it places new demands on the underlying infrastructure. Rather than being confined to a single facility, inference workloads are distributed across multiple data centres to meet availability, latency, scale, and cost-of-compute requirements. This drives the deployment of additional AI facilities across a wider geographic footprint, and closer to the end user.

It is this physical distribution that creates significant new demands on optical networks. Connectivity must scale alongside the expansion of inference infrastructure, providing predictable, high-capacity, low-latency transport between sites. The network must bind these distributed environments together, ensuring AI services operate as a coherent system without being constrained by local or regional bottlenecks.
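As a toy illustration of the latency-driven placement described above (the site names, latencies, and capacities are entirely hypothetical), a request scheduler might steer each inference job to the lowest-latency facility that still has spare GPU capacity:

```python
# Hypothetical sites: (name, round-trip latency to the user in ms, free GPUs)
sites = [
    ("sydney-1", 4.0, 0),        # closest, but fully utilised
    ("melbourne-1", 9.0, 128),
    ("perth-1", 38.0, 512),
]

def pick_site(sites, gpus_needed):
    """Choose the lowest-latency site that can host the request, or None."""
    candidates = [s for s in sites if s[2] >= gpus_needed]
    return min(candidates, key=lambda s: s[1])[0] if candidates else None

print(pick_site(sites, 64))   # melbourne-1: sydney-1 is nearer but has no free GPUs
```

The scheduling logic is trivial; what makes it workable in practice is the network underneath, which must keep inter-site latency low and predictable enough that spilling over to a more distant facility remains acceptable to the end user.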

The role of the network in scaling AI infrastructure

As Australia continues to grow as a hub for AI infrastructure, connectivity is becoming a primary design consideration. Land, energy, and compute remain essential, but the ability to move data efficiently is what enables AI infrastructure to scale. AI workloads introduce sustained high-bandwidth demand with strict latency and reliability requirements.

The use cases of data ingestion, distributed training, and inference reflect a shift in traffic patterns. Networks must support continuous large-scale data movement across dispersed sites. This requires high-capacity optical infrastructure designed for predictable performance at scale.

Australia’s investment in digital infrastructure and its stable operating environment position it well. Realising this potential, however, depends on the network scaling in step with compute, enabling AI systems to operate as a coherent, distributed platform rather than a collection of isolated facilities.