Implications of OpenAI's five Stargate sites for AI infrastructure
OpenAI's latest buildout with Oracle and SoftBank accelerates a US-scale AI compute footprint measured in gigawatts, not megawatts.
Stargate expansion: key facts and sites
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative's planned capacity to nearly 7 gigawatts, roughly the output of several utility-scale power plants. Three sites (Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location) will be developed with Oracle under their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites, in Lordstown, Ohio and Milam County, Texas, will be developed with SB Energy, SoftBank's renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW. The broader program claims tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once the sites are live.
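For a sense of scale, a back-of-envelope check shows why "nearly 7 GW" is plant-scale rather than campus-scale. This is a minimal sketch; the roughly 1 GW-per-reactor figure and the 24/7 utilization are illustrative assumptions, not program disclosures:

```python
# Back-of-envelope scale check for the ~7 GW Stargate program.
# Assumptions (not from the announcement): one large nuclear
# reactor delivers roughly 1 GW, and training load runs near 24/7.

PLANNED_CAPACITY_GW = 7.0   # nearly 7 GW planned under Stargate
REACTOR_OUTPUT_GW = 1.0     # assumed output of a large reactor
HOURS_PER_YEAR = 8760

reactor_equivalents = PLANNED_CAPACITY_GW / REACTOR_OUTPUT_GW
annual_energy_twh = PLANNED_CAPACITY_GW * HOURS_PER_YEAR / 1000  # GWh -> TWh

print(f"~{reactor_equivalents:.0f} large reactors' worth of load")
print(f"~{annual_energy_twh:.0f} TWh/year at full utilization")
# -> ~7 reactors and ~61 TWh/year, before any PUE overhead.
```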
Operating model, partners, and multi-GW scale
Stargate has evolved into an umbrella brand for OpenAI-operated data center projects outside its separate partnership with Microsoft. In Abilene, Oracle primarily owns and runs the facility on Oracle Cloud Infrastructure (OCI), with OpenAI as an anchor tenant; construction management is led by Crusoe, and the campus is slated to reach about 1.4 GW across eight ~100 MW halls, with more than 400,000 GPUs at full build. In parallel, OpenAI announced a separate strategic pact with Nvidia to deploy up to 10 GW of AI capacity over multiple sites, backed by up to $100 billion in staged investment tied to each gigawatt deployed. The initial Nvidia system, based on the next-generation Vera Rubin GPUs, targets service in the second half of 2026. Internationally, the company has flagged potential large-scale projects under Stargate UK and Stargate UAE.
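Dividing the stated campus power by the stated GPU count gives a rough all-in planning envelope per accelerator. This sketch lumps cooling and distribution overhead into the per-GPU figure, so it is a ceiling on chip-level draw, not a chip specification:

```python
# Implied all-in power per GPU at the Abilene campus, using the
# article's own figures. Facility overhead (cooling, conversion
# losses) is folded into the per-GPU number, so this overstates
# actual chip-level draw.

CAMPUS_POWER_MW = 1400   # ~1.4 GW at full build
GPU_COUNT = 400_000      # >400,000 GPUs at full build

watts_per_gpu = CAMPUS_POWER_MW * 1_000_000 / GPU_COUNT
print(f"~{watts_per_gpu / 1000:.1f} kW all-in per GPU")  # ~3.5 kW
# Consistent with kW-class accelerators plus networking, storage,
# and cooling overhead per deployed GPU.
```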
US policy tailwinds and siting constraints
US policymakers have framed large AI infrastructure as a national competitiveness priority, with calls to streamline permitting and interconnection reflecting an explicit "AI race" narrative amid rising investment in China and allied markets. That policy tailwind may help siting and interconnect timelines, but it does not remove the grid, supply chain, and local land-use constraints that have slowed other hyperscale projects.
Impact on telecom networks and cloud infrastructure
This wave of AI campuses will stress-test power, network, and supply chains while redrawing where and how high-density compute meets the network edge.
Power and cooling as primary constraints
Nearly 7 GW of incremental load means transmission upgrades, interconnection queue navigation, and stringent power quality and reliability requirements across ERCOT, PJM, and the Western Interconnection. The Texas and New Mexico sites align with abundant land, fast-growing solar and wind, and evolving transmission buildouts; Ohio offers proximity to Midwest manufacturing load and expanding grid capacity. Expect hybrid power procurement (long-duration PPAs, utility-scale storage, and potentially dispatchable resources) to maintain high uptime for training clusters. Water and cooling are equally material: direct-to-chip liquid cooling, warm-water loops aligned with ASHRAE TC 9.9 guidance, and heat-reuse options will be essential to manage density and support sustainability claims.
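To see why direct-to-chip liquid cooling becomes unavoidable at these densities, consider the basic heat-transport relation P = m_dot * c_p * delta_T. The 100 kW rack and 10 K coolant temperature rise below are illustrative assumptions, not disclosed design points:

```python
# Coolant flow needed to carry away rack heat: P = m_dot * c_p * dT.
# Illustrative assumptions: a 100 kW AI rack and a 10 K coolant
# temperature rise (within typical ASHRAE warm-water envelopes).

RACK_POWER_W = 100_000   # assumed high-density AI rack
CP_WATER = 4186          # J/(kg*K), specific heat of water
DELTA_T_K = 10           # assumed supply-to-return rise

mass_flow_kg_s = RACK_POWER_W / (CP_WATER * DELTA_T_K)
flow_l_min = mass_flow_kg_s * 60  # ~1 kg of water per litre

print(f"~{mass_flow_kg_s:.1f} kg/s, i.e. ~{flow_l_min:.0f} L/min per rack")
# -> ~2.4 kg/s (~143 L/min) per rack: far more heat than air handling
#    can move economically, which is why warm-water loops dominate.
```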
Backhaul and DCI: 400G/800G and multi-terabit scaling
Training-scale clusters demand massive east-west traffic and low-latency, high-throughput connectivity between availability zones and regions. Carriers and dark-fiber providers should anticipate multi-terabit data center interconnect (DCI) between these new campuses and established hubs, accelerating 400G/800G coherent waves, ROADM upgrades, and modern fiber routes with high-count cables. Inside the metros, high-fiber-count laterals, neutral meet-me facilities, and cross-connect density become differentiators. For cloud on-ramps, OCI's expanding footprint around Stargate locations will attract enterprises seeking proximity for AI training, fine-tuning, and inference, with peering to multiple CSPs and IXPs to reduce egress costs and latency.
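As a rough dimensioning exercise, the wave count for a single campus-to-hub route falls out of simple division. The 20 Tbps demand figure and the 1+1 protection factor below are assumptions chosen to illustrate the method, not traffic forecasts:

```python
import math

# Sizing a DCI route: how many coherent waves carry a target demand?
# Assumptions for illustration: 20 Tbps of east-west demand between a
# Stargate campus and a metro hub, 400G vs 800G per wavelength, and
# one protection path (2x waves) for route diversity.

DEMAND_TBPS = 20
PROTECTION_FACTOR = 2  # assumed 1+1 path diversity

for wave_gbps in (400, 800):
    waves = math.ceil(DEMAND_TBPS * 1000 / wave_gbps)
    print(f"{wave_gbps}G: {waves} working waves, "
          f"{waves * PROTECTION_FACTOR} with protection")
# -> 400G: 50 working (100 protected); 800G: 25 working (50 protected).
#    800G halves the channel count, easing C-band spectrum planning.
```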
Supply, construction, and delivery risks
Even with Nvidia's staged investments, GPU availability, liquid-cooling supply, switch silicon, and power gear (transformers, switchgear) remain gating items. Construction capacity will be stretched across parallel projects, and cost inflation or trade uncertainty on advanced components could slow timelines. The financing stack (multi-party JVs, leaseback models, and long-term offtake) adds accounting complexity that demands disciplined governance and transparency as projects scale.
Strategic guidance for operators and enterprises
Network providers, cloud partners, and large enterprises should align roadmaps now to capture adjacency value and mitigate risk as compute gravitates to these new hubs.
Actions for telecom carriers and fiber operators
Prioritize metro builds around Abilene, Shackelford County, Doña Ana County, Lordstown, and the forthcoming Midwest site with diverse, low-latency routes and SLA-backed DCI. Standardize on 400G/800G optics, deploy open optical line systems for scalability, and reserve conduit in anticipation of follow-on halls. Develop neutral interconnection campuses near Stargate sites with robust cross-connect markets. Bundle power-smart services (demand response, microgrid integration, and heat-reuse partnerships) to bolster sustainability credentials and win RFP points.
Actions for cloud/AI partners and large enterprises
Architect for multi-cloud where OCI proximity to Stargate provides training adjacency, and leverage sovereign or sector-specific controls as data gravity increases. Negotiate reserved capacity with clear GPU roadmaps (e.g., migration from current accelerators to Nvidia Vera Rubin) to de-risk model evolution. Lock in long-term renewable energy certificates or direct PPAs, and embed power telemetry into FinOps to track cost per token and per inference. Standardize facilities integration around OCP designs and liquid-cooling readiness to reduce deployment variance across sites.
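One way to make cost per token concrete is to fold metered power into the FinOps roll-up. The function below is a minimal sketch: the PUE, tariff, and throughput inputs are placeholders, and it deliberately ignores amortized hardware and software costs:

```python
# Energy-inclusive cost per token: a minimal FinOps sketch.
# Placeholder inputs; real deployments would feed metered IT power,
# the site's actual PUE, and contracted energy rates, then layer
# amortized capex and software costs on top.

def energy_cost_per_million_tokens(
    it_power_kw: float,       # metered IT draw of the serving cluster
    pue: float,               # facility overhead multiplier
    price_per_kwh: float,     # blended energy rate, $/kWh
    tokens_per_second: float  # sustained cluster throughput
) -> float:
    facility_kw = it_power_kw * pue
    dollars_per_hour = facility_kw * price_per_kwh
    tokens_per_hour = tokens_per_second * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

# Example with assumed numbers: 2 MW IT load, PUE 1.2, $0.06/kWh,
# and 5M tokens/s sustained across the cluster.
print(f"${energy_cost_per_million_tokens(2000, 1.2, 0.06, 5_000_000):.4f} "
      "per 1M tokens (energy only)")  # -> $0.0080
```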
Milestones and dependencies through 2027
Key milestones include Abilene's ramp to multiple halls, the disclosure and permitting of the Midwest site, initial power-on of Vera Rubin clusters in 2H26, and interconnection queue outcomes across ERCOT and PJM. Internationally, monitor Stargate UK/UAE siting and how export controls shape GPU allocations. In the US, track federal and state actions on transmission permitting, data center incentives, and water-use regulations, which will influence siting economics.
Key risks and open questions
Despite the scale, several uncertainties could reshape the trajectory and ROI of Stargate-era builds.
Scale vs. efficiency
OpenAI is betting that larger clusters continue to unlock performance and new revenue, but recent models from competitors trained with fewer resources suggest efficiency breakthroughs could erode the advantage of sheer scale. Unit economics will hinge on sustained demand for high-value training and monetizable inference, not just capacity installed.
ESG, community acceptance, and water/land use
Large AI campuses bring land, water, noise, and emissions scrutiny, and some communities have resisted hyperscale projects. Robust sustainability strategies (clean energy matching, storage-backed firming, water stewardship, and credible PUE/WUE reporting) will be essential to maintain momentum and the social license to operate.
Governance, financing, and vendor concentration
Multi-party financing and offtake structures heighten operational and accounting risk if timelines slip or demand softens, while tight coupling to specific GPU generations increases supply-chain exposure. Diversifying accelerator options where feasible, maintaining transparent KPIs, and staging capex to real demand signals can mitigate concentration risks.
Bottom line: power-centric AI buildout
Stargate's five new sites, paired with Oracle's operator role and SoftBank's clean-energy alignment, and amplified by Nvidia's 10 GW roadmap, mark a decisive turn toward power-centric AI infrastructure that will rewire networks, supply chains, and siting strategies.
2025 action plan
Secure fiber routes into the named metros, pre-negotiate interconnect and cross-connect capacity, align liquid-cooling and power-readiness standards across facilities, and embed energy and network telemetry into AI FinOps so that capacity growth translates into sustainable, defensible unit economics.