OpenAI, Oracle, SoftBank: 5 Stargate AI Data Centers

Implications of OpenAI's five Stargate sites for AI infrastructure

OpenAI's latest buildout with Oracle and SoftBank accelerates a US-scale AI compute footprint measured in gigawatts, not megawatts.

Stargate expansion: key facts and sites

OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative's planned capacity to nearly 7 gigawatts, roughly equivalent to several utility-scale power plants. Three sites (Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location) will be developed with Oracle under their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites, in Lordstown, Ohio and Milam County, Texas, will be developed with SB Energy, SoftBank's renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW. The broader program claims tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.

Operating model, partners, and multi-GW scale

Stargate has evolved into an umbrella brand for OpenAI-operated data center projects outside its separate arrangement with Microsoft. In Abilene, Oracle primarily owns and runs the facility on Oracle Cloud Infrastructure (OCI), with OpenAI as an anchor tenant; construction management is led by Crusoe, and the campus is slated to reach about 1.4 GW across eight halls of roughly 100 MW each, with more than 400,000 GPUs at full build. In parallel, OpenAI announced a separate strategic pact with Nvidia to deploy up to 10 GW of AI capacity across multiple sites, backed by up to $100 billion in staged investment tied to each gigawatt deployed. The initial Nvidia system, built on the next-generation Vera Rubin platform, targets service in the second half of 2026. Internationally, the company has flagged potential large-scale projects under Stargate UK and Stargate UAE.
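
A quick back-of-envelope check helps calibrate these figures. The Python sketch below divides the cited campus power by the cited GPU count; the PUE value is an illustrative assumption, not a disclosed figure:

# Sanity check on the Abilene figures cited above.
# Assumption (not disclosed): PUE of 1.2 for a liquid-cooled campus.
campus_power_w = 1.4e9   # ~1.4 GW campus power at full build
gpu_count = 400_000      # "more than 400,000 GPUs"
pue = 1.2                # assumed power usage effectiveness

it_power_w = campus_power_w / pue
print(f"Implied IT power per GPU slot: {it_power_w / gpu_count:,.0f} W")
# ~2,900 W per accelerator slot, consistent with rack-scale
# liquid-cooled systems rather than air-cooled servers.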

US policy tailwinds and siting constraints

US policymakers have framed large AI infrastructure as a national competitiveness priority, with calls to streamline permitting and interconnection reflecting an explicit "AI race" narrative amid rising investments in China and allied markets. That policy tailwind may help siting and interconnect timelines, but it does not remove the grid, supply chain, and local land-use constraints that have slowed other hyperscale projects.

Impact on telecom networks and cloud infrastructure

This wave of AI campuses will stress-test power, network, and supply chains while redrawing where and how high-density compute meets the network edge.

Power and cooling as primary constraints

Nearly 7 GW of incremental load means transmission upgrades, interconnection queue navigation, and stringent power quality and reliability requirements across ERCOT, PJM, and the Western Interconnection. The Texas and New Mexico sites align with abundant land, fast-growing solar and wind, and evolving transmission buildouts; Ohio offers proximity to Midwest manufacturing loads and evolving grid capacity. Expect hybrid power procurement (long-duration PPAs, utility-scale storage, and potentially dispatchable resources) to maintain high uptime for training clusters. Water and cooling are equally material: direct-to-chip liquid cooling, warm-water loops aligned with ASHRAE TC 9.9 guidance, and heat-reuse options will be essential to manage density and support sustainability claims.
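
To make the grid impact concrete, here is a minimal sizing sketch in Python; the load factor and PPA price are illustrative assumptions, not figures from OpenAI or its partners:

# Converts a nameplate campus load into annual energy and PPA cost.
# All inputs below are planning assumptions for illustration.
campus_mw = 600        # e.g., the ~600 MW Abilene expansion
load_factor = 0.85     # assumed average utilization for training
ppa_usd_per_mwh = 45   # assumed long-term PPA price

annual_mwh = campus_mw * 8760 * load_factor          # 8,760 hours/year
annual_cost_musd = annual_mwh * ppa_usd_per_mwh / 1e6
print(f"Annual energy: {annual_mwh:,.0f} MWh")
print(f"Annual PPA cost: ${annual_cost_musd:,.0f}M")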

Backhaul and DCI: 400G/800G and multi-terabit scaling

Training-scale clusters demand massive east-west traffic and low-latency, high-throughput connectivity between availability zones and regions. Carriers and dark-fiber providers should anticipate multi-terabit data center interconnect (DCI) between these new campuses and established hubs, accelerating 400G/800G coherent waves, ROADM upgrades, and modern fiber routes with high-count cables. Inside the metros, high-fiber-count laterals, neutral meet-me facilities, and cross-connect density become differentiators. For cloud on-ramps, OCI's expanding footprint around Stargate locations will attract enterprises seeking proximity for AI training, fine-tuning, and inference, with peering to multiple CSPs and IXPs to reduce egress costs and latency.
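
As a rough illustration of what multi-terabit DCI implies for wave counts, the sketch below sizes an 800G build; the bandwidth target and protection scheme are assumptions for illustration:

import math

# Rough DCI sizing: coherent waves needed for a target inter-campus
# bandwidth. Target and protection factor are assumptions.
target_tbps = 20       # assumed east-west DCI target between campuses
wave_gbps = 800        # 800G coherent wavelength
protection = 2         # assumed 1+1 protection over diverse routes

working_waves = math.ceil(target_tbps * 1000 / wave_gbps)
print(f"{working_waves} working waves, "
      f"{working_waves * protection} including protection")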

Supply, construction, and delivery risks

Even with Nvidia's staged investments, GPU availability, liquid-cooling supply, switch silicon, and power gear (transformers, switchgear) remain gating items. Construction capacity will be stretched across parallel projects, and cost inflation or trade uncertainty on advanced components could slow timelines. The financing stack (multi-party JVs, leaseback models, and long-term offtake agreements) adds accounting complexity that demands disciplined governance and transparency as projects scale.

Strategic guidance for operators and enterprises

Network providers, cloud partners, and large enterprises should align roadmaps now to capture adjacency value and mitigate risk as compute gravitates to these new hubs.

Actions for telecom carriers and fiber operators

Prioritize metro builds around Abilene, Shackelford County, Doña Ana County, Lordstown, and the forthcoming Midwest site with diverse, low-latency routes and SLA-backed DCI. Standardize on 400G/800G optics, deploy open optical line systems for scalability, and reserve conduit in anticipation of follow-on halls. Develop neutral interconnection campuses near Stargate sites with robust cross-connect markets. Bundle power-smart services (demand response, microgrid integration, and heat-reuse partnerships) to bolster sustainability credentials and win RFP points.
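
Route diversity decisions ultimately reduce to distance and latency budgets. A minimal sketch, using the typical ~4.9 microseconds per kilometer propagation delay in single-mode fiber and hypothetical route lengths:

# Latency budget per candidate route; route lengths are hypothetical.
FIBER_US_PER_KM = 4.9  # typical propagation delay in single-mode fiber

routes_km = {"primary": 350, "diverse": 510}
for name, km in routes_km.items():
    one_way_ms = km * FIBER_US_PER_KM / 1000
    print(f"{name}: {km} km -> {one_way_ms:.2f} ms one-way, "
          f"{2 * one_way_ms:.2f} ms RTT")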

Actions for cloud/AI partners and large enterprises

Architect for multi-cloud where OCI proximity to Stargate provides training adjacency, and apply sovereign or sector-specific controls as data gravity increases. Negotiate reserved capacity with clear GPU roadmaps (e.g., migration from current accelerators to Nvidia's Vera Rubin platform) to de-risk model evolution. Lock in long-term renewable energy certificates or direct PPAs, and embed power telemetry into FinOps to track cost per token and per inference. Standardize facilities integration around OCP designs and liquid-cooling readiness to reduce deployment variance across sites.
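
Embedding power telemetry into FinOps can start with a simple unit-economics model. The sketch below estimates energy cost per million tokens served; every input is an assumption for illustration, and a real model would add hardware amortization:

# Energy cost per million inference tokens. All inputs are assumed.
gpu_power_kw = 1.0        # assumed per-GPU draw under load
pue = 1.2                 # assumed facility overhead
usd_per_kwh = 0.05        # assumed blended energy price
tokens_per_sec = 2500     # assumed per-GPU serving throughput

kwh_per_mtok = gpu_power_kw * pue / (tokens_per_sec * 3600) * 1e6
print(f"{kwh_per_mtok:.3f} kWh and ${kwh_per_mtok * usd_per_kwh:.4f} "
      f"per 1M tokens at ${usd_per_kwh}/kWh")
# Energy is a small slice of cost per token; GPU amortization
# typically dominates, which is why utilization matters most.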

Milestones and dependencies through 2027

Key milestones include Abilene's ramp to multiple halls, the disclosure and permitting of the Midwest site, initial power-on of Vera Rubin clusters in 2H26, and interconnection queue outcomes across ERCOT and PJM. Internationally, monitor Stargate UK/UAE siting and how export controls shape GPU allocations. In the US, track federal and state actions on transmission permitting, data center incentives, and water-use regulations, which will influence siting economics.

Key risks and open questions

Despite the scale, several uncertainties could reshape the trajectory and ROI of Stargate-era builds.

Scale vs. efficiency

OpenAI is betting that larger clusters continue to unlock performance and new revenue, but recent models from competitors trained with fewer resources suggest efficiency breakthroughs could erode the advantage of sheer scale. Unit economics will hinge on sustained demand for high-value training and monetizable inference, not just capacity installed.

ESG, community acceptance, and water/land use

Large AI campuses bring land, water, noise, and emissions scrutiny, and some communities have resisted hyperscale projects. Robust sustainability strategies (clean energy matching, storage-backed firming, water stewardship, and credible PUE/WUE reporting) will be essential to maintain momentum and the social license to operate.
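
PUE and WUE are straightforward to compute from site telemetry, which is what makes credible reporting feasible. A minimal sketch with invented sample readings:

# Standard definitions: PUE = total facility energy / IT energy;
# WUE = site water use (liters) / IT energy (kWh).
# Monthly readings below are invented for illustration.
total_facility_kwh = 1_260_000
it_kwh = 1_050_000
water_liters = 1_890_000

print(f"PUE: {total_facility_kwh / it_kwh:.2f}")
print(f"WUE: {water_liters / it_kwh:.2f} L/kWh")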

Governance, financing, and vendor concentration

Multi-party financing and offtake structures heighten operational and accounting risk if timelines slip or demand softens, while tight coupling to specific GPU generations increases supply-chain exposure. Diversifying accelerator options where feasible, maintaining transparent KPIs, and staging capex to real demand signals can mitigate concentration risks.

Bottom line: power-centric AI buildout

Stargate's five new sites, paired with Oracle's operator role and SoftBank's clean energy alignment, and amplified by Nvidia's 10 GW roadmap, mark a decisive turn toward power-centric AI infrastructure that will rewire networks, supply chains, and siting strategies.

2025 action plan

Secure fiber routes into the named metros, pre-negotiate interconnect and cross-connect capacity, align liquid-cooling and power-readiness standards across facilities, and embed energy and network telemetry into AI FinOps so that capacity growth translates into sustainable, defensible unit economics.

