Oracle

Two narratives are converging: Silicon Valley's rush to add gigawatts of AI capacity and a quiet revival of bunkers, mines, and mountains as ultra-resilient data hubs. Recent headlines point to unprecedented AI infrastructure spending tied to OpenAI. The draw is physical security, thermal stability, data sovereignty, and a narrative of longevity in an era where outages and cyber-physical risks are rising. Geopolitics, regulation, and escalating outage impact are reshaping site selection and architectural choices. The AI build-out collides with grid interconnection queues, water scarcity, and rising scrutiny of carbon and noise. Set hard thresholds on PUE and WUE; require real-time telemetry and third-party assurance.
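The PUE and WUE thresholds above can be checked mechanically from telemetry. A minimal sketch follows; the threshold values, field names, and telemetry numbers are illustrative assumptions, not figures from any contract or standard:

```python
# Sketch: screening a site against hard PUE/WUE thresholds using
# telemetry from one reporting window. All numbers are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_equipment_kwh

# Hypothetical telemetry for one site over one window.
site = {"facility_kwh": 1_320_000, "it_kwh": 1_100_000, "water_l": 550_000}

site_pue = pue(site["facility_kwh"], site["it_kwh"])  # 1.32M / 1.10M = 1.2
site_wue = wue(site["water_l"], site["it_kwh"])       # 0.55M / 1.10M = 0.5

PUE_MAX, WUE_MAX = 1.3, 1.0  # illustrative contractual thresholds
compliant = site_pue <= PUE_MAX and site_wue <= WUE_MAX
print(f"PUE={site_pue:.2f} WUE={site_wue:.2f} compliant={compliant}")
```

In practice these checks would run continuously against the real-time telemetry the paragraph calls for, with third-party auditors verifying the meter data behind each term.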
Databricks is adding OpenAI's newest foundation models to its catalog for use via SQL or API, alongside previously introduced open-weight options gpt-oss 20B and 120B. Customers can now select, benchmark, and fine-tune OpenAI models directly where governed enterprise data already lives. The move raises the stakes in the race to make generative AI a first-class, governed workload inside data platforms rather than an external service tethered by integration and compliance gaps. For telecom and enterprise IT, it reduces friction for AI agents that must safely traverse customer, network, and operational data domains.
OpenAI introduced ChatGPT Pulse, a new capability that assembles personalized morning briefs and agendas without a prompt, indicating a clear shift from reactive chat to proactive, task-oriented assistance. Pulse generates five to ten concise reports while you sleep, then packages them as interactive cards inside ChatGPT. Each card contains an AI-generated summary with source links, and users can drill down, ask follow-up questions, or request new briefs. Beyond public web content, Pulse can tap ChatGPT Connectors, such as Gmail and Google Calendar, to highlight priority emails, synthesize threads, and build agendas from upcoming events. If ChatGPT memory is enabled, Pulse weaves in user preferences and past context to tailor briefs.
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative's planned capacity to nearly 7 gigawatts, roughly equivalent to several utility-scale power plants. Three sites (Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location) will be developed with Oracle following their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites in Lordstown, Ohio, and Milam County, Texas, will be developed with SB Energy, SoftBank's renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW, with the broader program claiming tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.
The CPU roadmap is strategically important because AI clusters depend on balanced CPU-GPU ratios and fast data pipelines that keep accelerators fed and utilized. Even as GPUs carry training and inference, CPUs govern input pipelines, feature engineering, storage I/O, service meshes, and containerized microservices that wrap models in production. More cores and threads at competitive power envelopes reduce bottlenecks around feeder tasks, scheduling, and data staging, improving accelerator utilization and lowering total cost per token or inference. Through this lens, a 256-core Arm-based Kunpeng in 2028 would directly affect how much AI throughput Ascend accelerators can sustain per rack.
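The feeder-bottleneck argument above can be made concrete with a toy utilization model. Every number below (per-core preprocessing rate, per-GPU demand, rack size) is an illustrative assumption, not a vendor specification:

```python
# Sketch: GPU utilization is capped by how fast the CPU-side input
# pipeline can stage data. Hypothetical numbers throughout.

def gpu_utilization(feeder_samples_per_s: float,
                    gpu_demand_samples_per_s: float) -> float:
    """Fraction of GPU capacity used when the input pipeline can only
    stage feeder_samples_per_s samples per second."""
    return min(1.0, feeder_samples_per_s / gpu_demand_samples_per_s)

# A rack of 8 GPUs, each consuming 5,000 samples/s at full utilization.
gpu_demand = 8 * 5_000  # 40,000 samples/s

# Compare a 64-core and a 128-core host CPU, each core assumed to
# preprocess 500 samples/s.
for cores in (64, 128):
    feeder = cores * 500
    util = gpu_utilization(feeder, gpu_demand)
    print(f"{cores} cores: feeder={feeder}/s, GPU utilization={util:.0%}")
```

Under these assumptions the 64-core host starves the rack at 80% utilization while 128 cores saturate it, which is the economic case for higher core counts in the Kunpeng roadmap: idle accelerator time is the most expensive waste in the cluster.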
OpenAI and NVIDIA unveiled a multi-year plan to deploy 10 gigawatts of NVIDIA systems, marking one of the largest single commitments to AI compute to date. The partners outlined an ambition to stand up AI "factories" totaling roughly 10 GW of power, equating to several million GPUs across multiple sites and phases as capacity and supply chains mature. NVIDIA plans to invest up to $100 billion in OpenAI, with tranches released as milestones are met; the first $10 billion aligns to completion of the initial 1 GW. The first waves will use NVIDIA's next-generation Vera Rubin systems beginning in the second half of 2026.
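A back-of-envelope check shows how 10 GW maps to "several million GPUs." The PUE and per-GPU system power below are illustrative assumptions; the article gives only the headline figures:

```python
# Sketch: from gigawatts of facility power to a rough GPU count.
# PUE and per-GPU power are assumed for illustration only.

def gpus_for_power(facility_gw: float, pue: float,
                   kw_per_gpu_system: float) -> int:
    """GPUs supportable by a facility after cooling/overhead (PUE),
    where kw_per_gpu_system includes the GPU's share of host CPU,
    network, and storage power."""
    it_kw = facility_gw * 1e6 / pue  # usable IT load in kW
    return int(it_kw / kw_per_gpu_system)

# 10 GW program, assumed PUE of 1.25, ~1.8 kW per GPU incl. host share.
print(gpus_for_power(10, 1.25, 1.8))
```

With those assumptions the answer lands around 4.4 million GPUs, consistent with the "several million" framing; tighter PUE or lower per-system power pushes the count higher.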
Gartner's latest outlook points to global AI spend hitting roughly $1.5 trillion in 2025 and exceeding $2 trillion in 2026, signaling a multi-year investment cycle that will reshape infrastructure, devices, and networks. This is not a short-lived hype curve; it is a capital plan. Hyperscalers are pouring money into data centers built around AI-optimized servers and accelerators, while device makers push on-device AI into smartphones and PCs at scale. For telecom and enterprise IT leaders, the message is clear: capacity, latency, and data gravity will dictate where value lands. Spending is broad-based. AI services and software are growing fast, but the heavy lift is in hardware and cloud infrastructure.
Lumen has introduced Wavelength RapidRoutes, a pre-engineered 100G/400G service with a 20-day delivery SLA aimed at removing months-long bottlenecks from enterprise and hyperscaler connectivity. The company is packaging pre-defined, high-demand optical paths as a catalog of ready-to-deploy waves, removing custom design cycles from many standard routes. Lumen's RapidRoutes offers 100G and up to 400G wavelength services on prioritized intercity routes with an industry-forward 20-day service delivery SLA, shifting the customer experience from quote-engineer-build to select-provision-activate on pre-engineered paths. A portal-enabled experience with AI-driven tools and more than 300 automated workflows underpins ordering, change management, and capacity scaling.
AI buildouts and multi-cloud scale are stressing data center interconnect, making high-capacity, on-demand metro connectivity a priority for enterprises. Training pipelines, retrieval-augmented generation, and model distribution are shifting traffic patterns from north-south to high-volume east-west across metro clusters of data centers and cloud on-ramps. This is the backdrop for Lumen Technologies' push to deliver up to 400Gbps Ethernet and IP services in more than 70 third-party, cloud on-ramp ready facilities across 16 U.S. metro markets. The draw is operational agility: bandwidth provisioning in minutes, scaling up to 400Gbps per service, and consumption-based pricing that aligns spend with variable AI and data movement spikes.
SoftBank will invest $2 billion in Intel, taking roughly a 2% stake at $23 per share and becoming one of Intel's largest shareholders. It is a financial vote of confidence in a company trying to reestablish process leadership, scale a foundry business, and convince marquee customers to commit to external wafer orders. SoftBank has been assembling an AI supply-chain franchise that spans IP, compute, and infrastructure. It owns Arm, agreed to acquire Arm server CPU designer Ampere Computing, injected massive capital into OpenAI, and aligned with Oracle under the Stargate hyperscale AI initiative backed by the current U.S. administration.
OpenAI has confirmed its role in a $30 billion-per-year cloud infrastructure deal with Oracle, marking one of the largest cloud contracts in tech history. Part of the ambitious Stargate project, the deal aims to support OpenAI's growing demand for compute resources, with 4.5 GW of capacity dedicated to training and deploying advanced AI models. The partnership positions Oracle as a major player in the AI cloud arms race while signaling OpenAI's shift toward vertically integrated infrastructure solutions.
OpenAI's Stargate project, a $500B plan to build global AI infrastructure, is facing delays in the U.S. due to rising tariffs and economic uncertainty. While the first phase in Texas slows, OpenAI is shifting focus internationally with "OpenAI for Countries," a new initiative to co-build sovereign AI data centers worldwide. Backed by Oracle and SoftBank, Stargate is designed to support massive AI workloads and reshape global compute power distribution.
