SoftBank

Two narratives are converging: Silicon Valley's rush to add gigawatts of AI capacity and a quiet revival of bunkers, mines, and mountains as ultra-resilient data hubs. Recent headlines point to unprecedented AI infrastructure spending tied to OpenAI, while the appeal of hardened underground sites lies in physical security, thermal stability, data sovereignty, and longevity in an era of rising outages and cyber-physical risk. Geopolitics, regulation, and the escalating cost of downtime are reshaping site selection and architectural choices, even as the AI build-out collides with grid interconnection queues, water scarcity, and growing scrutiny of carbon and noise. Operators and their customers should set hard thresholds on PUE and WUE and require real-time telemetry and third-party assurance.
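As a concrete illustration of those efficiency thresholds, the sketch below computes PUE (total facility energy over IT energy) and WUE (liters of water per kWh of IT energy) from hypothetical meter readings; the numbers are placeholders, not figures from any facility mentioned here.

```python
# Minimal sketch: computing PUE and WUE from hypothetical facility telemetry.
# PUE = total facility energy / IT equipment energy   (dimensionless, >= 1.0)
# WUE = site water use / IT equipment energy          (liters per kWh)

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness over a reporting period."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness over the same period."""
    return water_liters / it_equipment_kwh

# Placeholder annual readings for a single hypothetical site.
total_kwh = 140_000_000   # all energy entering the facility
it_kwh    = 100_000_000   # energy delivered to IT equipment
water_l   = 180_000_000   # site water consumption

print(f"PUE: {pue(total_kwh, it_kwh):.2f}")       # 1.40
print(f"WUE: {wue(water_l, it_kwh):.2f} L/kWh")   # 1.80
```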
Databricks is adding OpenAI’s newest foundation models to its catalog for use via SQL or API, alongside previously introduced open-weight options gpt-oss 20B and 120B. Customers can now select, benchmark, and fine-tune OpenAI models directly where governed enterprise data already lives. The move raises the stakes in the race to make generative AI a first-class, governed workload inside data platforms rather than an external service tethered by integration and compliance gaps. For telecom and enterprise IT, it reduces friction for AI agents that must safely traverse customer, network, and operational data domains.
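To make "select and call the model where the governed data lives" concrete, here is a minimal Python sketch against Databricks' OpenAI-compatible model serving API; the workspace URL, token environment variable, and the endpoint name databricks-gpt-oss-120b are illustrative assumptions rather than identifiers confirmed by the announcement.

```python
# Minimal sketch: querying a catalog-served model through Databricks'
# OpenAI-compatible serving endpoints. Workspace URL, token variable, and
# endpoint name are illustrative placeholders, not confirmed identifiers.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],  # workspace personal access token (assumed env var)
    base_url="https://<your-workspace>.cloud.databricks.com/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-gpt-oss-120b",  # assumed endpoint name for the open-weight model
    messages=[
        {"role": "system", "content": "Answer using only governed enterprise data."},
        {"role": "user", "content": "Summarize yesterday's network incident tickets."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

The same serving endpoint can also be queried from SQL, for example via an ai_query call, which is how analysts can benchmark and compare models against governed tables without leaving the platform.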
OpenAI introduced ChatGPT Pulse, a new capability that assembles personalized morning briefs and agendas without a prompt, indicating a clear shift from reactive chat to proactive, task-oriented assistance. Pulse generates five to ten concise reports while you sleep, then packages them as interactive cards inside ChatGPT. Each card contains an AI-generated summary with source links, and users can drill down, ask follow-up questions, or request new briefs. Beyond public web content, Pulse can tap ChatGPT Connectors, such as Gmail and Google Calendar, to highlight priority emails, synthesize threads, and build agendas from upcoming events. If ChatGPT memory is enabled, Pulse weaves in user preferences and past context to tailor briefs.
Wayve’s end-to-end driving AI is now running in Nissan Ariya electric vehicles in Tokyo, marking a pragmatic step toward consumer deployment in 2027. The test vehicles combine a camera-first approach with radar and a lidar unit for redundancy, aligning with Japan’s dense urban environment and complex traffic patterns. The initial commercial target is “eyes on, hands off” Level 2 driver assistance, with drivers remaining responsible and ready to take over. Nvidia has signed a letter of intent for a potential $500 million investment in Wayve’s next funding round, reinforcing the compute-intensive nature of the program.
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative’s planned capacity to nearly 7 gigawatts—roughly equivalent to several utility-scale power plants. Three sites—Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location—will be developed with Oracle following their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites in Lordstown, Ohio and Milam County, Texas will be developed with SB Energy, SoftBank’s renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW, with the broader program claiming tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.
OpenAI and NVIDIA unveiled a multi‑year plan to deploy 10 gigawatts of NVIDIA systems, marking one of the largest single commitments to AI compute to date. The partners outlined an ambition to stand up AI “factories” totaling roughly 10GW of power, equating to several million GPUs across multiple sites and phases as capacity and supply chains mature. NVIDIA plans to invest up to $100 billion in OpenAI, with tranches released as milestones are met; the first $10 billion aligns to completion of the initial 1GW. The first waves will use NVIDIA’s next‑generation Vera Rubin systems beginning in the second half of 2026.
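A rough back-of-envelope supports the "several million GPUs" figure: assuming roughly 1.5 to 2 kW per accelerator once networking, cooling, and facility overhead are included (an assumption, not a number from the announcement), 10 GW works out to roughly five to seven million GPUs.

```python
# Back-of-envelope: how many accelerators could a 10 GW build-out power?
# The per-GPU wattages below are assumptions covering the accelerator plus its
# share of networking, cooling, and facility overhead, not announced figures.
total_power_w = 10e9  # 10 gigawatts

for per_gpu_w in (1_500, 2_000):  # assumed all-in watts per GPU
    gpus = total_power_w / per_gpu_w
    print(f"At {per_gpu_w} W/GPU all-in: ~{gpus / 1e6:.1f} million GPUs")
# At 1500 W/GPU all-in: ~6.7 million GPUs
# At 2000 W/GPU all-in: ~5.0 million GPUs
```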
SoftBank has validated a multi‑cell, end‑to‑end 5G link via a high‑altitude platform payload, marking a concrete step toward stratospheric coverage that works with standard smartphones. In a June field trial over Hachijō Island, Japan, SoftBank mounted a newly developed payload on a light aircraft at 3,000 meters to emulate a High Altitude Platform Station (HAPS) operating around 20 kilometers. The system stitched a millimeter‑wave feeder link at 26 GHz from a ground gateway to the aircraft with a sub‑2 GHz service link at 1.7 GHz from the aircraft to handsets, completing an end‑to‑end path through the 5G core.
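A quick free-space path-loss comparison, sketched below, shows why the 26 GHz feeder link is the more demanding half of the design; the 3,000 m trial altitude and the roughly 20 km HAPS altitude come from the description above, while antenna gains, rain fade, and other link-budget terms are deliberately ignored.

```python
# Minimal sketch: free-space path loss (FSPL) for the HAPS feeder and service links.
# FSPL(dB) = 20*log10(4*pi*d*f/c); antenna gains, rain fade, and margins are ignored.
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

links = [
    ("Feeder link, 26 GHz, trial altitude (3 km)",     3_000, 26e9),   # ~130 dB
    ("Feeder link, 26 GHz, HAPS altitude (~20 km)",   20_000, 26e9),   # ~147 dB
    ("Service link, 1.7 GHz, HAPS altitude (~20 km)", 20_000, 1.7e9),  # ~123 dB
]

for name, d, f in links:
    print(f"{name}: {fspl_db(d, f):.1f} dB")
```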
2025 has seen major telecom and tech M&A activity, including billion-dollar deals in fiber, AI, cloud, and cybersecurity. This monthly tracker details key acquisitions, like AT&T buying Lumen’s fiber assets and Google’s $32B move for Wiz, highlighting how consolidation is shaping the competitive landscape.
SoftBank will invest $2 billion in Intel, taking roughly a 2% stake at $23 per share and becoming one of Intel's largest shareholders. It is a financial vote of confidence in a company trying to reestablish process leadership, scale a foundry business, and convince marquee customers to commit to external wafer orders. SoftBank has been assembling an AI supply-chain franchise that spans IP, compute, and infrastructure. It owns Arm, agreed to acquire Arm server CPU designer Ampere Computing, injected massive capital into OpenAI, and aligned with Oracle under the Stargate hyperscale AI initiative backed by the current U.S. administration.
OpenAI’s Stargate project—a $500B plan to build global AI infrastructure—is facing delays in the U.S. due to rising tariffs and economic uncertainty. While the first phase in Texas slows, OpenAI is shifting focus internationally with “OpenAI for Countries,” a new initiative to co-build sovereign AI data centers worldwide. Backed by Oracle and SoftBank, Stargate is designed to support massive AI workloads and reshape global compute power distribution.
SoftBank has launched the Large Telecom Model (LTM), a domain-specific, AI-powered foundation model built to automate telecom network operations. From base station optimization to RAN performance enhancement, LTM enables real-time decision-making across large-scale mobile networks. Developed with NVIDIA and trained on SoftBank’s operational data, the model supports rapid configuration, predictive insights, and integration with SoftBank’s AITRAS orchestration platform. LTM marks a major step in SoftBank’s AI-first strategy to build autonomous, scalable, and intelligent telecom infrastructure.
SoftBank and Fujitsu are joining forces to advance the commercialization of AI-RAN, integrating AI with Radio Access Networks to enhance communication performance and efficiency. Targeted for deployment by 2026, this collaboration focuses on R&D, vRAN software development, and AI-driven optimization of mobile networks, with trials underway and a dedicated verification lab set to open in Dallas.

Whitepaper
Telecom networks are facing unprecedented complexity with 5G, IoT, and cloud services. Traditional service assurance methods are becoming obsolete, making AI-driven, real-time analytics essential for competitive advantage. This independent industry whitepaper explores how DPUs, GPUs, and Generative AI (GenAI) are enabling predictive automation, reducing operational costs, and improving service quality....
Whitepaper
Explore how Generative AI is transforming telecom infrastructure by solving critical industry challenges like massive data management, network optimization, and personalized customer experiences. This whitepaper offers in-depth insights into AI and Gen AI's role in boosting operational efficiency while ensuring security and regulatory compliance. Telecom operators can harness these AI-driven...
Article & Insights
This article explores the deployment of 5G NR Transparent Non-Terrestrial Networks (NTNs), detailing the architecture's advantages and challenges. It highlights how this "bent-pipe" NTN approach integrates ground-based gNodeB components with NGSO satellite constellations to expand global connectivity. Key challenges like moving beam management, interference mitigation, and latency are discussed, underscoring...