Oracle

SoftBank has exited Nvidia and is redirecting billions into AI platforms and infrastructure, signaling where it believes the next phase of value will concentrate. SoftBank sold its remaining 32.1 million Nvidia shares in October for approximately $5.83 billion, and also disclosed a separate $9.17 billion sale of T-Mobile US shares as part of a broader reallocation into artificial intelligence. The proceeds are earmarked for a significant expansion of SoftBank's AI portfolio, including a major investment in OpenAI and potential participation in "Stargate," a next-generation AI data center initiative co-developed by OpenAI and Oracle. Despite exiting Nvidia's equity, SoftBank retains about 90% ownership of Arm.
Anthropic will spend $50 billion on U.S.-based AI data centers, signaling a rapid new phase for domestic compute capacity with direct consequences for power, fiber, and cloud interconnects. Anthropic plans a multi-year, $50 billion program to develop custom data center campuses in the United States, beginning with Texas and New York and with additional sites to follow. The initial wave targets 2026 go-lives, with an estimated 800 permanent jobs and roughly 2,400 construction roles tied to the program.
OpenAI has signed a multi-year, $38 billion capacity agreement with Amazon Web Services (AWS) to run and scale its core AI workloads on NVIDIA-based infrastructure, signaling a decisive shift toward a multi-cloud strategy and intensifying the hyperscaler battle for frontier AI. The agreement makes OpenAI a direct AWS customer for large-scale compute, starting immediately on existing AWS data centers and expanding as new infrastructure comes online. AWS and OpenAI target the bulk of new capacity to be deployed by the end of 2026, with headroom to extend into 2027 and beyond.
NEC is moving to scale its cloud and SaaS business support capabilities with a $2.9 billion acquisition of CSG Systems International, positioning Netcracker at the center of the combined telecom monetization play. CSG brings a sizable recurring-revenue portfolio in digital BSS, billing, charging, and customer engagement used by communications, cable, media, and digital service providers, complementing Netcracker's OSS/BSS, orchestration, and service automation strengths. The all-cash deal values CSG at approximately $2.9 billion on an enterprise value basis and has unanimous board approval, with closing targeted for 2026 pending CSG shareholder approval and customary antitrust and other regulatory reviews.
Two narratives are converging: Silicon Valley's rush to add gigawatts of AI capacity and a quiet revival of bunkers, mines, and mountains as ultra-resilient data hubs. Recent headlines point to unprecedented AI infrastructure spending tied to OpenAI. The draw of underground and hardened sites is physical security, thermal stability, data sovereignty, and a narrative of longevity in an era of rising outages and cyber-physical risk. Geopolitics, regulation, and escalating outage impact are reshaping site selection and architectural choices, while the AI build-out collides with grid interconnection queues, water scarcity, and rising scrutiny of carbon and noise. The practical takeaway for operators: set hard thresholds on PUE and WUE, and require real-time telemetry backed by third-party assurance.
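For readers less familiar with the two efficiency metrics mentioned above: PUE (Power Usage Effectiveness) is total facility energy divided by IT equipment energy, and WUE (Water Usage Effectiveness) is site water use per unit of IT energy. A minimal sketch of the kind of threshold check the telemetry would feed; the target values and site figures here are hypothetical, and real thresholds depend on climate, cooling design, and contract terms:

```python
# Illustrative data center efficiency check.
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0)
# WUE = water usage (liters) / IT equipment energy (kWh)

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    return water_liters / it_kwh

# Hypothetical hard thresholds an operator might contract for.
PUE_MAX, WUE_MAX = 1.3, 0.5

# Hypothetical annual telemetry for one site.
site = {"total_kwh": 120_000_000, "it_kwh": 100_000_000, "water_l": 40_000_000}

site_pue = pue(site["total_kwh"], site["it_kwh"])   # 1.2
site_wue = wue(site["water_l"], site["it_kwh"])     # 0.4 L/kWh
compliant = site_pue <= PUE_MAX and site_wue <= WUE_MAX
print(compliant)  # True
```

In practice these figures would come from metered real-time telemetry rather than annual self-reporting, which is why the third-party assurance point matters.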
Databricks is adding OpenAIโ€™s newest foundation models to its catalog for use via SQL or API, alongside previously introduced open-weight options gpt-oss 20B and 120B. Customers can now select, benchmark, and fine-tune OpenAI models directly where governed enterprise data already lives. The move raises the stakes in the race to make generative AI a first-class, governed workload inside data platforms rather than an external service tethered by integration and compliance gaps. For telecom and enterprise IT, it reduces friction for AI agents that must safely traverse customer, network, and operational data domains.
OpenAI introduced ChatGPT Pulse, a new capability that assembles personalized morning briefs and agendas without a prompt, indicating a clear shift from reactive chat to proactive, task-oriented assistance. Pulse generates five to ten concise reports while you sleep, then packages them as interactive cards inside ChatGPT. Each card contains an AI-generated summary with source links, and users can drill down, ask follow-up questions, or request new briefs. Beyond public web content, Pulse can tap ChatGPT Connectors, such as Gmail and Google Calendar, to highlight priority emails, synthesize threads, and build agendas from upcoming events. If ChatGPT memory is enabled, Pulse weaves in user preferences and past context to tailor briefs.
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative's planned capacity to nearly 7 gigawatts, roughly equivalent to several utility-scale power plants. Three sites, in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location, will be developed with Oracle following their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites, in Lordstown, Ohio, and Milam County, Texas, will be developed with SB Energy, SoftBank's renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW, with the broader program claiming tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.
The CPU roadmap is strategically important because AI clusters depend on balanced CPU-GPU ratios and fast data pipelines that keep accelerators fed and utilized. Even as GPUs carry training and inference, CPUs govern input pipelines, feature engineering, storage I/O, service meshes, and containerized microservices that wrap models in production. More cores and threads at competitive power envelopes reduce bottlenecks around feeder tasks, scheduling, and data staging, improving accelerator utilization and lowering total cost per token or inference. In this lens, a 256-core Arm-based Kunpeng in 2028 would directly affect how much AI throughput Ascend accelerators can sustain per rack.
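The claim above, that CPU-side feeder capacity bounds accelerator utilization and thus cost per token, can be illustrated with a toy model. All numbers here are hypothetical and for illustration only; real pipelines have more stages and overlap:

```python
# Toy model: accelerator throughput is capped by how fast CPU-side
# pipelines (storage I/O, preprocessing, scheduling) can feed data.

def utilization(cpu_feed_tps: float, gpu_capacity_tps: float) -> float:
    """Fraction of peak GPU token throughput actually sustained."""
    return min(1.0, cpu_feed_tps / gpu_capacity_tps)

def cost_per_mtoken(rack_cost_per_hr: float,
                    gpu_capacity_tps: float,
                    util: float) -> float:
    """Dollar cost per million tokens at a given utilization."""
    tokens_per_hr = gpu_capacity_tps * util * 3600
    return rack_cost_per_hr / (tokens_per_hr / 1e6)

# Hypothetical rack: 100k tokens/s GPU peak, $400/hr all-in.
# Doubling CPU feed rate (e.g., more cores per rack) lifts utilization
# and cuts cost per token proportionally, until the GPUs saturate.
for feed_tps in (50_000, 100_000, 200_000):
    u = utilization(feed_tps, 100_000)
    print(feed_tps, round(u, 2), round(cost_per_mtoken(400.0, 100_000, u), 2))
```

The model captures the point in the text: past saturation, extra CPU cores buy nothing, but below it, feeder improvements translate directly into cheaper inference, which is why a 256-core Kunpeng part would matter for Ascend rack economics.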
OpenAI and NVIDIA unveiled a multi-year plan to deploy 10 gigawatts of NVIDIA systems, marking one of the largest single commitments to AI compute to date. The partners outlined an ambition to stand up AI "factories" totaling roughly 10 GW of power, equating to several million GPUs across multiple sites and phases as capacity and supply chains mature. NVIDIA plans to invest up to $100 billion in OpenAI, with tranches released as milestones are met; the first $10 billion aligns to completion of the initial 1 GW. The first waves will use NVIDIA's next-generation Vera Rubin systems beginning in the second half of 2026.
Gartner's latest outlook points to global AI spend hitting roughly $1.5 trillion in 2025 and exceeding $2 trillion in 2026, signaling a multi-year investment cycle that will reshape infrastructure, devices, and networks. This is not a short-lived hype curve; it is a capital plan. Hyperscalers are pouring money into data centers built around AI-optimized servers and accelerators, while device makers push on-device AI into smartphones and PCs at scale. For telecom and enterprise IT leaders, the message is clear: capacity, latency, and data gravity will dictate where value lands. Spending is broad-based. AI services and software are growing fast, but the heavy lift is in hardware and cloud infrastructure.
Lumen has introduced Wavelength RapidRoutes, a pre-engineered 100G/400G wavelength service with a 20-day delivery SLA aimed at removing months-long bottlenecks from enterprise and hyperscaler connectivity. The company packages pre-defined, high-demand optical paths as a catalog of ready-to-deploy waves, eliminating custom design cycles on many standard routes. RapidRoutes offers 100G and up to 400G wavelengths on prioritized intercity routes, shifting the customer experience from quote-engineer-build to select-provision-activate on pre-engineered paths. A portal-enabled experience with AI-driven tools and more than 300 automated workflows underpins ordering, change management, and capacity scaling.

