
Nvidia's latest quarter signals that AI infrastructure spending is not cooling and is, in fact, broadening across clouds, sovereigns, and enterprises. Nvidia delivered $57 billion in revenue for the quarter, up more than 60% year over year, with GAAP net income reaching $32 billion; the data center segment accounted for roughly $51.2 billion, dwarfing gaming, pro visualization, and automotive combined. Management guided next-quarter sales to about $65 billion, exceeding consensus by several billion and underscoring that supply remains tight for cloud GPUs even as deployments ramp across hyperscalers, GPU clouds, national AI initiatives, and large enterprises.
Alphabet's Google will spend $40 billion to build three AI-focused data centers in Texas, signaling that power access and grid proximity now define hyperscale strategy more than any single technology feature. The build spans one campus in Armstrong County in the Texas Panhandle and two in Haskell County near Abilene, with investments running through 2027. Google expects the program to create thousands of construction and supplier jobs and hundreds of long-term operations roles, consistent with typical hyperscale staffing patterns. Texas offers relatively low-cost power, faster interconnection timelines, abundant land, and pro-investment policies, making it second only to Virginia in U.S. data center count.
Jeff Bezos is stepping back into day-to-day operations as co-CEO of Project Prometheus, a new AI company reportedly funded with $6.2 billion to build "AI for the physical economy." Project Prometheus will be co-led by Bezos and Vik Bajaj, an operator-scientist with leadership experience at Google X, Verily, and Foresite Labs. Early reports indicate the company is targeting engineering and manufacturing tasks across sectors such as aerospace, automotive, and computing hardware. Headcount is already near 100, drawing researchers from OpenAI, Google DeepMind, and Meta, signaling an aggressive push for top-tier AI talent.
Renewables are emerging as the default option for new AI campuses, but the share that is truly carbon-free around the clock will hinge on siting, storage, and market design. Annual REC matching is no longer sufficient for leading buyers; the bar is shifting toward hourly, 24/7 carbon-free energy matching initiatives. Yet diurnal and seasonal variability limits how much of a site's load can be met by solar and batteries alone, especially in non-sunny regions or during prolonged weather events. Expect mixed portfolios: on-site renewables and batteries, off-site PPAs (solar and wind), emerging long-duration storage, and grid purchases backed by hourly certificates where available.
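The gap between annual REC matching and hourly 24/7 matching can be illustrated with a toy load profile. The numbers below are made-up values for a four-hour window, not data from any operator; they exist only to show why the two metrics diverge.

```python
# Illustrative comparison of annual (volumetric) matching vs hourly 24/7
# carbon-free matching. All figures are invented for this sketch.
load  = [100, 100, 100, 100]   # MWh consumed in each hour
clean = [0, 250, 150, 0]       # MWh of clean generation procured in each hour

# Annual matching: total clean supply is compared to total load,
# regardless of when the clean energy was actually produced.
annual_match = min(sum(clean) / sum(load), 1.0)   # 400/400 -> 1.0 ("100% matched")

# Hourly 24/7 matching: clean energy only counts in the hour it is produced,
# so overnight gaps (hours with zero clean supply) pull the score down.
hourly_match = sum(min(l, c) for l, c in zip(load, clean)) / sum(load)
# per-hour overlap: 0 + 100 + 100 + 0 = 200 -> 200/400 = 0.5

print(annual_match, hourly_match)  # 1.0 0.5
```

The same portfolio that looks fully matched on an annual basis covers only half the load hour by hour, which is why storage and overnight supply dominate the 24/7 discussion.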
SoftBank has exited Nvidia and is redirecting billions into AI platforms and infrastructure, signaling where it believes the next phase of value will concentrate. SoftBank sold its remaining 32.1 million Nvidia shares in October for approximately $5.83 billion, and also disclosed a separate $9.17 billion sale of T-Mobile US shares as part of a broader reallocation into artificial intelligence. The proceeds are earmarked for a significant expansion of SoftBank's AI portfolio, including a major investment in OpenAI and potential participation in "Stargate," a next-generation AI data center initiative co-developed by OpenAI and Oracle. Despite exiting Nvidia's equity, SoftBank retains about 90% ownership of Arm.
A cascade of offers from OpenAI, Google, and Perplexity, amplified by Airtel and Reliance Jio, signals a deliberate push to convert India's scale into durable AI usage, data, and future revenue. With more than 900 million internet users, rock-bottom mobile data prices, and a young, mobile-first population, India offers the world's deepest top-of-funnel for AI adoption. Giving away premium access, such as a year of ChatGPT's low-cost "Go" tier, Jio's bundling of Gemini, or Airtel's tie-up with Perplexity Pro, maximizes trial, habituation, and data collection across diverse languages and contexts. Even a low single-digit conversion rate translates into millions of subscribers, while non-converters still contribute valuable signals that improve models.
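The conversion claim is easy to verify with back-of-envelope funnel math. The reach and conversion rates below are illustrative assumptions, not figures reported by any of the companies; only the 900 million user base comes from the article.

```python
# Back-of-envelope funnel math for India's AI subscription opportunity.
# Rates are illustrative assumptions, not reported figures.
internet_users   = 900_000_000   # India's internet user base (cited above)
free_access_reach = 0.30         # assume bundles/giveaways reach 30% of users
conversion_rate   = 0.02         # assume 2% of reached users eventually pay

reached = int(internet_users * free_access_reach)   # 270,000,000 trial users
paid    = int(reached * conversion_rate)            # 5,400,000 paid subscribers

print(paid)  # 5400000
```

Even with these deliberately conservative inputs, a 2% conversion of a partially reached base yields several million paying subscribers, which is the economics behind giving the product away.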
Google has unveiled next-generation TPU accelerators with up to a 4x performance boost and secured a multiyear Anthropic commitment reportedly worth billions, signaling a new phase in AI infrastructure competition. Google introduced new Tensor Processing Units that deliver roughly four times the performance of prior generations for training and inference of large models. Beyond speed, the design targets better performance-per-watt, a critical lever as AI energy costs surge. Anthropic has secured access to Google Cloud TPU capacity at massive scale, with reports citing availability up to one million TPU chips over the term of the agreement.
SoftBank and OpenAI have formed SB OAI Japan, a jointly owned entity that will commercialize "Crystal intelligence," a bundled enterprise AI offering focused on management and operations in Japan. The venture will combine OpenAI's enterprise-grade models and tooling with localization, integration, and support led by SoftBank in-market. Crystal intelligence is positioned as a turnkey solution that pairs model access with domain-specific implementation, governance, and support. SoftBank plans to deploy the solution across its own group companies, validate outcomes in production, and recycle those learnings back into SB OAI Japan's offerings.
Apple is reportedly nearing a deal to license Google's Gemini for Siri, a move that would reshape assistant architectures and near-term AI roadmaps across devices and networks. Multiple reports indicate Apple is close to licensing a custom version of Google's Gemini model, reportedly at a scale of around 1.2 trillion parameters, for roughly $1 billion per year. The model would power a major Siri upgrade while Apple continues building its own foundation models. The objective is clear: boost Siri's reasoning and task execution in the near term without ceding control over Apple's system-level integrations or search defaults.
OpenAI has signed a multi-year, $38 billion capacity agreement with Amazon Web Services (AWS) to run and scale its core AI workloads on NVIDIA-based infrastructure, signaling a decisive shift toward a multi-cloud strategy and intensifying the hyperscaler battle for frontier AI. The agreement makes OpenAI a direct AWS customer for large-scale compute, starting immediately on existing AWS data centers and expanding as new infrastructure comes online. AWS and OpenAI target the bulk of new capacity to be deployed by the end of 2026, with headroom to extend into 2027 and beyond.
SoftBank has reportedly approved the final $22.5 billion tranche of a planned $30 billion commitment to OpenAI, tied to the AI firm's shift to a conventional for-profit structure and a path to IPO. The investment completes a massive $41 billion financing round for OpenAI that began in April, making it one of the largest private capital raises in tech history. This funding and restructuring signal faster enterprise AI adoption, heavier infrastructure demand, and new platform dynamics that will ripple across networks, cloud, and edge. OpenAI is pushing deeper into enterprise tools, security features, and domain-specific assistants.
Snap has opened its first open-prompt AI image Lens, Imagine, to all U.S. users, signaling a new phase in mainstream generative experiences inside the camera. Imagine Lens lets users write a short prompt and instantly transform a selfie or create an image from scratch, then share it in chats, Stories, or off-platform. The capability was previously limited to Lens+ and Snapchat Platinum subscribers. Camera-native generative features at social scale change traffic patterns, compute placement, and safety obligations for platforms and networks. Provenance standards such as C2PA content credentials are becoming table stakes for enterprise integrations and advertiser trust.


