Qualcomm is acquiring Arduino to anchor an end-to-end developer funnel from hobbyist prototypes to commercial robots and industrial IoT systems. As part of the announcement, Arduino introduced the Uno Q, a new board priced around $45–$55 and built on Qualcomm’s Dragonwing QRB2210 processor; the board runs Linux alongside Arduino tooling and supports vision workloads. By meeting developers at the prototyping bench and offering an upgrade path to production-grade SoCs and modules, Qualcomm aims to convert experimentation into long-term silicon design wins. The Arduino tie-up broadens access to Qualcomm compute for small teams while reinforcing an ecosystem play that spans on-device AI, connectivity, and lifecycle operations at the edge.
Fujitsu is expanding its strategic collaboration with NVIDIA to deliver a full-stack AI infrastructure that pairs domain-specific AI agents with high-performance compute for enterprise and industrial use. The companies will co-develop an AI agent platform and a next-generation computing stack that tightly couples Fujitsu’s FUJITSU-MONAKA CPU series with NVIDIA GPUs using NVIDIA NVLink Fusion. On the software side, Fujitsu plans to integrate its Kozuchi platform and AI workload orchestrator (built with Fujitsu AI computing broker technology) with the NVIDIA Dynamo platform.
The Bethpage Black Ryder Cup turned a 1,500‑acre golf course into a pop-up smart city, giving HPE a high-stakes stage to showcase end-to-end AI, networking, and edge operations at scale. Golf is a network planner’s stress test: fans are constantly moving, crowd density swings hole-to-hole, and the venue is built from scratch for a few intense days. More than 250,000 spectators demanded seamless connectivity, broadcast-grade reliability, and instant digital services. This environment forced an enterprise-grade blueprint: fast deployment, elastic capacity, airtight security, and automated operations, mirroring the requirements of modern campuses, arenas, and industrial sites.
Two narratives are converging: Silicon Valley’s rush to add gigawatts of AI capacity and a quiet revival of bunkers, mines, and mountains as ultra-resilient data hubs. Recent headlines point to unprecedented AI infrastructure spending tied to OpenAI. The draw of these hardened sites is physical security, thermal stability, data sovereignty, and a narrative of longevity in an era of rising outages and cyber‑physical risks. Geopolitics, regulation, and the escalating impact of outages are reshaping site selection and architectural choices, even as the AI build‑out collides with grid interconnection queues, water scarcity, and rising scrutiny of carbon and noise. The practical guidance for operators: set hard thresholds on PUE and WUE, and require real‑time telemetry and third‑party assurance.
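For reference, PUE is total facility energy divided by IT equipment energy, and WUE is liters of water consumed per kWh of IT energy. The sketch below shows what enforcing such thresholds against telemetry might look like; the limit values and the telemetry figures are illustrative assumptions, not numbers from any operator.

```python
# Illustrative threshold check on facility efficiency telemetry.
# PUE/WUE follow the standard Green Grid definitions; the 1.3 PUE and
# 0.5 L/kWh WUE limits below are hypothetical examples, not guidance.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy over IT energy (>= 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

# Hypothetical annualized telemetry for one site.
telemetry = {"total_kwh": 130e6, "it_kwh": 100e6, "water_l": 40e6}

site_pue = pue(telemetry["total_kwh"], telemetry["it_kwh"])  # 1.30
site_wue = wue(telemetry["water_l"], telemetry["it_kwh"])    # 0.40 L/kWh

assert site_pue <= 1.3, f"PUE {site_pue:.2f} breaches threshold"
assert site_wue <= 0.5, f"WUE {site_wue:.2f} L/kWh breaches threshold"
print(f"PUE={site_pue:.2f}, WUE={site_wue:.2f} L/kWh: within thresholds")
```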
Hitachi has launched a global AI Factory built on NVIDIA’s reference architecture to speed the development and deployment of “physical AI” spanning mobility, energy, industrial, and technology domains. Hitachi is standardizing a centralized yet globally distributed AI infrastructure on NVIDIA’s full-stack platform, pairing Hitachi iQ systems with NVIDIA HGX B200 platforms powered by Blackwell GPUs, Hitachi iQ M Series with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and the NVIDIA Spectrum-X Ethernet AI networking platform. The environment is designed to run production AI with NVIDIA AI Enterprise and support simulation and physically accurate digital twins using NVIDIA Omniverse libraries.
Databricks is adding OpenAI’s newest foundation models to its catalog for use via SQL or API, alongside previously introduced open-weight options gpt-oss 20B and 120B. Customers can now select, benchmark, and fine-tune OpenAI models directly where governed enterprise data already lives. The move raises the stakes in the race to make generative AI a first-class, governed workload inside data platforms rather than an external service tethered by integration and compliance gaps. For telecom and enterprise IT, it reduces friction for AI agents that must safely traverse customer, network, and operational data domains.
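For the API path, Databricks serving endpoints can be reached with an OpenAI-compatible client, so a call might look like the hedged sketch below; the workspace host, token, and endpoint name are placeholders assumed for illustration, not details from the announcement.

```python
# Hedged sketch: calling a catalog-hosted model via a serving endpoint
# that speaks the OpenAI chat-completions protocol. All identifiers in
# angle brackets, and the model name, are placeholders/assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="<DATABRICKS_TOKEN>",                           # workspace token (placeholder)
    base_url="https://<workspace-host>/serving-endpoints",  # assumed endpoint base
)

resp = client.chat.completions.create(
    model="databricks-gpt-oss-120b",  # hypothetical endpoint name for the open-weight model
    messages=[{"role": "user", "content": "Summarize last quarter's churn drivers."}],
)
print(resp.choices[0].message.content)
```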
Wayve’s end-to-end driving AI is now running in Nissan Ariya electric vehicles in Tokyo, marking a pragmatic step toward consumer deployment in 2027. The test vehicles combine a camera-first approach with radar and a lidar unit for redundancy, aligning with Japan’s dense urban environment and complex traffic patterns. The initial commercial target is “eyes on, hands off” Level 2 driver assistance, with drivers remaining responsible and ready to take over. Nvidia has signed a letter of intent for a potential $500 million investment in Wayve’s next funding round, reinforcing the compute-intensive nature of the program.
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative’s planned capacity to nearly 7 gigawatts—roughly equivalent to several utility-scale power plants. Three sites—Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location—will be developed with Oracle following their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites in Lordstown, Ohio and Milam County, Texas will be developed with SB Energy, SoftBank’s renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW, with the broader program claiming tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.
Alibaba Cloud is integrating Nvidia’s Physical AI toolchain into its Cloud Platform for AI, bringing robotics-grade simulation, training, and deployment capabilities to customers. Alibaba and Nvidia unveiled a partnership that embeds Nvidia’s embodied AI development tools directly into Alibaba’s machine learning platform. The integration targets robotics, autonomous driving, and “connected spaces” such as warehouses and factories. Physical AI refers to software that models the real world in 3D, generates synthetic data, and trains control policies with reinforcement learning before deploying to physical systems. Developers on Alibaba Cloud gain access to toolchains for data processing, simulation-based training, and real-world reinforcement learning.
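As a deliberately toy illustration of that simulate-then-deploy loop, the sketch below fits a two-gain linear controller to a synthetic 1-D cart model by random search; real physical-AI pipelines use simulation engines and far richer policies, so treat this as the shape of the workflow, not its substance.

```python
# Toy "physical AI" loop: train a control policy against a synthetic
# model of the world, keeping only perturbations that improve reward.
import numpy as np

rng = np.random.default_rng(0)

def rollout(gains, steps=200, dt=0.05):
    """Simulate a 1-D cart (position x, velocity v) under the linear
    policy u = -k1*x - k2*v; return negative quadratic cost as reward."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -gains[0] * x - gains[1] * v   # policy action
        v += u * dt                        # toy dynamics update
        x += v * dt
        cost += x * x + 0.1 * u * u        # penalize error and effort
    return -cost

# Random-search training in simulation: the stand-in for RL here.
gains, best = np.zeros(2), rollout(np.zeros(2))
for _ in range(500):
    candidate = gains + rng.normal(scale=0.1, size=2)
    reward = rollout(candidate)
    if reward > best:
        gains, best = candidate, reward

print(f"learned gains {gains.round(2)}, sim reward {best:.1f}")
# A real pipeline would now validate on hardware before deployment.
```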
New analysis from Bain & Company puts a stark number on AI’s economics: by 2030 the industry may face an $800 billion annual revenue shortfall against what it needs to fund compute growth. Bain estimates AI providers will require roughly $2 trillion in yearly revenue by 2030 to sustain data center capex, energy, and supply chain costs, yet current monetization trajectories leave a large gap. The report projects global incremental AI compute demand could reach 200 GW by 2030, colliding with grid interconnect queues, multiyear lead times for transformers, and rising energy prices.
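Taking the report’s two headline figures together gives a quick back-of-envelope for the implied monetization trajectory; the only arithmetic here is subtraction of Bain’s own numbers.

```python
# Back-of-envelope from the Bain figures cited above.
required_2030 = 2_000e9   # ~$2T in yearly revenue needed by 2030
shortfall = 800e9         # ~$800B projected annual gap
implied_trajectory = required_2030 - shortfall
print(f"Implied 2030 revenue trajectory: ~${implied_trajectory / 1e12:.1f}T")  # ~$1.2T
```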
The CPU roadmap is strategically important because AI clusters depend on balanced CPU-GPU ratios and fast data pipelines that keep accelerators fed and utilized. Even as GPUs carry training and inference, CPUs govern input pipelines, feature engineering, storage I/O, service meshes, and containerized microservices that wrap models in production. More cores and threads at competitive power envelopes reduce bottlenecks around feeder tasks, scheduling, and data staging, improving accelerator utilization and lowering total cost per token or inference. Viewed through this lens, a 256-core Arm-based Kunpeng in 2028 would directly affect how much AI throughput Ascend accelerators can sustain per rack.
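A minimal sketch of that feeder dynamic appears below, with Python threads standing in for CPU cores and a consumer loop standing in for the accelerator; worker counts and sleep times are illustrative, not tuning guidance.

```python
# CPU "feeder" threads stage preprocessed batches into a bounded queue
# so the (simulated) accelerator never idles waiting on I/O.
import queue
import threading
import time

staged = queue.Queue(maxsize=8)  # bounded prefetch buffer

def cpu_feeder(worker_id: int, n_batches: int) -> None:
    """Simulate storage I/O plus feature engineering on a CPU core."""
    for i in range(n_batches):
        time.sleep(0.01)                     # pretend decode/transform cost
        staged.put(f"batch-{worker_id}-{i}")
    staged.put(None)                         # per-worker end-of-stream marker

NUM_WORKERS = 4
workers = [threading.Thread(target=cpu_feeder, args=(w, 5)) for w in range(NUM_WORKERS)]
for t in workers:
    t.start()

finished = 0
while finished < NUM_WORKERS:  # the "GPU" consumes as fast as data arrives
    batch = staged.get()
    if batch is None:
        finished += 1
        continue
    time.sleep(0.002)          # pretend accelerator step (fast vs. feeders)

for t in workers:
    t.join()
print("all batches consumed; adding feeder threads raises accelerator utilization")
```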
OpenAI and NVIDIA unveiled a multi‑year plan to deploy 10 gigawatts of NVIDIA systems, marking one of the largest single commitments to AI compute to date. The partners outlined an ambition to stand up AI “factories” totaling roughly 10 GW of power, equating to several million GPUs across multiple sites and phases as capacity and supply chains mature. NVIDIA plans to invest up to $100 billion in OpenAI, with tranches released as milestones are met; the first $10 billion is tied to completion of the initial 1 GW. The first waves will use NVIDIA’s next‑generation Vera Rubin systems beginning in the second half of 2026.
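A rough sanity check on “several million GPUs” follows; the per-GPU power figures are assumptions for an all-in draw (accelerator plus its share of CPU, networking, and cooling), not numbers from the announcement.

```python
# Back-of-envelope: GPUs supportable by 10 GW at assumed all-in draws.
total_w = 10e9  # 10 GW program target
for per_gpu_w in (1_500, 2_000, 2_500):  # assumed watts per GPU, all-in
    print(f"{per_gpu_w / 1000:.1f} kW/GPU -> ~{total_w / per_gpu_w / 1e6:.1f}M GPUs")
# 6.7M / 5.0M / 4.0M: consistent with "several million GPUs"
```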
