AMD

HUMAIN, a Saudi PIF-backed AI company, introduced Horizon Pro, an "agentic AI" PC built on Qualcomm's Snapdragon X Elite, positioning it as a new class of Windows laptop where on-device AI drives workflows, decisions, and user interaction. At Qualcomm's Snapdragon Summit in Maui, HUMAIN CEO Tareq Amin unveiled the Horizon Pro PC and the company's agentic software layer, Humain One, which runs on top of Windows 11 and is slated for formal launch at the Future Investment Initiative in Riyadh.
New analysis from Bain & Company puts a stark number on AI's economics: by 2030 the industry may face an $800 billion annual revenue shortfall against what it needs to fund compute growth. Bain estimates AI providers will require roughly $2 trillion in yearly revenue by 2030 to sustain data center capex, energy, and supply chain costs, yet current monetization trajectories leave a large gap. The report projects global incremental AI compute demand could reach 200 GW by 2030, colliding with grid interconnect queues, multiyear lead times for transformers, and rising energy prices.
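The arithmetic implied by Bain's figures can be sketched in a few lines. The headline numbers ($2 trillion required, $800 billion shortfall, 200 GW of incremental demand) come from the article; the derived quantities below are simple implications, not Bain statements.

```python
# Back-of-envelope check of Bain's 2030 projections. Input figures are from
# the article; the derived values are illustrative arithmetic only.
required_revenue_2030 = 2.0e12   # ~$2T in annual AI revenue needed by 2030
projected_shortfall = 0.8e12     # ~$800B projected annual gap
incremental_demand_gw = 200      # ~200 GW of incremental AI compute demand

# Revenue the industry is implicitly on track to monetize.
implied_monetized = required_revenue_2030 - projected_shortfall

# Annual revenue needed to support each gigawatt of new compute.
revenue_per_gw = required_revenue_2030 / incremental_demand_gw

print(f"Implied monetized revenue: ${implied_monetized / 1e12:.1f}T/yr")
print(f"Revenue needed per GW of new compute: ${revenue_per_gw / 1e9:.0f}B/yr")
```

On these assumptions, every gigawatt of new AI capacity would need to generate roughly $10 billion per year to pencil out, which is the crux of the monetization gap Bain describes.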
The CPU roadmap is strategically important because AI clusters depend on balanced CPU-GPU ratios and fast data pipelines that keep accelerators fed and utilized. Even as GPUs carry training and inference, CPUs govern input pipelines, feature engineering, storage I/O, service meshes, and the containerized microservices that wrap models in production. More cores and threads at competitive power envelopes reduce bottlenecks around feeder tasks, scheduling, and data staging, improving accelerator utilization and lowering total cost per token or inference. Viewed through this lens, a 256-core Arm-based Kunpeng in 2028 would directly affect how much AI throughput Ascend accelerators can sustain per rack.
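The feeder-bottleneck argument can be made concrete with a toy model: if the CPU-side pipeline delivers data slower than the accelerator can consume it, utilization is capped and effective cost per token rises proportionally. The throughput numbers below are hypothetical, chosen only to illustrate the relationship.

```python
def accel_utilization(feeder_tokens_per_s: float, accel_tokens_per_s: float) -> float:
    """Toy model: accelerator utilization is capped by the CPU-side feeder rate."""
    return min(1.0, feeder_tokens_per_s / accel_tokens_per_s)

# Hypothetical figures: the accelerator can consume 1.0M tokens/s,
# but the CPU pipeline (I/O, preprocessing, staging) feeds only 0.7M tokens/s.
u = accel_utilization(0.7e6, 1.0e6)           # 0.7 -> 30% of capacity sits idle
cost_multiplier = 1.0 / u                     # effective cost per token rises ~1.43x

print(f"Utilization: {u:.0%}, cost-per-token multiplier: {cost_multiplier:.2f}x")
```

The same model shows why more CPU cores per rack matter: raising feeder throughput to match accelerator demand drives utilization back toward 100% and the cost multiplier back toward 1.0.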
OpenAI and NVIDIA unveiled a multi-year plan to deploy 10 gigawatts of NVIDIA systems, marking one of the largest single commitments to AI compute to date. The partners outlined an ambition to stand up AI "factories" totaling roughly 10GW of power, equating to several million GPUs across multiple sites and phases as capacity and supply chains mature. NVIDIA plans to invest up to $100 billion in OpenAI, with tranches released as milestones are met; the first $10 billion aligns to completion of the initial 1GW. The first waves will use NVIDIA's next-generation Vera Rubin systems beginning in the second half of 2026.
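The milestone structure can be sketched numerically. Only the first tranche ($10 billion at completion of the initial 1GW) is confirmed in the announcement; an even $10 billion-per-gigawatt schedule for the remaining capacity is an assumption for illustration.

```python
# Sketch of the milestone-based investment schedule. Only the first tranche
# is confirmed; the even per-GW split for the rest is assumed for illustration.
TOTAL_COMMITMENT = 100e9   # up to $100B from NVIDIA
GW_TARGET = 10             # ~10GW of planned capacity
first_tranche = 10e9       # confirmed: released at completion of the first 1GW

assumed_tranches = [first_tranche] * GW_TARGET  # hypothetical even schedule

print(f"Assumed schedule: {len(assumed_tranches)} tranches of "
      f"${first_tranche / 1e9:.0f}B, totaling ${sum(assumed_tranches) / 1e9:.0f}B")
```

Under that assumption, the full $100 billion would be released over ten 1GW completions; the actual tranche sizes beyond the first have not been disclosed.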
Gartner's latest outlook points to global AI spend hitting roughly $1.5 trillion in 2025 and exceeding $2 trillion in 2026, signaling a multi-year investment cycle that will reshape infrastructure, devices, and networks. This is not a short-lived hype curve; it is a capital plan. Hyperscalers are pouring money into data centers built around AI-optimized servers and accelerators, while device makers push on-device AI into smartphones and PCs at scale. For telecom and enterprise IT leaders, the message is clear: capacity, latency, and data gravity will dictate where value lands. Spending is broad-based. AI services and software are growing fast, but the heavy lift is in hardware and cloud infrastructure.
SoftBank will invest $2 billion in Intel, taking roughly a 2% stake at $23 per share and becoming one of Intel's largest shareholders. It is a financial vote of confidence in a company trying to reestablish process leadership, scale a foundry business, and convince marquee customers to commit to external wafer orders. SoftBank has been assembling an AI supply-chain franchise that spans IP, compute, and infrastructure. It owns Arm, agreed to acquire Arm server CPU designer Ampere Computing, injected massive capital into OpenAI, and aligned with Oracle under the Stargate hyperscale AI initiative backed by the current U.S. administration.
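The reported stake math is easy to sanity-check: a $2 billion purchase at $23 per share implies the share count, and a roughly 2% stake implies Intel's total equity value. The figures are from the article; the derived values are rounded implications, not reported numbers.

```python
# Consistency check of the reported stake figures (inputs from the article).
investment = 2e9        # SoftBank's $2B investment
share_price = 23.0      # reported price per share
stake_fraction = 0.02   # "roughly 2%"

shares_bought = investment / share_price            # ~87M shares
implied_equity_value = investment / stake_fraction  # ~$100B total equity value

print(f"~{shares_bought / 1e6:.0f}M shares; "
      f"implied equity value ~${implied_equity_value / 1e9:.0f}B")
```

The implied ~$100 billion equity value is consistent with a "roughly 2%" stake; the exact percentage would depend on Intel's actual share count at closing.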
NVIDIA and AMD will launch AI chips in China by July 2025, including the B20 and Radeon AI PRO R9700, tailored to comply with U.S. export rules. With performance capped under regulatory thresholds, these GPUs aim to support China's enterprise AI needs without violating tech trade restrictions. NVIDIA is also rolling out a lower-cost chip based on the Blackwell architecture, signaling a shift toward compliant yet capable AI compute options in restricted markets.
Indian telecom companies such as Jio and Airtel are moving beyond internal AI use cases to co-develop monetizable, India-focused AI applications in partnership with tech giants like Google, Nvidia, Cisco, and AMD. These collaborations are enabling sector-specific AI tools across healthcare, education, and agriculture, boosting operational efficiency, customer experience, and creating new revenue streams for telecom operators.
AMD and Rapt AI are partnering to improve AI workload efficiency across AMD Instinct GPUs, including MI300X and MI350. By integrating Rapt AI’s intelligent workload automation tools, the collaboration aims to optimize GPU performance, reduce costs, and streamline AI training and inference deployment. This partnership positions AMD as a stronger competitor to Nvidia in the high-performance AI GPU market while offering businesses better scalability and resource utilization.
Fujitsu and AMD have signed a strategic partnership to develop sustainable AI and high-performance computing (HPC) platforms. This collaboration will combine AMD's advanced GPU technology with Fujitsu's low-power, high-performance processors, including the FUJITSU-MONAKA. Together, the companies aim to support open-source AI initiatives, promote energy-efficient computing, and expand the AI ecosystem globally, providing a sustainable computing infrastructure for a range of industries and cloud service providers.

