Intel detailed its first client and server products on the new 18A process, positioning the company for AI PCs and power-efficient cloud at a time when onshore manufacturing and TCO matter more than ever. Intel previewed Core Ultra series 3 "Panther Lake," its first client SoC line on 18A, with a multi-chiplet design that blends new performance and efficient cores with an upgraded Arc GPU and dedicated AI acceleration across the CPU, GPU, and NPU. On the server side, Intel previewed "Clearwater Forest," branded Xeon 6+, its next-gen E-core product built on 18A and targeted for launch in the first half of 2026.
OpenAI has acquired Roi, a New York-based personal finance startup founded in 2022 that built an AI companion to aggregate and advise on a user's full financial footprint across stocks, crypto, DeFi, real estate, and NFTs. The move extends a year of acqui-hires at OpenAI, following Context.ai, Crossing Minds, and Alex. Personalization is becoming the moat for AI consumer products. Models are converging in capability, so durable advantage shifts to data, context, and engagement design. OpenAI's Roi acqui-hire is less about a finance app and more about owning the personalization layer across consumer AI.
Fujitsu is expanding its strategic collaboration with NVIDIA to deliver a full-stack AI infrastructure that pairs domain-specific AI agents with high-performance compute for enterprise and industrial use. The companies will co-develop an AI agent platform and a next-generation computing stack that tightly couples Fujitsu's FUJITSU-MONAKA CPU series with NVIDIA GPUs using NVIDIA NVLink-Fusion. On the software side, Fujitsu plans to integrate its Kozuchi platform and AI workload orchestrator (built with Fujitsu AI computing broker technology) with the NVIDIA Dynamo platform.
OpenAI is reportedly preparing a standalone app for its next-gen video model, positioning AI-only short video as a consumer format in its own right. The app reportedly delivers a vertical feed with swipe navigation, reactions, and remixing: familiar mechanics that lower friction for discovery and creation. Every clip is generated by Sora 2 rather than uploaded, with current limits around 10 seconds per video. A recommendation engine powers a personalized "For You" experience, aligning with how short-form attention is won and retained today. A notable feature is identity verification tied to likeness usage. Expect provenance signals and watermarking frameworks (for example, C2PA-style manifests) to become table stakes for platforms that remix human likeness at scale.
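For readers unfamiliar with what a provenance manifest actually carries, here is a deliberately simplified, hypothetical sketch in Python. Real C2PA manifests are cryptographically signed claims bound to the asset itself, and nothing below reflects OpenAI's actual implementation; field names are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(video_bytes: bytes, model: str, likeness_consent_id: str | None) -> dict:
    """Simplified, C2PA-inspired provenance record for a generated clip (illustrative only)."""
    return {
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),  # binds the record to the exact asset
        "generator": model,                                       # which model produced the clip
        "created_at": datetime.now(timezone.utc).isoformat(),
        "likeness_consent_id": likeness_consent_id,               # verified-identity grant, if any
        "edit_chain": [],                                         # downstream remixes would append entries here
    }

manifest = build_provenance_manifest(b"<clip bytes>", model="sora-2", likeness_consent_id=None)
print(json.dumps(manifest, indent=2))
```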
South Korea is funding a national AI stack to reduce dependence on foreign models, protect data, and tune AI to its language and industries. The government has committed ₩530 billion (about $390 million) to five companies building large-scale foundation models: LG AI Research, SK Telecom, Naver Cloud, NC AI, and Upstage. Progress will be reviewed every six months, with underperformers cut and resources concentrated on the strongest until two leaders remain. The policy goal is clear: build world-class, Korean-first AI capability that supports national security, economic competitiveness, and data sovereignty. For telecoms and enterprise IT, this is a shift from "consume global models" to "operate domestic AI platforms" integrated with local data, compliance, and services.
AI now depends as much on the network and interconnection layer as it does on GPUs, and a new blueprint from Zayo and Equinix turns that reality into a repeatable design. Training has concentrated in a few massive regions, while inference is exploding at the edge and in enterprise colocation sites, creating a scale challenge the industry hasn't codified until now. The two companies are proposing a common model that aligns high-capacity transport, neutral interconnection hubs, and specialized training and inference data centers. The aim is to shorten time to market for AI services by providing reference designs that reduce trial-and-error across L1–L3, interconnection, and traffic engineering.
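The reference designs themselves are not public at this level of detail, but the placement logic they encode can be sketched. The heuristic and thresholds below are assumptions for illustration, not the Zayo/Equinix blueprint:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str                 # "training" or "inference"
    latency_budget_ms: float  # end-to-end budget to the user or data source

def place(w: Workload) -> str:
    """Map a workload onto a tier of the transport-plus-interconnection model (assumed heuristic)."""
    if w.kind == "training":
        return "hyperscale training region (bulk long-haul transport, loose latency budget)"
    if w.latency_budget_ms < 20:  # assumed threshold for interactive inference
        return "edge / metro colocation behind a neutral interconnection hub"
    return "regional inference data center reached over the high-capacity backbone"

for w in (Workload("frontier pretraining", "training", 500.0),
          Workload("real-time voice agent", "inference", 15.0),
          Workload("batch document summarization", "inference", 200.0)):
    print(f"{w.name}: {place(w)}")
```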
Databricks is adding OpenAI's newest foundation models to its catalog for use via SQL or API, alongside previously introduced open-weight options gpt-oss 20B and 120B. Customers can now select, benchmark, and fine-tune OpenAI models directly where governed enterprise data already lives. The move raises the stakes in the race to make generative AI a first-class, governed workload inside data platforms rather than an external service tethered by integration and compliance gaps. For telecom and enterprise IT, it reduces friction for AI agents that must safely traverse customer, network, and operational data domains.
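As a rough sketch of what "use via API" can look like, the snippet below calls a Databricks model serving endpoint through the OpenAI-compatible client interface. The workspace host, token, and endpoint name are placeholders, and the actual endpoint names for the newly added OpenAI models may differ:

```python
from openai import OpenAI

# Databricks serving endpoints expose an OpenAI-compatible API surface;
# host, token, and endpoint name below are placeholders.
client = OpenAI(
    api_key="<DATABRICKS_PERSONAL_ACCESS_TOKEN>",
    base_url="https://<your-workspace-host>/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-gpt-oss-120b",  # hypothetical endpoint name for one of the open-weight options
    messages=[{"role": "user", "content": "Summarize churn drivers for prepaid mobile customers."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```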
Wayve's end-to-end driving AI is now running in Nissan Ariya electric vehicles in Tokyo, marking a pragmatic step toward consumer deployment in 2027. The test vehicles combine a camera-first approach with radar and a lidar unit for redundancy, aligning with Japan's dense urban environment and complex traffic patterns. The initial commercial target is "eyes on, hands off" Level 2 driver assistance, with drivers remaining responsible and ready to take over. Nvidia has signed a letter of intent for a potential $500 million investment in Wayve's next funding round, reinforcing the compute-intensive nature of the program.
OpenAI plans five new US data centers under the Stargate umbrella, pushing the initiative's planned capacity to nearly 7 gigawatts, roughly equivalent to several utility-scale power plants. Three sites (Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location) will be developed with Oracle following their previously disclosed agreement to add up to 4.5 GW of US capacity on top of the Abilene, Texas flagship. Two additional sites, in Lordstown, Ohio and Milam County, Texas, will be developed with SB Energy, SoftBank's renewables and storage arm. OpenAI also expects to expand Abilene by approximately 600 MW, with the broader program claiming tens of thousands of onsite construction jobs, though ongoing operations will need far fewer staff once live.
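A quick back-of-envelope tally of the figures above; the remainder is an implied number derived here for illustration, not a disclosed site-level breakdown:

```python
total_planned_gw     = 7.0   # "nearly 7 gigawatts" of planned Stargate capacity
oracle_agreement_gw  = 4.5   # up to 4.5 GW covered by the Oracle agreement
abilene_expansion_gw = 0.6   # ~600 MW planned Abilene expansion

remainder_gw = total_planned_gw - oracle_agreement_gw - abilene_expansion_gw
print(f"Implied remainder (existing Abilene flagship plus SB Energy sites): ~{remainder_gw:.1f} GW")
```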
Google Labs has launched Mixboard, an AI-powered concepting board that turns text prompts and images into editable visual mood boards now available in U.S. public beta. Mixboard gives users an open canvas to generate, arrange, and iterate on visual ideas, from home decor and event themes to product inspiration and DIY projects. You can start from a text prompt or prebuilt boards, pull in your own images, create new visuals with generative AI, and refine them using natural-language edits. Mixboard signals how fast multimodal AI is moving from chat to visual ideation, with implications for search, commerce, and collaborative workflows.
New analysis from Bain & Company puts a stark number on AI's economics: by 2030 the industry may face an $800 billion annual revenue shortfall against what it needs to fund compute growth. Bain estimates AI providers will require roughly $2 trillion in yearly revenue by 2030 to sustain data center capex, energy, and supply chain costs, yet current monetization trajectories leave a large gap. The report projects global incremental AI compute demand could reach 200 GW by 2030, colliding with grid interconnect queues, multiyear lead times for transformers, and rising energy prices.
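The size of the gap can be read directly off the two headline figures. A minimal arithmetic check, using only the numbers Bain cites:

```python
required_revenue_2030 = 2.0e12  # ~$2 trillion per year needed to fund the compute buildout
projected_shortfall   = 0.8e12  # ~$800 billion per year gap Bain projects

implied_trajectory = required_revenue_2030 - projected_shortfall
growth_multiple    = required_revenue_2030 / implied_trajectory

print(f"Implied revenue on current trajectory: ~${implied_trajectory / 1e12:.1f}T per year")
print(f"Monetization would need to grow ~{growth_multiple:.2f}x to close the gap")
```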
Lumen is accelerating a multi-year, multi-billion-dollar expansion of its U.S. backbone to match the explosive rise of AI-driven traffic. The company plans to add 34 million new intercity fiber miles by the end of 2028, targeting a total of 47 million intercity fiber miles. In 2025, Lumen has already added more than 2.2 million intercity fiber miles across 2,500+ route miles, with a year-end target of 16.6 million intercity fiber miles. Network capacity grew by 5.9+ Pbps year-to-date, and Lumen earmarked more than $100 million to push 400Gbps connectivity across clouds, data centers, and metros, now covering over 100,000 route miles with 400G-enabled transport.
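The fiber-mile figures imply a baseline that is easy to back out. A small consistency check on the numbers above; the existing base is an implied value, not one Lumen states:

```python
target_total_2028_m = 47.0  # million intercity fiber miles targeted by end of 2028
new_miles_planned_m = 34.0  # million new intercity fiber miles to be added
implied_base_m = target_total_2028_m - new_miles_planned_m  # ~13M existing miles (implied)

target_total_2025_m  = 16.6  # year-end 2025 target, million intercity fiber miles
added_2025_to_date_m = 2.2   # added so far in 2025

remaining_2025_m = target_total_2025_m - (implied_base_m + added_2025_to_date_m)
print(f"Implied existing base: ~{implied_base_m:.0f}M intercity fiber miles")
print(f"Left to add in 2025 to hit the year-end target: ~{remaining_2025_m:.1f}M miles")
```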

