Hyundai-NVIDIA Blackwell AI Factory for Mobility and Manufacturing

Image Source: Nvidia

Hyundai Motor Group and NVIDIA are expanding their partnership to build a large-scale "physical AI" stack that fuses autonomous driving, smart factories, and robotics with national-scale infrastructure in Korea.

Scope: 50,000 Blackwell GPUs and $3B Investment

The companies plan to stand up an AI factory built on 50,000 NVIDIA Blackwell GPUs to unify model training, validation, and deployment across vehicles and plants. Backed by an approximately $3 billion public-private investment, the effort includes a Physical AI Application Center, an NVIDIA AI Technology Center, and regional data centers developed in concert with Korea's Ministry of Science and ICT under a new memorandum of understanding. The goal is to accelerate industrial AI adoption while cultivating local AI talent and a broader supplier ecosystem.


Integrated Physical AI Stack: DGX, Omniverse, DRIVE Thor

Hyundai is standardizing on NVIDIA's three-tier compute blueprint: DGX-class infrastructure for at-scale model training; Omniverse and Cosmos on NVIDIA RTX PRO Servers for digital twins and high-fidelity simulation; and NVIDIA DRIVE AGX Thor with the safety-certified DriveOS for in-vehicle and robotic intelligence. This aligns the development pipeline end to end, from data generation and synthetic environments to model validation and real-time inference in production.

Generative AI for Software-Defined Vehicles

Hyundai will use NVIDIA's Nemotron open reasoning models and the NeMo software stack to build proprietary large models that can be updated over the air. Beyond autonomy, these models will underpin in-vehicle assistants, infotainment, and adaptive comfort features so capabilities evolve as data grows, enabling a continuous delivery cadence for user-facing features and safety enhancements.
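
A continuous OTA cadence implies a release gate between training and deployment. The sketch below shows one hedged way such a gate could look; the class fields, metric names, and thresholds are hypothetical illustrations, not Hyundai's or NVIDIA's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    """Candidate model build produced by the training pipeline (hypothetical fields)."""
    version: str
    task: str                      # e.g. "voice-assistant", "comfort-personalization"
    eval_score: float              # aggregate offline evaluation score, 0..1
    safety_regressions: int        # count of failed safety-relevant test cases
    package_size_mb: float

def approve_for_ota(release: ModelRelease,
                    min_eval_score: float = 0.92,
                    max_package_mb: float = 512.0) -> bool:
    """Release gate: only ship OTA updates that improve quality without safety regressions."""
    if release.safety_regressions > 0:
        return False
    if release.eval_score < min_eval_score:
        return False
    if release.package_size_mb > max_package_mb:
        return False
    return True

if __name__ == "__main__":
    candidate = ModelRelease("assistant-2025.11.1", "voice-assistant", 0.94, 0, 310.0)
    print("ship OTA" if approve_for_ota(candidate) else "hold release")
```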

Why a Blackwell-Powered AI Factory Matters Now

The announcement signals how automakers will operationalize AI at fleet scale by tying together simulation, data engineering, safety validation, and on-device compute into a single lifecycle.

From Pilot to Production-Grade Physical AI

Digital twins of factories and driving environments, coupled with AI-driven robotics, are moving from proof of concept to production. Omniverse Enterprise and Isaac Sim enable closed-loop workflows (software- and hardware-in-the-loop testing, discrete event simulation, virtual commissioning) that compress time-to-production and reduce integration risk for new lines, models, and robots.
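
As an illustration of the closed-loop, software-in-the-loop idea, the sketch below estimates line throughput from a simple station model before any change reaches the physical line. It is a generic Python stand-in, not an Isaac Sim or Omniverse workflow; the station parameters and the synchronous-line assumption are illustrative.

```python
import random

def simulated_station(cycle_time_s: float, jitter_s: float) -> float:
    """Digital-twin stand-in for one assembly station: returns the time for one part."""
    return max(0.0, random.gauss(cycle_time_s, jitter_s))

def virtual_commissioning_run(stations: list[tuple[float, float]],
                              parts: int = 500) -> float:
    """Software-in-the-loop check: estimate line throughput (parts/hour) before
    changes reach the floor. Assumes a synchronous line indexed at the slowest station."""
    total_time_s = 0.0
    for _ in range(parts):
        total_time_s += max(simulated_station(c, j) for c, j in stations)
    return parts / (total_time_s / 3600.0)

if __name__ == "__main__":
    random.seed(7)
    line = [(52.0, 2.0), (48.0, 3.5), (55.0, 1.5)]   # (cycle time s, jitter s) per station
    print(f"estimated throughput: {virtual_commissioning_run(line):.1f} parts/hour")
```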

AV Development Goes Simulation-First

Safety and reliability require coverage of rare and long-tail events that are infeasible to capture on roads alone. By generating and replaying vast scenario libraries with Cosmos and validating on Omniverse, Hyundai can iterate faster on perception, planning, and control while maintaining the traceability needed for functional safety and audits.
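
A minimal sketch of what such a replay-and-trace harness could look like, using plain Python rather than Cosmos or Omniverse APIs; the scenario fields, pass criterion, and function names are illustrative assumptions.

```python
import hashlib
import json

def run_scenario(scenario: dict) -> dict:
    """Stand-in for one simulated closed-loop run of the stack under test.
    In practice this would drive the simulator; here it checks a stub criterion."""
    passed = scenario["min_gap_m"] >= scenario["required_gap_m"]
    return {"scenario_id": scenario["id"], "passed": passed}

def replay_library(scenarios: list[dict]) -> list[dict]:
    """Replay a scenario library and attach a content hash to every result so each
    verdict is traceable to the exact scenario definition that produced it."""
    results = []
    for s in scenarios:
        digest = hashlib.sha256(json.dumps(s, sort_keys=True).encode()).hexdigest()[:12]
        outcome = run_scenario(s)
        outcome["scenario_hash"] = digest
        results.append(outcome)
    return results

if __name__ == "__main__":
    library = [
        {"id": "cut-in-heavy-rain", "min_gap_m": 8.4, "required_gap_m": 5.0},
        {"id": "pedestrian-occluded", "min_gap_m": 1.2, "required_gap_m": 2.0},
    ]
    for result in replay_library(library):
        print(result)
```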

Korea's Sovereign AI Strategy for Industry

The AI centers and data infrastructure create a national capability for physical AI, anchored in Korea's manufacturing base. This supports data residency, IP protection, and talent development while giving domestic industries preferential access to high-performance AI compute and tooling.

Telecom and Edge Implications of the AI Factory

The AI factory approach reshapes requirements for networks, edge compute, and data platforms that must connect vehicles, plants, and cloud resources with deterministic performance.

Data Pipelines for Digital Twins and Validation

High-fidelity twins require continuous ingestion of multimodal data (video, lidar, IMU, PLC telemetry) and fast synchronization with simulation backends. This favors converged IP and time-sensitive networking in plants, alongside high-throughput backhaul and content distribution for global model and map updates. For mobile fleets, 5G Advanced features, sidelink, and future V2X profiles become important to move data efficiently and securely.
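
To make the synchronization step concrete, the sketch below pairs samples from two streams already expressed on a shared clock (e.g. via PTP/TSN); the stream names and tolerance value are illustrative assumptions, not a specific TSN implementation.

```python
from bisect import bisect_left

def align_to_reference(reference_ts: list[float],
                       sensor_ts: list[float],
                       tolerance_s: float = 0.005) -> list[tuple[float, float | None]]:
    """Pair each reference timestamp (e.g. camera frames) with the nearest sample from
    another stream (e.g. lidar sweeps or PLC telemetry) within tolerance.
    Assumes both lists are sorted and share a common time base."""
    pairs = []
    for t in reference_ts:
        i = bisect_left(sensor_ts, t)
        candidates = [sensor_ts[j] for j in (i - 1, i) if 0 <= j < len(sensor_ts)]
        best = min(candidates, key=lambda s: abs(s - t), default=None)
        if best is not None and abs(best - t) <= tolerance_s:
            pairs.append((t, best))
        else:
            pairs.append((t, None))   # no in-tolerance match; flag for data-quality review
    return pairs

if __name__ == "__main__":
    camera = [0.000, 0.033, 0.066, 0.100]
    lidar = [0.001, 0.051, 0.099]
    print(align_to_reference(camera, lidar))
```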

MEC and In-Vehicle Split Inference

With DRIVE Thor shouldering real-time perception and planning, adjacent functions like cooperative perception, regional map services, and fleet analytics can shift to multi-access edge computing nodes to cut latency and cost. Operators and OEMs will need placement policies for when to execute on-vehicle, at metro edge, or in centralized AI factories based on bandwidth, latency, and safety constraints.
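 
A placement policy of this kind could be expressed as simply as the sketch below; the workload fields, latency thresholds, and uplink budget are assumptions for illustration, not an operator's or OEM's actual policy.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Hypothetical description of one AI function that needs a home."""
    name: str
    max_latency_ms: float       # end-to-end latency budget
    safety_critical: bool       # functional-safety relevance
    uplink_mbps: float          # sustained data it must move off the vehicle

def place_workload(w: WorkloadProfile,
                   edge_rtt_ms: float = 15.0,
                   cloud_rtt_ms: float = 60.0,
                   uplink_budget_mbps: float = 50.0) -> str:
    """Illustrative placement policy: safety-critical or ultra-low-latency work stays on
    the vehicle; latency-tolerant, uplink-affordable work moves to the edge or cloud."""
    if w.safety_critical or w.max_latency_ms < edge_rtt_ms:
        return "on-vehicle"
    if w.max_latency_ms < cloud_rtt_ms and w.uplink_mbps <= uplink_budget_mbps:
        return "metro-edge"
    return "ai-factory"

if __name__ == "__main__":
    for w in [WorkloadProfile("emergency-braking", 10, True, 0.1),
              WorkloadProfile("cooperative-perception", 40, False, 20),
              WorkloadProfile("fleet-analytics", 5000, False, 2)]:
        print(w.name, "->", place_workload(w))
```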

Private 5G for Software-Defined Factories

As robots and autonomous mobile systems proliferate, private 5G with deterministic QoS, precise positioning, and uplink-heavy profiles complements industrial Ethernet. Tight integration between Omniverse-based twins and network orchestration enables pre-deployment validation of radio layouts, traffic engineering, and failure recovery before changes hit the floor.
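
A rough sketch of the kind of pre-deployment check a twin-driven workflow could run before a radio or layout change reaches the floor; the device list, uplink figures, and headroom margin are illustrative assumptions rather than a network-orchestration API.

```python
def validate_uplink_plan(devices: list[dict], cell_uplink_mbps: float) -> dict:
    """Pre-deployment sanity check: do the planned devices (AMRs, vision stations,
    PLC gateways) fit the uplink budget of one private 5G cell with margin for bursts?"""
    demand = sum(d["uplink_mbps"] for d in devices)
    headroom = cell_uplink_mbps - demand
    return {
        "total_demand_mbps": demand,
        "headroom_mbps": headroom,
        "fits": headroom >= 0.2 * cell_uplink_mbps,   # keep roughly 20% headroom
    }

if __name__ == "__main__":
    plan = [
        {"name": "amr-fleet", "uplink_mbps": 40.0},
        {"name": "vision-qc", "uplink_mbps": 120.0},
        {"name": "plc-telemetry", "uplink_mbps": 5.0},
    ]
    print(validate_uplink_plan(plan, cell_uplink_mbps=220.0))
```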

AI Factory Challenges: Safety, Security, Sustainability

Translating a unified AI vision into safe, compliant, and cost-efficient operations demands disciplined engineering and governance.

Safety, Security, and Compliance by Design

End-to-end traceability from simulation assets and datasets through trained models to deployed binaries is essential to meet functional safety and cybersecurity regulations for road vehicles and OTA updates. A robust SBOM, model provenance tracking, and a continuous monitoring pipeline are mandatory to pass audits and ensure post-deployment assurance.
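
One way to capture that provenance is a manifest that pins every upstream artifact by content hash, as in the sketch below; the field names are illustrative and do not follow any specific SBOM or provenance standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def file_digest(payload: bytes) -> str:
    """Content hash used to pin an artifact (dataset shard, scenario pack, model binary)."""
    return hashlib.sha256(payload).hexdigest()

def build_provenance_record(model_name: str,
                            model_bytes: bytes,
                            dataset_digests: list[str],
                            scenario_pack_digest: str,
                            training_job_id: str) -> str:
    """Minimal provenance manifest tying a deployed model back to the exact data,
    simulation assets, and training run that produced it."""
    record = {
        "model": model_name,
        "model_sha256": file_digest(model_bytes),
        "datasets_sha256": dataset_digests,
        "scenario_pack_sha256": scenario_pack_digest,
        "training_job_id": training_job_id,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    print(build_provenance_record(
        model_name="planner-v7",
        model_bytes=b"\x00fake-model-weights",
        dataset_digests=[file_digest(b"drive-logs-2025w40")],
        scenario_pack_digest=file_digest(b"long-tail-pack-03"),
        training_job_id="job-000000",
    ))
```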

Energy, Cooling, and Sustainability at Scale

Training at the scale of tens of thousands of GPUs shifts the optimization frontier to power and thermal design. Expect emphasis on liquid cooling, workload scheduling to align with renewable availability, and model efficiency techniques to reduce training and inference costs while meeting emissions targets and ESG disclosures.
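
A carbon-aware scheduling heuristic could look like the sketch below, assuming an hourly forecast of renewable share is available; the forecast values and window length are placeholders, not data from any specific grid.

```python
def pick_training_window(renewable_share_by_hour: list[float],
                         job_hours: int) -> tuple[int, float]:
    """Choose the start hour whose contiguous window has the highest average
    renewable share, given an hourly forecast (values in 0..1)."""
    best_start, best_avg = 0, -1.0
    for start in range(0, len(renewable_share_by_hour) - job_hours + 1):
        window = renewable_share_by_hour[start:start + job_hours]
        avg = sum(window) / job_hours
        if avg > best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

if __name__ == "__main__":
    # Illustrative hourly forecast of renewable share for one day (24 values).
    forecast = [0.3] * 6 + [0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.9, 0.85, 0.7, 0.6] + [0.4] * 8
    start, avg = pick_training_window(forecast, job_hours=4)
    print(f"start at hour {start}, average renewable share {avg:.2f}")
```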

Interoperability and Vendor Concentration

While an integrated stack accelerates execution, enterprises should mitigate lock-in with open data formats such as OpenUSD for digital twins, ROS 2 interoperability for robotics where appropriate, and MLOps abstractions that support multi-cloud and hybrid deployment models.
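
To make the OpenUSD point concrete, the sketch below authors a minimal, tool-neutral scene layer with the open-source usd-core Python bindings (pip install usd-core); the prim paths and geometry are placeholders for real factory assets.

```python
from pxr import Usd, UsdGeom, Gf

# Create a plain-text USD layer that any OpenUSD-compatible tool can open.
stage = Usd.Stage.CreateNew("factory_cell.usda")
root = UsdGeom.Xform.Define(stage, "/FactoryCell")            # transform prim grouping the cell

station = UsdGeom.Cube.Define(stage, "/FactoryCell/WeldStation")
station.GetSizeAttr().Set(2.0)                                # stand-in geometry for a station footprint

robot_mount = UsdGeom.Xform.Define(stage, "/FactoryCell/WeldStation/RobotMount")
UsdGeom.XformCommonAPI(robot_mount.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 1.0, 0.0))

stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())                  # inspect the authored .usda content
```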

Next Steps for OEMs, Industrial Firms, and Telecom

Enterprises in mobility, industrials, and telecom can use this blueprint to prioritize investments that deliver near-term ROI while laying groundwork for fleet-scale AI.

Build a Digital Twin and Simulation Roadmap

Start with a high-impact line or process, establish a single source of truth for factory data, and connect it to simulation for virtual commissioning and predictive maintenance. Adopt open scene description standards to future-proof assets and enable ecosystem collaboration.

Modernize Network and Edge Architecture

Assess where private 5G, TSN, and MEC can offload or accelerate AI workloads; instrument networks for precise timing, determinism, and observability; and define SLAs for data flows between vehicles, plants, and cloud AI factories.
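
A simple way to make such SLAs enforceable is to encode them as data and check them against observed telemetry, as in this sketch; the thresholds, flow name, and measurement fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FlowSLA:
    """Illustrative SLA for one data flow between vehicle, plant, and AI factory."""
    name: str
    max_latency_ms: float
    min_throughput_mbps: float
    max_loss_pct: float

def check_sla(sla: FlowSLA, measured: dict) -> list[str]:
    """Compare observed network telemetry against the SLA; return any violations."""
    violations = []
    if measured["latency_ms"] > sla.max_latency_ms:
        violations.append(f"{sla.name}: latency {measured['latency_ms']} ms over budget")
    if measured["throughput_mbps"] < sla.min_throughput_mbps:
        violations.append(f"{sla.name}: throughput below {sla.min_throughput_mbps} Mbps")
    if measured["loss_pct"] > sla.max_loss_pct:
        violations.append(f"{sla.name}: loss {measured['loss_pct']}% over budget")
    return violations

if __name__ == "__main__":
    sla = FlowSLA("plant-to-ai-factory-telemetry", 50.0, 200.0, 0.1)
    print(check_sla(sla, {"latency_ms": 42.0, "throughput_mbps": 180.0, "loss_pct": 0.05}))
```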

Build the AI Operations Flywheel

Implement a governed data layer, synthetic data generation, and a model lifecycle that supports safety cases, OTA updates, shadow mode testing, and rollback. Align security operations with software-defined vehicle and factory requirements from day one.
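
The shadow-mode and rollback gates in that lifecycle could be expressed as simply as the sketch below; the divergence threshold, decision labels, and error-rate tolerance are illustrative assumptions to be set by the safety case.

```python
def shadow_mode_report(prod_decisions: list[str],
                       shadow_decisions: list[str],
                       max_divergence: float = 0.02) -> dict:
    """Shadow-mode evaluation: the candidate model runs alongside production without
    actuating anything; promote it only if its decisions rarely diverge."""
    assert len(prod_decisions) == len(shadow_decisions)
    diverged = sum(1 for p, s in zip(prod_decisions, shadow_decisions) if p != s)
    rate = diverged / len(prod_decisions)
    return {"divergence_rate": rate, "promote": rate <= max_divergence}

def rollback_needed(post_deploy_error_rate: float,
                    baseline_error_rate: float,
                    tolerance: float = 1.2) -> bool:
    """Roll back if the deployed model's error rate exceeds the baseline by more than 20%."""
    return post_deploy_error_rate > baseline_error_rate * tolerance

if __name__ == "__main__":
    prod = ["keep_lane"] * 95 + ["slow_down"] * 5
    shadow = ["keep_lane"] * 94 + ["slow_down"] * 6
    print(shadow_mode_report(prod, shadow))
    print("rollback:", rollback_needed(0.031, 0.024))
```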

Partner and Procurement Strategy

Evaluate NVIDIA's stack against your workloads and compliance posture, identify where to use managed services versus on-prem DGX and RTX PRO deployments, and cultivate partnerships with operators and integrators to operationalize edge and connectivity dependencies.

Invest in Talent and Process

Cross-train manufacturing engineers, robotics specialists, and cloud/edge teams around digital twins, MLOps, and safety engineering to ensure the technology stack translates into measurable productivity and safety gains.
