SoftBank Launches AI-Powered Large Telecom Model for Network Automation

SoftBank has launched the Large Telecom Model (LTM), a domain-specific, AI-powered foundation model built to automate telecom network operations. From base station optimization to RAN performance enhancement, LTM enables real-time decision-making across large-scale mobile networks. Developed with NVIDIA and trained on SoftBank’s operational data, the model supports rapid configuration, predictive insights, and integration with SoftBank’s AITRAS orchestration platform. LTM marks a major step in SoftBank’s AI-first strategy to build autonomous, scalable, and intelligent telecom infrastructure.

SoftBank Launches Telecom-Centric Generative AI Foundation Model

On March 19, 2025, SoftBank Corp. announced the development of a new Large Telecom Model (LTM) — a domain-specific generative AI foundation model built to enhance the design, management, and operation of cellular networks. Leveraging years of expertise and extensive network data, LTM serves as a foundational model for AI innovation across telecom operations.


Trained on a diverse set of datasets — including internal operational data, expert network annotations, and management frameworks — the LTM offers advanced inference capabilities tailored specifically for telecom environments. The model represents a major leap toward AI-native network operations, enabling automation, optimization, and predictive intelligence across the full lifecycle of cellular network management.

AI-Driven Base Station Optimization with LTM

To demonstrate LTM’s practical applications, SoftBank fine-tuned the model to develop AI agents for base station configuration. These agents were tasked with generating optimized configurations for base stations that were not included in the training data. The results were validated by in-house telecom experts and showed over 90% accuracy.

This approach drastically reduces the time needed for configuration tasks, from days to minutes, while maintaining or improving accuracy. Compared to manual or partially automated methods, the LTM-based agents offer:

  • Significant time and cost savings
  • Reduction in human error
  • Scalability across thousands of network nodes
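SoftBank has not disclosed how LTM was actually fine-tuned. Purely as an illustration of the workflow described above, the minimal Python sketch below shows how pairs of site context and expert-approved configurations could be packaged as supervised fine-tuning examples; every field name, value, and file name is a hypothetical placeholder, not SoftBank's schema.

    import json

    # Hypothetical example only: maps site context to an expert-approved config
    # as a prompt/completion pair for supervised fine-tuning.
    site_record = {
        "site_id": "TKY-0042",                      # illustrative identifiers
        "location": {"lat": 35.6812, "lon": 139.7671},
        "neighbor_cells": ["TKY-0038", "TKY-0041"],
        "kpis": {"prb_utilization": 0.78, "avg_rsrp_dbm": -98},
    }
    expert_config = {
        "tilt_deg": 4,
        "tx_power_dbm": 43,
        "handover_offset_db": 2,
    }

    example = {
        "prompt": (
            "Recommend a base station configuration for the following site.\n"
            + json.dumps(site_record, ensure_ascii=False)
        ),
        "completion": json.dumps(expert_config),
    }

    # One JSON object per line, a common format for instruction fine-tuning.
    with open("ltm_finetune_samples.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")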

The fine-tuned LTM models support two primary use cases:

1. New Base Station Deployment

In dense urban areas like Tokyo, the model is used to generate optimal configurations for new base stations. It receives input such as the deployment location, existing nearby infrastructure, and network performance metrics, and outputs a set of recommended configurations tailored to maximize performance and coverage.

2. Existing Base Station Reconfiguration

In scenarios like large events that temporarily increase mobile traffic, the model is used to dynamically adjust base station settings. It recommends real-time configuration changes to handle the surge in demand and maintain quality of service.
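SoftBank has not published the interface of these configuration agents, so the sketch below is only a plausible operator-side wrapper around the two use cases just described. The class, method names, parameters, and the stub inference call are all assumptions for illustration.

    from typing import Callable

    # Hypothetical wrapper; `generate` stands in for whatever call serves the
    # fine-tuned LTM agent in production.
    class BaseStationAgent:
        def __init__(self, generate: Callable[[str], str]):
            self.generate = generate

        def plan_new_site(self, location: str, neighbors: list[str], kpis: dict) -> str:
            # Use case 1: propose a configuration for a not-yet-deployed station.
            prompt = (
                f"Propose an initial configuration for a new base station at {location}. "
                f"Neighboring sites: {', '.join(neighbors)}. Current area KPIs: {kpis}."
            )
            return self.generate(prompt)

        def retune_for_event(self, site_id: str, expected_load_factor: float) -> str:
            # Use case 2: adjust an existing station ahead of a temporary traffic surge.
            prompt = (
                f"Site {site_id} expects {expected_load_factor:.1f}x normal traffic. "
                "Recommend temporary parameter changes to preserve quality of service."
            )
            return self.generate(prompt)

    # Stub backend for illustration; a real deployment would call the model endpoint.
    agent = BaseStationAgent(generate=lambda prompt: "<model response>")
    print(agent.plan_new_site("Shibuya, Tokyo", ["TKY-0038", "TKY-0041"], {"prb_utilization": 0.78}))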

LTM as the Foundation for “AI for RAN” and Future AI Agents

LTM is not just a standalone model—it is also the foundational layer for SoftBank’s broader “AI for RAN” initiative, which focuses on using AI to enhance Radio Access Network (RAN) performance. Through continued fine-tuning, LTM will enable the creation of domain-specific AI agents capable of:

  • Automated network design
  • Adaptive resource allocation
  • Predictive maintenance
  • Performance optimization across the RAN

These AI agents are designed to be modular and context-aware, making them easier to deploy across different scenarios and geographies.

Collaboration with NVIDIA for LTM Performance Gains and Flexibility

To maximize LTM’s performance, SoftBank partnered with NVIDIA. Training and optimization of LTM were carried out using the NVIDIA DGX SuperPOD, a high-performance AI infrastructure used for distributed model training.

In the inferencing phase, SoftBank adopted NVIDIA NIM (NVIDIA Inference Microservices), which yielded:

  • A 5x improvement in Time to First Token (TTFT)
  • A 5x increase in Tokens Per Second (TPS)

NVIDIA NIM also supports flexible deployment—whether on-premises or in the cloud—offering SoftBank the agility needed for enterprise-scale rollouts.
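NIM microservices typically expose an OpenAI-compatible HTTP endpoint, so a deployed LTM agent could plausibly be queried along the following lines. The endpoint URL, model identifier, and prompt are illustrative assumptions, not details from SoftBank's deployment.

    import requests

    # Assumption: a NIM container serving the fine-tuned model is reachable at
    # this URL and exposes the OpenAI-compatible chat completions route that
    # NIM microservices commonly provide.
    NIM_URL = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "softbank/ltm-basestation-agent",   # hypothetical model name
        "messages": [
            {"role": "system", "content": "You generate base station configurations."},
            {"role": "user", "content": "Site TKY-0042 expects 3x traffic tonight; recommend settings."},
        ],
        "temperature": 0.2,
        "max_tokens": 512,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])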

SoftBank also plans to use NVIDIA’s Aerial Omniverse Digital Twin (AODT) to simulate and validate configuration changes before they’re applied, adding another layer of safety and optimization to the process.

“Human AI” Vision Realized Through LTM

The LTM is an embodiment of SoftBank’s “Human AI” concept, as proposed by its Research Institute of Advanced Technology (RIAT). This vision complements “Machine AI” and emphasizes the integration of human expertise with AI to streamline operations and decision-making in mobile networks.

LTM is designed not just as a model but as a knowledge system, reflecting the insights of SoftBank’s top network specialists. By integrating LTM-based models with AITRAS—SoftBank’s AI-RAN orchestrator—the company aims to build a unified AI framework for operating virtualized RAN and AI systems on the same infrastructure.

AITRAS Integration and Future Roadmap

The orchestration layer, known as AITRAS, is central to SoftBank’s strategy for converged AI and RAN operations. LTM-powered models will eventually feed into AITRAS, enabling intelligent orchestration of both virtualized and AI-native workloads on a unified platform.

This integration is a key part of SoftBank’s plan to build autonomous and self-optimizing networks that can:

  • React to real-time events
  • Predict and mitigate performance issues
  • Continuously evolve based on AI-driven insights

As SoftBank continues development of AITRAS, LTM will serve as its cognitive engine, providing operational intelligence across all layers of the network.

Global Collaboration Fuels LTM’s Telecom AI Expansion

The development of LTM was led by the SoftBank RIAT Silicon Valley Office in collaboration with its Japan-based R&D team. Looking ahead, SoftBank plans to strengthen its global partnerships to scale the adoption of LTM across international markets and contribute to the advancement of next-generation telecom networks.

SoftBank also envisions using LTM to enable new services, enhance operational agility, and deliver superior mobile experiences to its customers.

Industry Experts Weigh in on LTM’s Impact in Telecom

Ryuji Wakikawa, Vice President and Head of the Research Institute of Advanced Technology at SoftBank, said: “SoftBank’s AI platform model, the ‘Large Telecom Model’ (LTM), significantly transforms how we design, build, and operate communication networks. By fine-tuning LTM, we can create AI agents for specific tasks, improving wireless device performance and automating network operations. We will continue to drive innovation in AI to deliver higher-quality communication services.”

Chris Penrose, Vice President of Telecoms at NVIDIA, added: “Large Telecom Models are foundational to simplifying and accelerating network operations. SoftBank’s rapid progress in building its LTM using NVIDIA technologies sets a strong example for how AI can redefine telecom operations globally.”

LTM Sets a New Standard for AI-Powered Telecom Infrastructure

With the introduction of its Large Telecom Model, SoftBank has laid the foundation for a next-generation, AI-powered telecom infrastructure. LTM not only enhances operational efficiency but also unlocks new possibilities for intelligent automation, predictive optimization, and scalable AI agent deployment.

As SoftBank continues to refine and expand this model—alongside its work on AITRAS and “Human AI”—it is positioning itself as a leader in the future of AI-native mobile networks.

