
SoftBank Launches AI-Powered Large Telecom Model for Network Automation

SoftBank has launched the Large Telecom Model (LTM), a domain-specific, AI-powered foundation model built to automate telecom network operations. From base station optimization to RAN performance enhancement, LTM enables real-time decision-making across large-scale mobile networks. Developed with NVIDIA and trained on SoftBank’s operational data, the model supports rapid configuration, predictive insights, and integration with SoftBank’s AITRAS orchestration platform. LTM marks a major step in SoftBank’s AI-first strategy to build autonomous, scalable, and intelligent telecom infrastructure.

SoftBank Launches Telecom-Centric Generative AI Foundation Model

On March 19, 2025, SoftBank Corp. announced the development of a new Large Telecom Model (LTM) — a domain-specific generative AI foundation model built to enhance the design, management, and operation of cellular networks. Leveraging years of expertise and extensive network data, LTM serves as a foundational model for AI innovation across telecom operations.


Trained on a diverse set of datasets — including internal operational data, expert network annotations, and management frameworks — the LTM offers advanced inference capabilities tailored specifically for telecom environments. The model represents a major leap toward AI-native network operations, enabling automation, optimization, and predictive intelligence across the full lifecycle of cellular network management.

AI-Driven Base Station Optimization with LTM

To demonstrate LTM’s practical applications, SoftBank fine-tuned the model to develop AI agents for base station configuration. These agents were tasked with generating optimized configurations for base stations that were not included in the training data. The results were validated by in-house telecom experts and showed over 90% accuracy.
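
As a rough illustration of how such a validation step might look, the sketch below scores a generated configuration against an expert-approved reference by counting matching parameters. The parameter names and the scoring rule are assumptions for illustration, not SoftBank's actual evaluation method.

    # Hypothetical sketch: scoring an LTM-generated base station configuration
    # against an expert-validated reference. Parameter names are illustrative.
    from typing import Dict

    def config_accuracy(generated: Dict[str, object], reference: Dict[str, object]) -> float:
        """Fraction of reference parameters the generated config reproduces."""
        if not reference:
            return 0.0
        matches = sum(1 for key, value in reference.items() if generated.get(key) == value)
        return matches / len(reference)

    # Example: compare one generated config with the expert baseline.
    reference = {"tilt_deg": 4, "tx_power_dbm": 43, "azimuth_deg": 120}
    generated = {"tilt_deg": 4, "tx_power_dbm": 43, "azimuth_deg": 130}
    print(f"accuracy: {config_accuracy(generated, reference):.0%}")  # -> 67%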

This approach drastically reduces the time needed for configuration tasks, from days to minutes, with similar or improved accuracy. Compared to manual or partially automated methods, the LTM-based agents offer:

  • Significant time and cost savings
  • Reduction in human error
  • Scalability across thousands of network nodes

The fine-tuned LTM models are capable of supporting two primary use cases:

1. New Base Station Deployment

In dense urban areas like Tokyo, the model is used to generate optimal configurations for new base stations. It receives input such as the deployment location, existing nearby infrastructure, and network performance metrics, and outputs a set of recommended configurations tailored to maximize performance and coverage.
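
A minimal sketch of what such a request/response flow could look like appears below; the field names, sample values, and the recommend_config placeholder are hypothetical and stand in for the fine-tuned LTM call.

    # Hypothetical sketch of the deployment use case: the agent receives the
    # planned site location, nearby infrastructure, and current KPIs, and
    # returns a recommended configuration. All names are illustrative only.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class DeploymentRequest:
        latitude: float
        longitude: float
        neighbor_cells: List[str]   # IDs of existing nearby base stations
        kpis: Dict[str, float]      # e.g. {"prb_utilization": 0.72, "avg_rsrp_dbm": -98}

    def recommend_config(request: DeploymentRequest) -> Dict[str, object]:
        """Placeholder for a call to a fine-tuned LTM agent."""
        # In practice this would serialize the request into a prompt and
        # parse the model's structured response.
        return {"tilt_deg": 6, "tx_power_dbm": 40, "azimuth_deg": 210}

    req = DeploymentRequest(35.6895, 139.6917, ["TKY-0412", "TKY-0418"],
                            {"prb_utilization": 0.72, "avg_rsrp_dbm": -98.0})
    print(recommend_config(req))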

2. Existing Base Station Reconfiguration

In scenarios like large events that temporarily increase mobile traffic, the model is used to dynamically adjust base station settings. It recommends real-time configuration changes to handle the surge in demand and maintain quality of service.
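
One plausible shape for this trigger-and-reconfigure loop is sketched below; the utilization threshold, cell identifiers, and adjust_config placeholder are illustrative assumptions rather than SoftBank's implementation.

    # Hypothetical sketch of event-driven reconfiguration: when measured load
    # on a cell crosses a threshold, ask the model for adjusted settings.
    from typing import Dict

    LOAD_THRESHOLD = 0.85  # illustrative PRB utilization level that triggers a change

    def adjust_config(cell_id: str, load: float) -> Dict[str, object]:
        """Placeholder for a fine-tuned LTM call returning new settings."""
        return {"cell_id": cell_id, "tilt_deg": 2, "tx_power_dbm": 46}

    measurements = {"TKY-0412": 0.91, "TKY-0418": 0.58}  # sample live utilization
    for cell_id, load in measurements.items():
        if load > LOAD_THRESHOLD:
            new_config = adjust_config(cell_id, load)
            print(f"surge on {cell_id} (load={load:.0%}) -> apply {new_config}")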

LTM as the Foundation for “AI for RAN” and Future AI Agents

LTM is not just a standalone model—it is also the foundational layer for SoftBank’s broader “AI for RAN” initiative, which focuses on using AI to enhance Radio Access Network (RAN) performance. Through continued fine-tuning, LTM will enable the creation of domain-specific AI agents capable of:

  • Automated network design
  • Adaptive resource allocation
  • Predictive maintenance
  • Performance optimization across the RAN

These AI agents are designed to be modular and context-aware, making them easier to deploy across different scenarios and geographies.
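
As a sketch of what a modular agent contract could look like, the example below defines a shared interface that each domain-specific agent would implement; the Protocol, method names, and the sample maintenance rule are hypothetical, not SoftBank's design.

    # Hypothetical sketch of a common interface for modular, context-aware
    # RAN agents (design, resource allocation, maintenance, optimization).
    from typing import Dict, Protocol

    class RanAgent(Protocol):
        name: str
        def run(self, context: Dict[str, object]) -> Dict[str, object]:
            """Consume network context, return recommended actions."""
            ...

    class PredictiveMaintenanceAgent:
        name = "predictive_maintenance"
        def run(self, context: Dict[str, object]) -> Dict[str, object]:
            # Illustrative rule: flag cells with repeated alarms for inspection.
            alarms = context.get("alarm_counts", {})
            at_risk = [cell for cell, count in alarms.items() if count >= 3]
            return {"inspect_cells": at_risk}

    agent: RanAgent = PredictiveMaintenanceAgent()
    print(agent.run({"alarm_counts": {"TKY-0412": 4, "TKY-0418": 1}}))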

Collaboration with NVIDIA for LTM Performance Gains and Flexibility

To maximize LTM’s performance, SoftBank partnered with NVIDIA. Training and optimization of LTM were carried out using the NVIDIA DGX SuperPOD, a high-performance AI infrastructure used for distributed model training.

In the inferencing phase, SoftBank adopted NVIDIA NIM (NVIDIA Inference Microservices), which yielded:

  • A 5x improvement in Time to First Token (TTFT)
  • A 5x increase in Tokens Per Second (TPS)

NVIDIA NIM also supports flexible deployment—whether on-premises or in the cloud—offering SoftBank the agility needed for enterprise-scale rollouts.
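
TTFT and TPS can be measured client-side when the model is served behind a streaming, OpenAI-compatible endpoint, which is the interface NIM microservices typically expose. The sketch below is a generic measurement loop; the endpoint URL, model name, and prompt are placeholders, and counting one token per streamed chunk is only an approximation.

    # Rough sketch of measuring TTFT and TPS against a locally hosted,
    # OpenAI-compatible inference endpoint. URL, model name, and prompt
    # are placeholders; local NIM-style deployments often need no API key.
    import time
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    start = time.perf_counter()
    first_token_at = None
    token_count = 0
    stream = client.chat.completions.create(
        model="example-ltm-model",  # placeholder model name
        messages=[{"role": "user", "content": "Suggest a tilt for cell TKY-0412."}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # time to first token
            token_count += 1  # approximate: one token per streamed chunk
    elapsed = time.perf_counter() - start
    print(f"TTFT: {first_token_at - start:.2f}s, TPS: {token_count / elapsed:.1f}")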

SoftBank also plans to use NVIDIA’s Aerial Omniverse Digital Twin (AODT) to simulate and validate configuration changes before they’re applied, adding another layer of safety and optimization to the process.
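
Conceptually, this adds a simulate-then-apply gate in front of every configuration change. The sketch below shows that pattern with a stubbed simulation call and illustrative KPI thresholds; it does not use the actual AODT APIs.

    # Hypothetical simulate-before-apply gate: a proposed configuration is
    # only rolled out if a digital-twin style simulation predicts no KPI
    # regression. simulate() and the thresholds are stand-ins.
    from typing import Dict

    def simulate(config: Dict[str, object]) -> Dict[str, float]:
        """Stand-in for a digital-twin run returning predicted KPIs."""
        return {"predicted_coverage": 0.97, "predicted_drop_rate": 0.004}

    def safe_to_apply(kpis: Dict[str, float]) -> bool:
        return kpis["predicted_coverage"] >= 0.95 and kpis["predicted_drop_rate"] <= 0.01

    proposed = {"cell_id": "TKY-0412", "tilt_deg": 2, "tx_power_dbm": 46}
    kpis = simulate(proposed)
    print("apply" if safe_to_apply(kpis) else "reject", proposed)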

“Human AI” Vision Realized Through LTM

The LTM is an embodiment of SoftBank’s “Human AI” concept, as proposed by its Research Institute of Advanced Technology (RIAT). This vision complements “Machine AI” and emphasizes the integration of human expertise with AI to streamline operations and decision-making in mobile networks.

LTM is designed not just as a model but as a knowledge system, reflecting the insights of SoftBank’s top network specialists. By integrating LTM-based models with AITRAS—SoftBank’s AI-RAN orchestrator—the company aims to build a unified AI framework for operating virtualized RAN and AI systems on the same infrastructure.

AITRAS Integration and Future Roadmap

The orchestration layer, known as AITRAS, is central to SoftBank’s strategy for converged AI and RAN operations. LTM-powered models will eventually feed into AITRAS, enabling intelligent orchestration of both virtualized and AI-native workloads on a unified platform.

This integration is a key part of SoftBank’s plan to build autonomous and self-optimizing networks that can:

  • React to real-time events
  • Predict and mitigate performance issues
  • Continuously evolve based on AI-driven insights

As SoftBank continues development of AITRAS, LTM will serve as its cognitive engine, providing operational intelligence across all layers of the network.

Global Collaboration Fuels LTM’s Telecom AI Expansion

The development of LTM was led by the SoftBank RIAT Silicon Valley Office in collaboration with its Japan-based R&D team. Looking ahead, SoftBank plans to strengthen its global partnerships to scale the adoption of LTM across international markets and contribute to the advancement of next-generation telecom networks.

SoftBank also envisions using LTM to enable new services, enhance operational agility, and deliver superior mobile experiences to its customers.

Industry Experts Weigh In on LTM’s Impact in Telecom

Ryuji Wakikawa, Vice President and Head of the Research Institute of Advanced Technology at SoftBank, said: “SoftBank’s AI platform model, the ‘Large Telecom Model’ (LTM), significantly transforms how we design, build, and operate communication networks. By fine-tuning LTM, we can create AI agents for specific tasks, improving wireless device performance and automating network operations. We will continue to drive innovation in AI to deliver higher-quality communication services.”

Chris Penrose, Vice President of Telecoms at NVIDIA, added: “Large Telecom Models are foundational to simplifying and accelerating network operations. SoftBank’s rapid progress in building its LTM using NVIDIA technologies sets a strong example for how AI can redefine telecom operations globally.”

LTM Sets a New Standard for AI-Powered Telecom Infrastructure

With the introduction of its Large Telecom Model, SoftBank has laid the foundation for a next-generation, AI-powered telecom infrastructure. LTM not only enhances operational efficiency but also unlocks new possibilities for intelligent automation, predictive optimization, and scalable AI agent deployment.

As SoftBank continues to refine and expand this model—alongside its work on AITRAS and “Human AI”—it is positioning itself as a leader in the future of AI-native mobile networks.

