LLM

AI agents are transforming enterprise operations, acting as autonomous digital coworkers that enhance productivity, reduce costs, and support strategic decision-making. With AI agent adoption projected to grow 327% by 2027, enterprises must embrace the technology to remain competitive in an AI-first economy.
Huawei’s new AI chip, the Ascend 910D, has raised concerns about Nvidia’s China business, but analysts say it lacks the performance, ecosystem, and efficiency to compete globally with Nvidia’s H100 GPU. Built on 7nm technology with limited software support, Huawei’s chip may gain local traction but poses no major international threat, at least for now.
There’s immense pressure for companies in every industry to adopt AI, but not everyone has the in-house expertise, tools, or resources to understand where and how to deploy AI responsibly. Bloomberg hopes this taxonomy, when combined with red teaming and guardrail systems, will help the financial industry responsibly develop safe and reliable GenAI systems, comply with evolving regulatory standards and expectations, and strengthen trust among clients.
Confidencial.io will unveil its unified AI data governance platform at RSAC 2025. Designed to secure unstructured data in AI workflows, the system applies object-level Zero Trust encryption and supports compliance with NIST/ISO frameworks. It protects AI pipelines and agentic systems from sensitive data leakage while enabling safe, large-scale innovation.
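As a rough illustration (not Confidencial.io’s actual implementation), object-level protection means each document carries its own encryption and policy, so nothing reaches an AI pipeline in plaintext unless the request satisfies that policy. The sketch below uses the `cryptography` library’s Fernet recipe; the field names, sensitivity labels, and `release_to_pipeline` helper are illustrative assumptions.

```python
# Illustrative sketch only: object-level encryption of an unstructured record
# before it can reach an AI pipeline. Not Confidencial.io's API; the policy
# check and field names are hypothetical.
from cryptography.fernet import Fernet

# A key per object (or per sensitivity class) keeps exposure scoped.
key = Fernet.generate_key()
cipher = Fernet(key)

document = {
    "doc_id": "contract-0042",
    "body": "Customer SSN 123-45-6789; renewal terms attached.",
    "sensitivity": "restricted",
}

# Encrypt the sensitive payload at the object level; metadata stays queryable.
protected = {
    "doc_id": document["doc_id"],
    "sensitivity": document["sensitivity"],
    "body": cipher.encrypt(document["body"].encode()),
}

def release_to_pipeline(obj, caller_clearance):
    """Decrypt only when the caller's clearance satisfies the object's policy."""
    if obj["sensitivity"] == "restricted" and caller_clearance != "restricted":
        return None  # deny by default: Zero Trust grants no implicit access
    return cipher.decrypt(obj["body"]).decode()

print(release_to_pipeline(protected, caller_clearance="public"))      # None
print(release_to_pipeline(protected, caller_clearance="restricted"))  # plaintext
```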
NVIDIA has launched a major U.S. manufacturing expansion for its next-gen AI infrastructure. Blackwell chips will now be produced at TSMC’s Arizona facilities, with AI supercomputers assembled in Texas by Foxconn and Wistron. Backed by partners like Amkor and SPIL, NVIDIA is localizing its AI supply chain from silicon to system integration, laying the foundation for “AI factories” powered by robotics, Omniverse digital twins, and real-time automation. By 2029, NVIDIA aims to manufacture up to $500B in AI infrastructure domestically.
In AI in Telecom: Strategic Themes, Maturity, and the Road Ahead, we explore how AI has shifted from buzzword to backbone for global telecom leaders. From AI-native networks and edge inferencing to domain-specific LLMs and behavioral cybersecurity, this article maps out the strategic pillars, real-world use cases, and monetization models driving the AI-powered telecom era. Featuring CxO insights from Telefónica, KDDI, MTN, Telstra, and Orange, it captures the voice of a sector transforming infrastructure into intelligence.
SK Telecom’s AI assistant, adot, now features Google’s Gemini 2.0 Flash, unlocking real-time Google search, source verification, and support for 12 large language models. The integration boosts user trust, expands adoption from 3.2M to 8M users, and sets a new standard in AI transparency and multi-model flexibility for digital assistants in the telecom sector.
SoftBank has launched the Large Telecom Model (LTM), a domain-specific, AI-powered foundation model built to automate telecom network operations. From base station optimization to RAN performance enhancement, LTM enables real-time decision-making across large-scale mobile networks. Developed with NVIDIA and trained on SoftBank’s operational data, the model supports rapid configuration, predictive insights, and integration with SoftBank’s AITRAS orchestration platform. LTM marks a major step in SoftBank’s AI-first strategy to build autonomous, scalable, and intelligent telecom infrastructure.
AI stirs both excitement and concern: some companies rush to take advantage of it, while many hold back because of the challenges and costs. There may be a better approach: Assistive Intelligence built on small, specialized models rather than large language models. This approach is more affordable, and its emphasis on open-source technology respects privacy and fosters genuine innovation. By focusing on solving real problems, it enables growth and lets businesses and people explore Assistive AI without high costs.
The GSMA Foundry has launched Open-Telco LLM Benchmarks, an open-source AI evaluation framework designed to enhance telecom-specific large language models (LLMs). Supported by Hugging Face, The Linux Foundation, Deutsche Telekom, SK Telecom, and more, this initiative aims to improve AI efficiency, security, and compliance in 5G and 6G networks. Learn how this industry-wide benchmark is shaping the future of telecom AI innovation.
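For readers curious what such an evaluation looks like in practice, here is a minimal sketch of a benchmark harness: exact-match scoring of a model’s answers against telecom reference questions. The three sample questions and the model_answer stub are placeholders for illustration, not items from the actual Open-Telco LLM Benchmarks.

```python
# Illustrative harness only; the questions and model_answer stub are
# placeholders, not the Open-Telco LLM Benchmarks themselves.
TELECOM_QA = [
    {"question": "Which 5G NR band is commonly called C-band?", "answer": "n77"},
    {"question": "What does RAN stand for?", "answer": "radio access network"},
    {"question": "Which 5G core function handles session management?", "answer": "smf"},
]

def model_answer(question: str) -> str:
    """Hypothetical stand-in for a call to the LLM under evaluation."""
    canned = {
        "Which 5G NR band is commonly called C-band?": "n77",
        "What does RAN stand for?": "Radio Access Network",
        "Which 5G core function handles session management?": "AMF",
    }
    return canned.get(question, "")

def exact_match_accuracy(dataset) -> float:
    """Score case-insensitive exact matches, the simplest benchmark metric."""
    hits = sum(
        model_answer(item["question"]).strip().lower() == item["answer"]
        for item in dataset
    )
    return hits / len(dataset)

print(f"exact-match accuracy: {exact_match_accuracy(TELECOM_QA):.2f}")  # 0.67 here
```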
Recent advancements in artificial intelligence training methodologies are challenging traditional assumptions about computational requirements and efficiency. Researchers have discovered an “Occam’s Razor” characteristic in neural network training, where models favor simpler solutions over complex ones, leading to superior generalization. This trend toward efficient training is expected to democratize AI development, reduce environmental impact, and restructure the market, with the focus shifting from hardware to software. The emergence of efficient training patterns and distributed training approaches is likely to have significant implications for companies like NVIDIA, which could face valuation adjustments despite strong fundamentals.
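The “simpler solutions generalize better” claim can be made concrete with a toy experiment (unrelated to the cited research itself): fit the same noisy data with a low-degree and a high-degree polynomial and compare held-out error.

```python
# Toy illustration of the Occam's Razor effect described above, not a
# reproduction of the cited research: a simpler model (degree 2) typically
# generalizes better than an over-parameterized one (degree 12).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.5 * x**2 - 0.5 * x + rng.normal(scale=0.1, size=x.size)  # quadratic + noise

# Hold out every fourth point as a test set.
test_mask = np.zeros(x.size, dtype=bool)
test_mask[::4] = True
x_train, y_train = x[~test_mask], y[~test_mask]
x_test, y_test = x[test_mask], y[test_mask]

for degree in (2, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: held-out MSE = {test_error:.4f}")
# The low-degree fit usually shows the lower held-out error, because the
# high-degree fit spends capacity modeling noise.
```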

LLM News Feed

Whitepaper
Telecom networks are facing unprecedented complexity with 5G, IoT, and cloud services. Traditional service assurance methods are becoming obsolete, making AI-driven, real-time analytics essential for competitive advantage. This independent industry whitepaper explores how DPUs, GPUs, and Generative AI (GenAI) are enabling predictive automation, reducing operational costs, and improving service quality....
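As a deliberately simplified taste of the real-time analytics the whitepaper covers, the sketch below flags anomalous KPI samples with a rolling z-score; the latency values, window, and threshold are invented for illustration and are far cruder than a production assurance system.

```python
# Simplified illustration of analytics-driven service assurance: flag
# anomalous KPI samples with a rolling z-score. Data and threshold are
# made up; real systems use far richer models.
import numpy as np

def rolling_zscore_anomalies(values, window=12, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the trailing window's mean."""
    values = np.asarray(values, dtype=float)
    anomalies = []
    for i in range(window, values.size):
        past = values[i - window:i]
        mean, std = past.mean(), past.std()
        if std > 0 and abs(values[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

latency_ms = [20, 21, 19, 20, 22, 21, 20, 19, 21, 20, 22, 21, 20, 95, 21, 20]
print(rolling_zscore_anomalies(latency_ms))  # [13] -> the 95 ms spike
```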
Whitepaper
Explore the collaboration between Purdue Research Foundation, Purdue University, Ericsson, and Saab at the Aviation Innovation Hub. Discover how private 5G networks, real-time analytics, and sustainable innovations are shaping the "Airport of the Future" for a smarter, safer, and greener aviation industry....
Article & Insights
This article explores the deployment of 5G NR Transparent Non-Terrestrial Networks (NTNs), detailing the architecture's advantages and challenges. It highlights how this "bent-pipe" NTN approach integrates ground-based gNodeB components with NGSO satellite constellations to expand global connectivity. Key challenges like moving beam management, interference mitigation, and latency are discussed, underscoring...
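To put the latency challenge in perspective, a back-of-the-envelope propagation-delay calculation for a bent-pipe hop is sketched below; the 550 km LEO altitude and overhead-satellite geometry are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope propagation delay for a "bent-pipe" NTN hop:
# user terminal -> satellite -> ground gNodeB, satellite directly overhead.
# The 550 km LEO altitude is an illustrative assumption.
SPEED_OF_LIGHT_KM_S = 299_792.458
altitude_km = 550.0

one_way_km = 2 * altitude_km          # up to the satellite and back down
one_way_ms = one_way_km / SPEED_OF_LIGHT_KM_S * 1000
round_trip_ms = 2 * one_way_ms        # request and response both cross the hop

print(f"one-way bent-pipe delay : {one_way_ms:.2f} ms")    # ~3.7 ms
print(f"round-trip bent-pipe RTT: {round_trip_ms:.2f} ms") # ~7.3 ms
# Slant paths at low elevation angles can more than double these numbers,
# which is why latency management is among the key challenges discussed.
```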
