
TELUS AI Platform: Enterprise-Scale Telecom Innovation


Why TELUS’s enterprise AI platform matters now

TELUS has operationalized generative AI at enterprise scale, showing telecom and adjacent industries how to balance speed, safety, and measurable impact.

Telecom AI stakes and timing


Carriers face margin pressure, fragmented IT estates, and rising expectations for digital experiences; TELUS responds with Fuel iX, an internal AI platform that turns these constraints into a system advantage. The platform brokers access to 40+ models via Google Cloud’s Vertex AI Model Garden, including Anthropic’s Claude and Google’s Gemini, so teams can select the best tool per task without vendor lock-in. This multi-model strategy fits a market where model performance, cost, and safety evolve monthly, and it positions TELUS to optimize for latency, accuracy, and governance across use cases from software delivery to customer operations.
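
To make the brokered, multi-model pattern concrete, here is a minimal Python sketch. The task categories, model names, and selection rule are hypothetical illustrations, not TELUS’s actual routing logic or the Vertex AI SDK.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str             # model identifier as exposed by the platform
    strengths: set[str]   # task categories the model is preferred for
    relative_cost: float  # rough cost weight used for tie-breaking

# Hypothetical catalog standing in for a Model Garden-style registry.
CATALOG = [
    ModelProfile("claude-latest", {"tool_use", "multi_step_reasoning"}, 1.0),
    ModelProfile("gemini-flash", {"low_latency", "customer_facing"}, 0.3),
    ModelProfile("gemini-pro", {"long_form_analysis"}, 0.8),
]

def pick_model(task_category: str) -> ModelProfile:
    """Pick the cheapest registered model whose strengths cover the task."""
    candidates = [m for m in CATALOG if task_category in m.strengths]
    if not candidates:
        raise ValueError(f"no model registered for task: {task_category}")
    return min(candidates, key=lambda m: m.relative_cost)

# Example: a customer-facing chat turn routes to the low-latency option.
print(pick_model("customer_facing").name)  # -> gemini-flash
```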

From AI pilots to production-scale outcomes

TELUS moved beyond experiments to enterprise adoption: 57,000 employees actively use gen AI, more than 13,000 custom AI solutions are in production, and 47 large-scale solutions have generated over $90 million in benefits to date. Time savings exceed 500,000 hours, driven by an average of roughly 40 minutes saved per AI interaction. The scale is notable: Fuel iX now processes on the order of 100 billion tokens per month, a signal that the platform is embedded in day-to-day work rather than isolated to innovation teams.
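
As a back-of-envelope check on those figures, 500,000 hours at roughly 40 minutes saved per interaction implies on the order of 750,000 assisted interactions; the short calculation below is illustrative only.

```python
hours_saved = 500_000
minutes_per_interaction = 40

# Implied number of AI-assisted interactions behind the reported time savings.
interactions = hours_saved * 60 / minutes_per_interaction
print(f"{interactions:,.0f} interactions")  # -> 750,000 interactions
```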

Inside Fuel iX: multi-model AI architecture on Google Cloud

Fuel iX abstracts model choice, security, and tooling into one governed platform that meets enterprise requirements for performance and privacy.

Multi-model orchestration and enterprise tool use

Fuel iX centralizes access to models through Vertex AI while preserving optionality; roughly 90% of model traffic runs through Vertex AI, giving TELUS standardized controls and rapid model updates without new procurement cycles. Anthropic’s Claude is used where tool orchestration and multi-step reasoning are critical, with strong performance on parallel tool calling across enterprise systems like documentation search, Jira, GitHub, and web retrieval. Gemini is favored when latency and smooth user experiences are paramount, with Gemini Flash helping external use cases and Deep Research supporting long-form analysis with citations. By integrating AI into existing workflows such as Google Chat, Slack, VS Code, and other developer tools, TELUS lowers adoption friction and increases quality through context from the tools people already use.
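
The parallel tool-calling pattern described above can be sketched as follows; the connector stubs are hypothetical placeholders, not TELUS’s actual integrations.

```python
import asyncio

# Hypothetical connector stubs standing in for enterprise integrations
# (documentation search, Jira, GitHub, web retrieval).
async def search_docs(query: str) -> str:
    return f"docs results for {query!r}"

async def search_jira(query: str) -> str:
    return f"jira issues matching {query!r}"

async def search_github(query: str) -> str:
    return f"github code hits for {query!r}"

async def answer_compound_query(query: str) -> str:
    """Fan a compound query out to several systems at once, then combine."""
    docs, jira, code = await asyncio.gather(
        search_docs(query), search_jira(query), search_github(query)
    )
    # In a real deployment the model would synthesize these tool results;
    # here we simply concatenate them.
    return "\n".join([docs, jira, code])

print(asyncio.run(answer_compound_query("billing outage runbook")))
```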

Enterprise AI governance, privacy, and responsibility

TELUS designed for trust from the start: its Fuel iX-powered customer support tool achieved ISO 31700-1 Privacy by Design certification, a first for a gen AI solution. The company aligns with emerging policy frameworks, contributing to Hiroshima AI Process reporting aligned with the G7 AI Code of Conduct, and enforces enterprise-grade controls via Vertex AI. The governance model spans data access, prompt and output handling, model routing, cost management, and auditability, enabling scale without sacrificing compliance or customer trust.
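
A governed request path of the kind described here might look roughly like the sketch below; the policy checks, audit record, and cost figures are illustrative assumptions, not TELUS’s actual controls.

```python
import time
import uuid

AUDIT_LOG: list[dict] = []  # stand-in for a durable, queryable audit store

def redact(prompt: str) -> str:
    """Placeholder for prompt handling (PII redaction, content safety)."""
    return prompt

def governed_call(user: str, prompt: str, model: str, cost_cap_usd: float,
                  estimated_cost_usd: float) -> str:
    # Cost management: reject calls that would exceed the caller's budget.
    if estimated_cost_usd > cost_cap_usd:
        raise PermissionError("request exceeds cost cap")
    safe_prompt = redact(prompt)
    # Auditability: record who called which model, when, and at what cost.
    AUDIT_LOG.append({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "user": user,
        "model": model,
        "estimated_cost_usd": estimated_cost_usd,
    })
    return f"[{model}] response to: {safe_prompt}"

print(governed_call("analyst@example.com", "summarize Q3 churn drivers",
                    model="gemini-flash", cost_cap_usd=0.50,
                    estimated_cost_usd=0.02))
```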

Engineering velocity and workforce transformation with AI

The platform elevates developer velocity and democratizes solution building across functions while maintaining control and reliability.

Quantified productivity gains with AI

Engineering teams report shipping code about 30% faster by using AI to plan, scaffold, and review work, removing bottlenecks across development and resource allocation. TELUS amplifies the gains by embedding AI into collaboration and ticketing systems, turning conversations into code and documentation into automation. Gemini’s responsiveness supports customer-facing flows, while Claude’s tool use improves answer quality for compound queries that require several systems. The result is fewer handoffs, tighter feedback loops, and higher-quality outcomes across product and operations.

Democratizing software creation with governed AI

Fuel iX reframes who can build: designers, product managers, and business leaders now create automations and lightweight applications without deep programming expertise. Model Context Protocol (MCP) plays a pivotal role, letting models securely connect to previously siloed systems so teams can assemble solutions that once required costly, bespoke integrations. This approach reduces shadow IT risk because solutions are built within a governed platform with standardized connectors, policy enforcement, and observability.
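
A minimal MCP-style connector is sketched below using the open-source MCP Python SDK (pip install mcp); the inventory tool is a hypothetical example of exposing a previously siloed system through a governed connector, and the exact API may differ across SDK versions.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-connector")

@mcp.tool()
def lookup_part(part_number: str) -> dict:
    """Return stock levels for a part from an internal inventory system."""
    # Placeholder data; a real connector would query the system of record.
    return {"part_number": part_number, "on_hand": 42, "warehouse": "YVR-1"}

if __name__ == "__main__":
    mcp.run()  # serves the tool to any MCP-capable model client
```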

Key AI lessons for telecom and large enterprises

TELUS’s approach offers a blueprint for scaling AI beyond pilots while preserving flexibility and trust.

Adopt a brokered, multi-model AI strategy

Abstract model choice behind a governed platform so teams can optimize for task fit, performance, safety, and cost as models evolve. Prioritize tool-capable models for enterprise scenarios, and standardize connectors to core systems to enable parallel tool use and compound task execution.

Embed AI in existing workflows and tools

Integrate assistants into chat, IDEs, ITSM, and CRM rather than creating new destinations, and measure adoption through tokens, active users, and time saved to guide investment. Treat prompts, tools, and policies as productized components that can be reused across teams under consistent governance.
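
Tracking those adoption signals can start with a simple aggregation like the sketch below; the event schema and figures are hypothetical.

```python
# Hypothetical usage events: one record per assistant interaction.
events = [
    {"user": "alice", "tokens": 1_200, "minutes_saved": 35},
    {"user": "bob",   "tokens": 4_800, "minutes_saved": 50},
    {"user": "alice", "tokens":   900, "minutes_saved": 20},
]

total_tokens = sum(e["tokens"] for e in events)
active_users = len({e["user"] for e in events})
hours_saved = sum(e["minutes_saved"] for e in events) / 60

print(f"tokens={total_tokens} active_users={active_users} "
      f"hours_saved={hours_saved:.1f}")
```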

Make trust and privacy a product feature

Build for certification and audit from day one, including privacy-by-design controls, access governance, content safety, and lineage. Align with industry frameworks like ISO 31700-1 and G7 reporting practices to accelerate approvals and customer acceptance.

What’s next for TELUS’s AI platform

TELUS’s roadmap points to broader operational integration, richer tooling, and continued focus on speed-to-value.

Operational expansion and low-latency AI experiences

Expect deeper integration of AI into finance, planning, and program management, paired with low-latency models to support external interactions at scale. Continued use of Gemini for responsive UX and Claude for complex orchestration should help balance speed with depth.

Open, interoperable tooling and AI cost governance

Watch for wider adoption of MCP-style interoperability, expanded enterprise search and retrieval patterns, and policy-driven cost controls to keep unit economics in check as usage scales. For peers, the call to action is clear: stand up a brokered platform, prioritize tool-enabled use cases with measurable KPIs, and treat governance and privacy as accelerators, not constraints for AI in production.
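
Policy-driven cost control can begin with something as simple as per-team token budgets enforced at the platform layer; the team names and limits below are illustrative assumptions.

```python
# Illustrative per-team monthly token budgets enforced at the platform layer.
BUDGETS = {"customer-ops": 2_000_000_000, "engineering": 5_000_000_000}
usage: dict[str, int] = {team: 0 for team in BUDGETS}

def record_usage(team: str, tokens: int) -> None:
    """Accumulate usage and block teams that exceed their monthly budget."""
    if usage[team] + tokens > BUDGETS[team]:
        raise RuntimeError(f"{team} exceeded its monthly token budget")
    usage[team] += tokens

record_usage("engineering", 750_000_000)
print(usage["engineering"])  # -> 750000000
```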

More details on Generative AI and Telecom here

