Why TELUS’s enterprise AI platform matters now
TELUS has operationalized generative AI at enterprise scale, showing telecom and adjacent industries how to balance speed, safety, and measurable impact.
Telecom AI stakes and timing
Carriers face margin pressure, fragmented IT estates, and rising expectations for digital experiences; TELUS responds with Fuel iX, an internal AI platform that turns these constraints into a system-level advantage. The platform brokers access to 40+ models via Google Cloud’s Vertex AI Model Garden, including Anthropic’s Claude and Google’s Gemini, so teams can select the best tool for each task without vendor lock-in. This multi-model strategy fits a market where model performance, cost, and safety evolve monthly, and it positions TELUS to optimize for latency, accuracy, and governance across use cases from software delivery to customer operations.
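As an illustration of what per-task model selection behind a broker can look like, here is a minimal routing sketch in Python. The task profile, routing policy, and model identifiers are hypothetical; TELUS has not published its routing logic, and real invocations would go through the Vertex AI SDK rather than returning a string.

```python
from dataclasses import dataclass

# Hypothetical task profile; a real broker would also weigh cost, safety tier,
# data residency, and current benchmark results, not just these two flags.
@dataclass
class TaskProfile:
    latency_sensitive: bool          # e.g. customer-facing chat
    needs_tool_orchestration: bool   # e.g. compound queries across Jira/GitHub/docs

# Illustrative model identifiers as they might be exposed through Vertex AI Model Garden.
GEMINI_FLASH = "gemini-2.0-flash"
CLAUDE_SONNET = "claude-sonnet-4@vertex"
GEMINI_PRO = "gemini-2.5-pro"

def route_model(task: TaskProfile) -> str:
    """Pick a model ID per task; this policy is a stand-in for a governed routing table."""
    if task.needs_tool_orchestration:
        return CLAUDE_SONNET   # strongest fit for multi-step, tool-heavy work
    if task.latency_sensitive:
        return GEMINI_FLASH    # lowest-latency option for responsive UX
    return GEMINI_PRO          # default for long-form analysis

print(route_model(TaskProfile(latency_sensitive=True, needs_tool_orchestration=False)))
```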
From AI pilots to production-scale outcomes
TELUS moved beyond experiments to enterprise adoption: 57,000 employees actively use gen AI, more than 13,000 custom AI solutions are in production, and 47 large-scale solutions have generated over $90 million in benefits to date. Time savings exceed 500,000 hours, driven by an average of roughly 40 minutes saved per AI interaction. The scale is notable: Fuel iX now processes on the order of 100 billion tokens per month, a signal that the platform is embedded in day-to-day work rather than isolated to innovation teams.
Inside Fuel iX: multi-model AI architecture on Google Cloud
Fuel iX abstracts model choice, security, and tooling into one governed platform that meets enterprise requirements for performance and privacy.
Multi-model orchestration and enterprise tool use
Fuel iX centralizes access to models through Vertex AI while preserving optionality; roughly 90% of model traffic runs through Vertex AI, giving TELUS standardized controls and rapid model updates without new procurement cycles. Anthropic’s Claude is used where tool orchestration and multi-step reasoning are critical, with strong performance on parallel tool calling across enterprise systems such as documentation search, Jira, GitHub, and web retrieval. Gemini is favored when latency and smooth user experiences are paramount, with Gemini Flash handling external-facing use cases and Deep Research supporting long-form analysis with citations. By integrating AI into existing workflows such as Google Chat, Slack, VS Code, and other developer tools, TELUS lowers adoption friction and improves quality through context from the tools people already use.
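To make the parallel tool calling pattern concrete, the sketch below declares two illustrative tools and dispatches whatever tool calls Claude returns concurrently. It assumes the Anthropic Python SDK’s Messages API; the tool names, the local search_docs/search_jira handlers, and the model ID are hypothetical stand-ins for TELUS’s internal connectors (which, on Google Cloud, would be reached via the Vertex-hosted Claude endpoints).

```python
from concurrent.futures import ThreadPoolExecutor
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY; the AnthropicVertex client would be used on Google Cloud

# Illustrative tool declarations; the real connectors (Jira, GitHub, docs search) are not public.
TOOLS = [
    {
        "name": "search_docs",
        "description": "Search internal documentation for a query.",
        "input_schema": {"type": "object", "properties": {"query": {"type": "string"}},
                         "required": ["query"]},
    },
    {
        "name": "search_jira",
        "description": "Search Jira tickets for a query.",
        "input_schema": {"type": "object", "properties": {"query": {"type": "string"}},
                         "required": ["query"]},
    },
]

# Hypothetical local implementations behind the tool names.
def search_docs(query: str) -> str: return f"docs results for {query!r}"
def search_jira(query: str) -> str: return f"jira results for {query!r}"
HANDLERS = {"search_docs": search_docs, "search_jira": search_jira}

def answer(question: str) -> list[str]:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # illustrative model ID
        max_tokens=1024,
        tools=TOOLS,
        messages=[{"role": "user", "content": question}],
    )
    # Claude may emit several tool_use blocks in a single turn; run them in parallel.
    calls = [block for block in response.content if block.type == "tool_use"]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(HANDLERS[c.name], **c.input) for c in calls]
        return [f.result() for f in futures]
```

The tool results would normally be returned to the model in a follow-up message so it can compose a final answer; the sketch stops at the parallel dispatch step, which is the part the paragraph above describes.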
Enterprise AI governance, privacy, and responsibility
TELUS designed for trust from the start: its Fuel iX-powered customer support tool achieved ISO 31700-1 Privacy by Design certification, a first for a gen AI solution. The company aligns with emerging policy frameworks, contributing to Hiroshima AI Process reporting aligned with the G7 AI Code of Conduct, and enforces enterprise-grade controls via Vertex AI. The governance model spans data access, prompt and output handling, model routing, cost management, and auditability, enabling scale without sacrificing compliance or customer trust.
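One way to picture such a governance layer is a request gate that every prompt passes through before it reaches a model. The sketch below is illustrative only; the scope names, redaction step, and audit record are assumptions about the general pattern, not a description of Fuel iX internals.

```python
import json, re, time, uuid

# Hypothetical data-access scopes per use case; real policy would live in a governed registry.
ALLOWED_SCOPES = {"customer_support": {"kb_articles"}, "engineering": {"jira", "github", "kb_articles"}}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Minimal stand-in for content safety / PII handling before prompts leave the platform."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def governed_request(use_case: str, requested_scopes: set[str], prompt: str, model_id: str) -> dict:
    allowed = ALLOWED_SCOPES.get(use_case, set())
    if not requested_scopes <= allowed:
        raise PermissionError(f"{use_case} may not access {requested_scopes - allowed}")
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "use_case": use_case,
        "model_id": model_id,      # the routing decision is part of the audit trail
        "prompt": redact(prompt),  # what actually gets sent downstream
    }
    print(json.dumps(record))      # stand-in for an append-only audit log
    return record

governed_request("customer_support", {"kb_articles"}, "Reset plan for jane@example.com", "gemini-2.0-flash")
```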
Engineering velocity and workforce transformation with AI
The platform elevates developer velocity and democratizes solution building across functions while maintaining control and reliability.
Quantified productivity gains with AI
Engineering teams report shipping code about 30% faster by using AI to plan, scaffold, and review work, removing bottlenecks across development and resource allocation. TELUS amplifies the gains by embedding AI into collaboration and ticketing systems, turning conversations into code and documentation into automation. Gemini’s responsiveness supports customer-facing flows, while Claude’s tool use improves answer quality for compound queries that span several systems. The result is fewer handoffs, tighter feedback loops, and higher-quality outcomes across product and operations.
Democratizing software creation with governed AI
Fuel iX reframes who can build: designers, product managers, and business leaders now create automations and lightweight applications without deep programming expertise. Model Context Protocol (MCP) plays a pivotal role, letting models securely connect to previously siloed systems so teams can assemble solutions that once required costly, bespoke integrations. This approach reduces shadow IT risk because solutions are built within a governed platform with standardized connectors, policy enforcement, and observability.
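For readers unfamiliar with MCP, the sketch below shows roughly what a governed connector can look like using the official Python MCP SDK’s FastMCP helper. The server name and the stock-lookup tool are hypothetical; they stand in for whatever siloed internal system a team needs to expose through a standardized connector.

```python
# pip install mcp -- a minimal MCP server sketch, assuming the official Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-connector")  # hypothetical connector name

# Hypothetical stand-in for a previously siloed internal system.
_INVENTORY = {"SIM-5G-001": 1240, "ROUTER-AX-210": 87}

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return current stock for a SKU so assistants can answer supply questions."""
    qty = _INVENTORY.get(sku)
    return f"{sku}: {qty} units" if qty is not None else f"{sku}: unknown SKU"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; any MCP-capable client can now call check_stock
```

Because every connector built this way sits behind the platform’s policy enforcement and observability, a designer or product manager composing a solution from such tools stays inside governance rather than drifting into shadow IT.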
Key AI lessons for telecom and large enterprises
TELUS’s approach offers a blueprint for scaling AI beyond pilots while preserving flexibility and trust.
Adopt a brokered, multi-model AI strategy
Abstract model choice behind a governed platform so teams can optimize for task fit, performance, safety, and cost as models evolve. Prioritize tool-capable models for enterprise scenarios, and standardize connectors to core systems to enable parallel tool use and compound task execution.
Embed AI in existing workflows and tools
Integrate assistants into chat, IDEs, ITSM, and CRM rather than creating new destinations, and measure adoption through tokens, active users, and time saved to guide investment. Treat prompts, tools, and policies as productized components that can be reused across teams under consistent governance.
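As a rough illustration of measuring adoption the way this lesson describes, the sketch below aggregates tokens, active users, and estimated time saved from interaction logs. The log schema and team names are assumptions; the 40-minutes-per-interaction estimate reuses the figure cited earlier in this article.

```python
from collections import defaultdict

# Illustrative interaction log entries; a real platform would pull these from its telemetry store.
interactions = [
    {"user": "alice", "team": "engineering", "tokens": 3200},
    {"user": "bob",   "team": "support",     "tokens": 1100},
    {"user": "alice", "team": "engineering", "tokens": 2700},
]

AVG_MINUTES_SAVED = 40  # per-interaction estimate cited earlier in the article

def adoption_report(logs):
    tokens_by_team = defaultdict(int)
    users = set()
    for entry in logs:
        tokens_by_team[entry["team"]] += entry["tokens"]
        users.add(entry["user"])
    return {
        "active_users": len(users),
        "total_tokens": sum(tokens_by_team.values()),
        "tokens_by_team": dict(tokens_by_team),
        "estimated_hours_saved": len(logs) * AVG_MINUTES_SAVED / 60,
    }

print(adoption_report(interactions))
```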
Make trust and privacy a product feature
Build for certification and audit from day one, including privacy-by-design controls, access governance, content safety, and lineage. Align with industry frameworks like ISO 31700-1 and G7 reporting practices to accelerate approvals and customer acceptance.
What’s next for TELUS’s AI platform
TELUS’s roadmap points to broader operational integration, richer tooling, and continued focus on speed-to-value.
Operational expansion and low-latency AI experiences
Expect deeper integration of AI into finance, planning, and program management, paired with low-latency models to support external interactions at scale. Continued use of Gemini for responsive UX and Claude for complex orchestration should help balance speed with depth.
Open, interoperable tooling and AI cost governance
Watch for wider adoption of MCP-style interoperability, expanded enterprise search and retrieval patterns, and policy-driven cost controls to keep unit economics in check as usage scales. For peers, the call to action is clear: stand up a brokered platform, prioritize tool-enabled use cases with measurable KPIs, and treat governance and privacy as accelerators rather than constraints for AI in production.
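Policy-driven cost control can be as simple as a per-team token budget enforced at the broker before a call is made. The budgets, prices, and team names below are hypothetical and exist only to show the shape of such a control.

```python
# Hypothetical per-team monthly token budgets and per-model prices (USD per 1M tokens).
BUDGETS = {"engineering": 5_000_000_000, "support": 2_000_000_000}
PRICE_PER_M_TOKENS = {"gemini-2.0-flash": 0.15, "claude-sonnet-4@vertex": 3.00}  # illustrative

usage = {"engineering": 0, "support": 0}  # tokens consumed so far this month

def charge(team: str, model_id: str, tokens: int) -> float:
    """Record usage, enforce the team budget, and return the estimated cost of the call."""
    if usage[team] + tokens > BUDGETS[team]:
        raise RuntimeError(f"{team} would exceed its monthly token budget")
    usage[team] += tokens
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS[model_id]

print(f"${charge('support', 'gemini-2.0-flash', 1_200_000):.4f}")
```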