Ofcom investigations into BT/EE and Three UK voice outages
UK regulator Ofcom has opened formal investigations into BT (including EE) and Three after nationwide voice service outages this summer impaired access to other networks and to emergency services.
Timeline and scope of BT and Three UK outages
BT notified Ofcom of a software-related failure that disrupted interconnect voice services to and from the EE mobile network on 24–25 July 2025. During the incident, many BT and EE customers could not complete mobile calls to other networks or reach emergency services. Three separately reported a UK-wide disruption to voice services on 25 June 2025, which also affected some customers' ability to contact emergency services. Both incidents met regulatory thresholds that require operators to report material outages to Ofcom.
Impact on 999 emergency call access
Any impairment to 999 connectivity is a critical public safety issue and a high-stakes compliance concern. The July event followed a separate Ofcom investigation announced in June into BT's 999 service reliability after earlier technical faults. The latest probes will test whether operators took appropriate measures to prevent, detect, and mitigate failures that affect access to emergency organizations, as required by UK telecom rules.
Why it matters for UK telecom resilience
The outages highlight growing operational risks as mobile cores, interconnects, and emergency call handling become more software-driven and interdependent.
UK telecom security obligations and compliance
Under Ofcom's General Conditions and the UK Telecommunications (Security) Act regime, providers must take appropriate and proportionate steps to manage risks to the availability, performance, and functionality of their networks and services. That includes identifying vulnerabilities, preventing adverse effects, and, when incidents occur, restoring service and mitigating impact, especially for emergency calls. Ofcom will assess whether BT and Three complied with these requirements across prevention, detection, response, and recovery.
Software-driven network change risk in cloud-native cores
Both incidents underscore the fragility of change in cloudified, software-centric networks. Interconnect and emergency call routing now traverse complex stacks: IMS cores, policy control, session border controllers, numbering databases, and third-party software. Misconfigurations, vendor updates, or traffic mis-estimation can ripple across domains. As operators sunset 3G and consolidate voice on VoLTE/IMS, the blast radius of core or interconnect software issues increases unless change management, canarying, and rollback practices mature at the pace of software release cycles.
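To make the risk concrete, here is a minimal sketch, in Python, of the kind of staged rollout gate that limits blast radius; the stage sizes, thresholds, metric names, and injected callables are illustrative assumptions, not any operator's real tooling. Traffic shifts to a new core or interconnect software version only while voice and emergency KPIs stay healthy, and the change backs out automatically otherwise.

```python
import time

# Hypothetical canary stages: fraction of interconnect traffic carried by the
# new software version before full rollout.
STAGES = [0.01, 0.05, 0.25, 1.00]

# Illustrative health thresholds tied to voice and 999 outcomes.
MIN_CALL_COMPLETION = 0.995    # completed call rate across interconnects
MIN_999_SUCCESS = 0.999        # emergency call setup success rate
SOAK_SECONDS = 900             # observation window per stage


def healthy(metrics: dict) -> bool:
    """True when voice and emergency KPIs are within the thresholds above."""
    return (metrics["call_completion"] >= MIN_CALL_COMPLETION
            and metrics["emergency_setup_success"] >= MIN_999_SUCCESS)


def canary_rollout(set_traffic_share, read_metrics, rollback) -> bool:
    """Shift traffic to the new version stage by stage, backing out on regression.

    The three arguments are injected callables, e.g. thin wrappers around an
    orchestrator API and a metrics store.
    """
    for share in STAGES:
        set_traffic_share(share)
        time.sleep(SOAK_SECONDS)       # soak at this stage before widening
        if not healthy(read_metrics()):
            rollback()                 # automated back-out, no manual gate
            return False
    return True
```

The essential property is that the back-out path is exercised by the automation itself rather than left to a manual decision in the middle of an incident.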
What Ofcom will examine in the probes
The regulator will focus on root cause, controls, customer harm, and the robustness of mitigation and recovery.
Root cause, interconnect failures, and emergency call routing
Key questions include how software or configuration changes triggered the failures; whether safeguards such as pre-deployment testing, staged rollouts, and automated rollback were in place; and how interconnect signaling and emergency call routing pathways failed. Ofcom will want evidence that redundancy, geo-diversity, and path diversity existed not just in hardware but across logical and software layers.
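One way to make diversity across logical and software layers auditable is to check route inventories programmatically. The sketch below uses a hypothetical route table and field names purely for illustration; it flags destinations whose paths share a single site or a single software stack, the kind of hidden common-mode dependency such a review is meant to surface.

```python
from collections import defaultdict

# Illustrative route inventory: (destination, trunk, site, software stack).
# A real inventory would come from network management and CMDB systems.
ROUTES = [
    ("999",  "trunk-a", "site-north", "sbc-v5.2"),
    ("999",  "trunk-b", "site-south", "sbc-v5.1"),
    ("PSTN", "trunk-a", "site-north", "sbc-v5.2"),
    ("PSTN", "trunk-c", "site-north", "sbc-v5.2"),
]


def diversity_report(routes):
    """Flag destinations whose paths share one site or one software version."""
    by_dest = defaultdict(list)
    for dest, trunk, site, stack in routes:
        by_dest[dest].append((trunk, site, stack))

    findings = {}
    for dest, paths in by_dest.items():
        sites = {site for _, site, _ in paths}
        stacks = {stack for _, _, stack in paths}
        findings[dest] = {
            "paths": len(paths),
            "geo_diverse": len(sites) > 1,        # physical/site diversity
            "software_diverse": len(stacks) > 1,  # logical/software-layer diversity
        }
    return findings


if __name__ == "__main__":
    for dest, result in diversity_report(ROUTES).items():
        print(dest, result)
```

In this toy data the PSTN destination has two trunks but no site or software diversity, which is precisely the failure mode that redundant hardware alone does not cover.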
Incident detection, transparency, and customer impact
Investigators will assess how quickly incidents were detected, how promptly authorities and customers were informed, and whether contingency communications were clear about emergency access workarounds. They will also examine the scale and duration of impact, including any disproportionate harm to vulnerable users or critical national infrastructure customers.
Remediation plans and future-proofing resilience
Beyond immediate fixes, Ofcom often looks for durable improvements: strengthened vendor governance, enhanced monitoring and observability, fault isolation between interconnect and emergency services paths, and regular resilience testing. Operators may be required to evidence scenario-based drills that simulate outages with emergency call traffic loads.
Immediate actions for operators and vendors
Regardless of the investigation outcomes, the incidents set a near-term checklist for UK mobile providers and their suppliers.
Strengthen change management and vendor governance
Adopt progressive delivery for network software: canary releases, blue/green deployments, and automated rollback tied to real-time voice and 999-specific health metrics. Enforce rigorous third-party change controls, including pre-change risk scoring, joint kill-switch protocols, and contractually mandated post-incident reviews. Maintain version pinning and back-out plans across all interconnect-facing functions.
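As one illustration of pre-change risk scoring, the sketch below gates a change on a crude additive score and refuses anything without a tested back-out plan. The fields, weights, and threshold are hypothetical; a real policy would be negotiated with vendors and tuned to the operator's change history.

```python
from dataclasses import dataclass


@dataclass
class ChangeRequest:
    # Field names are illustrative; map them to whatever the change system records.
    touches_interconnect: bool
    touches_emergency_routing: bool
    vendor_supplied: bool
    has_tested_backout: bool
    canary_planned: bool
    in_peak_hours: bool


def risk_score(cr: ChangeRequest) -> int:
    """Crude additive risk score for a network software or configuration change."""
    score = 0
    score += 3 if cr.touches_emergency_routing else 0
    score += 2 if cr.touches_interconnect else 0
    score += 1 if cr.vendor_supplied else 0
    score += 1 if cr.in_peak_hours else 0
    score -= 2 if cr.canary_planned else 0
    return score


def approve(cr: ChangeRequest, threshold: int = 3) -> bool:
    """Reject changes above the risk threshold or lacking a tested back-out plan."""
    if not cr.has_tested_backout:
        return False
    return risk_score(cr) <= threshold
```

For example, a vendor-supplied change to emergency routing with no canary plan scores 4 under these weights and is rejected even outside peak hours.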
Build 999 resilience and conduct regular drills
Segment emergency call handling from general interconnect wherever feasible, with independent scaling and failover. Validate end-to-end 999 call flows through synthetic probes from multiple access types (VoLTE, Wi-Fi calling, and circuit fallback where available). Run quarterly cross-operator resilience exercises with emergency services stakeholders, capturing mean time to detect and restore as compliance KPIs.
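A minimal sketch of aggregating synthetic 999 probe results per access type follows; the probe records and field names are invented for illustration, but per-path setup success and post-dial delay are the kinds of figures that feed detect-and-restore KPIs.

```python
from statistics import mean

# Hypothetical probe log: each entry is one synthetic emergency test call
# routed to a test answering point, never to the live 999 service.
PROBE_RESULTS = [
    {"access": "volte",       "success": True,  "setup_ms": 1400},
    {"access": "volte",       "success": True,  "setup_ms": 1650},
    {"access": "wifi_call",   "success": True,  "setup_ms": 1900},
    {"access": "cs_fallback", "success": False, "setup_ms": None},
]


def summarise(results):
    """Per-access-type setup success rate and mean post-dial delay (ms)."""
    by_access = {}
    for r in results:
        by_access.setdefault(r["access"], []).append(r)

    summary = {}
    for access, rows in by_access.items():
        ok = [r for r in rows if r["success"]]
        summary[access] = {
            "success_rate": len(ok) / len(rows),
            "mean_setup_ms": mean(r["setup_ms"] for r in ok) if ok else None,
        }
    return summary


print(summarise(PROBE_RESULTS))
```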
Interconnect resilience and traffic engineering best practices
Stress-test SBCs, ENUM/number portability integrations, and routing policies against abnormal signaling spikes and malformed sessions. Implement admission control and rate limiting that preserve emergency call priority under duress. Ensure geo-redundant interconnect trunks and diverse SS7/SIP paths with continuous path validation. Tie observability to business outcomes (completed call rate, post-dial delay, and emergency setup success), not just node-level health.
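To illustrate the admission-control point, here is a minimal token-bucket sketch in which emergency calls bypass the limiter while ordinary calls are shed under overload; the rates and the single global bucket are simplifying assumptions, since real SBC policies are per-trunk, per-origin, and signalling-aware.

```python
import time


class AdmissionController:
    """Token-bucket admission control that never throttles emergency calls."""

    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s       # sustained call attempts admitted per second
        self.capacity = burst        # short-term burst allowance
        self.tokens = burst
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def admit(self, is_emergency: bool) -> bool:
        if is_emergency:
            return True              # emergency calls bypass the limiter entirely
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                 # shed ordinary traffic under overload
```

Instantiated per trunk group (for example, AdmissionController(rate_per_s=200, burst=50)), this preserves emergency setup capacity even when ordinary signalling spikes.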
Implications for enterprises and critical services
Enterprises that rely on mobile voice should revisit business continuity assumptions in light of core and interconnect failure modes.
Multi-carrier voice resilience strategies
Architect carrier diversity for mission-critical users and sites: dual-SIM devices across different networks, eSIM fleets for rapid switching, and SIP trunk or UCaaS fallback for emergency calling from premises. For field workforces, establish clear procedures to use fixed lines or alternative communication channels during mobile voice incidents.
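A hypothetical failover routine for a mission-critical handset fleet or site might look like the sketch below; the path names and the health feed are assumptions, and the point is that the fallback order is decided in advance rather than improvised mid-outage.

```python
# Ordered fallback chain for mission-critical voice; labels are illustrative.
FALLBACK_CHAIN = ["primary_mobile", "secondary_mobile_esim", "fixed_sip_trunk"]


def choose_voice_path(health: dict) -> str:
    """Pick the first healthy path in the fallback chain.

    `health` maps path name -> bool and might be fed by synthetic test calls
    or carrier status feeds; raising when nothing works ensures someone is paged.
    """
    for path in FALLBACK_CHAIN:
        if health.get(path, False):
            return path
    raise RuntimeError("no voice path available; invoke the manual contingency plan")


# Example: the primary carrier is degraded, so calls route via the eSIM profile.
print(choose_voice_path({"primary_mobile": False,
                         "secondary_mobile_esim": True,
                         "fixed_sip_trunk": True}))
```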
Incident communications and procurement requirements
Ask providers for their emergency call resilience posture, change management controls, and results of recent failover tests. Require rapid incident notification SLAs, explicit 999 continuity plans, and post-incident reporting commitments. Ensure employee communications templates are ready for voice service degradations, including guidance on alternative contact methods.
What to watch next for UK operators
The investigations will shape how UK operators evidence network resilience and emergency call reliability through 2025.
Enforcement outcomes and industry impact
Potential outcomes range from no further action to mandated undertakings, with heightened scrutiny of emergency call routing, interconnect diversity, and software change controls. Even without penalties, operators should expect deeper audits and tighter expectations around reporting and resilience testing.
Standards alignment and best-practice convergence
Expect greater emphasis on aligning operational practices with 3GPP emergency services specifications and industry resilience guidance from bodies such as ETSI and the GSMA, particularly for cloud-native cores. The direction of travel is clear: provable, end-to-end resilience for emergency calling and interconnect, underpinned by disciplined software operations and transparent incident management.





