
October’s job-cut announcements surged, with AI and cost control reshaping staffing plans across technology and adjacent sectors. Planned layoffs spiked to roughly 153,000 in October, up more than 180% from September and about 175% from a year ago, according to the latest Challenger job-cuts tally. Year-to-date announcements for 2025 have crossed 1.09 million, the highest January-through-October total since the pandemic shock of 2020 and above comparable 2009 levels. The cuts reflect a pivot from growth-at-any-cost to profitability, with AI rebalancing roles and budgets across the stack. Across reasons given, cost reduction led by a wide margin, and AI adoption was the second-largest driver, underscoring both macro pressure and structural transformation.
LG Uplus is working with AWS on agentic AI that automates installation of cloud-native network software, with early claims of up to 80% faster turn-ups versus manual methods. The system uses Amazon Bedrock alongside AWS’s Strands-Agents SDK to orchestrate multiple cooperating AI agents, so complex network software stacks can be installed without human intervention. These agents are pre-trained on network design and implementation documents so they can execute the full workflow: provisioning cloud infrastructure, collecting device and network parameters, generating configurations, performing installation, and troubleshooting.
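A minimal sketch of that kind of staged, self-remediating workflow is shown below. This is illustrative only, not LG Uplus’s or AWS’s actual implementation: the stage functions, `InstallContext`, and the troubleshoot-then-retry logic are all hypothetical stand-ins for what the cooperating agents would do.

```python
# Illustrative orchestration sketch (hypothetical names, not the real system):
# each "agent" handles one stage of the install workflow; on failure, a
# troubleshooting step runs and the stage is retried once.
from dataclasses import dataclass, field

@dataclass
class InstallContext:
    site: str
    params: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

def provision_infra(ctx):
    ctx.params["vpc"] = "vpc-01"        # stand-in for cloud provisioning
    ctx.log.append("provisioned")

def collect_params(ctx):
    ctx.params["peers"] = 4             # stand-in for device/network discovery
    ctx.log.append("collected")

def generate_config(ctx):
    if "peers" not in ctx.params:
        raise RuntimeError("missing network parameters")
    ctx.log.append("configured")

def install_stack(ctx):
    ctx.log.append("installed")

def troubleshoot(ctx, error):
    # A real agent would reason over design documents via an LLM;
    # here we only record the remediation attempt.
    ctx.log.append(f"troubleshoot: {error}")

def run_workflow(ctx, stages):
    for stage in stages:
        try:
            stage(ctx)
        except Exception as exc:
            troubleshoot(ctx, exc)
            stage(ctx)                  # retry once after remediation
    return ctx

ctx = run_workflow(InstallContext("ulsan-01"),
                   [provision_infra, collect_params, generate_config, install_stack])
print(ctx.log)
```

The point of the structure is that each stage is independently retryable and leaves an audit trail, which is what lets an agentic system replace a manual runbook.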
OpenAI has signed a multi‑year, $38 billion capacity agreement with Amazon Web Services (AWS) to run and scale its core AI workloads on NVIDIA‑based infrastructure, signaling a decisive shift toward a multi‑cloud strategy and intensifying the hyperscaler battle for frontier AI. The agreement makes OpenAI a direct AWS customer for large‑scale compute, starting immediately on existing AWS data centers and expanding as new infrastructure comes online. AWS and OpenAI target the bulk of new capacity to be deployed by the end of 2026, with headroom to extend into 2027 and beyond.
At SK AI Summit 2025, CEO Jung Jaihun outlined plans to expand the Ulsan artificial intelligence data center (AIDC) to 1GW-class capacity, stand up a nationwide trio of hubs (Gasan in the Seoul metro, Ulsan in the south, and a new southwest site), and take the model into Southeast Asia starting with Vietnam. The operator is also deepening technology collaborations with Amazon Web Services (AWS) on Edge AI and with NVIDIA on AI-RAN and a Manufacturing AI Cloud; it intends to buy more than 2,000 NVIDIA RTX PRO 6000 Blackwell GPUs and scale Korea’s largest GPU cluster, Haein, as core compute for industrial AI workloads.
NEC is moving to scale its cloud and SaaS business support capabilities with a $2.9 billion acquisition of CSG Systems International, positioning Netcracker at the center of the combined telecom monetization play. CSG brings a sizable recurring-revenue portfolio in digital BSS, billing, charging, and customer engagement used by communications, cable, media, and digital service providers, complementing Netcracker’s OSS/BSS, orchestration, and service automation strengths. The all-cash deal values CSG at approximately $2.9 billion on an enterprise value basis and has unanimous board approval, with closing targeted for 2026 pending CSG shareholder approval and customary antitrust and other regulatory reviews.
MTN has launched StarEdge Horizon, a Layer 2 service over SpaceX’s Starlink designed to move enterprise traffic on a private path to MTN points of presence (PoPs), bypassing the public internet and reducing latency, jitter, and operational complexity. The service extends a private Layer 2 domain from remote sites over Starlink into MTN regional PoPs, where enterprises can centralize internet egress, security, and policy. QoS and segmentation protect prioritized traffic, while multi-link redundancy reduces site-level downtime risks. By bringing a private Layer 2 architecture to Starlink, MTN’s StarEdge Horizon turns LEO from best-effort internet into a controllable enterprise transport.
Qualcomm is moving from mobile NPUs into rack-scale AI infrastructure, positioning its AI200 (2026) and AI250 (2027) to challenge Nvidia/AMD on the economics of large-scale inference. The company is translating its Hexagon neural processing unit heritage—refined across phones and PCs—into data center accelerators tuned for inferencing, not training. AI200 and AI250 will ship in liquid-cooled, rack-scale configurations designed to operate as a single logical system. Because large-scale inference is often memory-bound rather than compute-bound, Qualcomm is leaning into that constraint with a redesigned memory subsystem and high-capacity cards supporting up to 768 GB of onboard memory—positioning that as a differentiator versus current GPU offerings.
A new partnership between Palantir and Lumen Technologies signals a shift from internal AI pilots to packaged enterprise services delivered over a telecom-grade edge and network footprint. Palantir will provide its Foundry and Artificial Intelligence Platform (AIP) as the data and decisioning layer for Lumen’s enterprise AI offerings, which Lumen plans to deliver on top of its edge computing nodes, broadband infrastructure, and managed digital services. The companies position this as a multi-year, strategic collaboration focused on operational AI use cases, not just experimentation. While exact terms were not disclosed, multiple reports indicate Lumen’s total spend could exceed $200 million over several years.
Netflix is expanding generative AI across recommendations, ads, and production workflows, signaling how big media will operationalize AI at scale without replacing human creativity. The company highlighted recent uses including generative AI in final footage, de-aging in a new film, and pre-visualization for set and wardrobe design. This is not about automating storytelling; it is about compressing timelines, lowering iteration costs, and enabling more variants for testing and localization. Expect AI to touch asset creation, trailer and thumbnail generation, dubbing and subtitling, quality control, and promotional creative — all tied to measurable uplift in engagement and ad yield.
AWS experienced a major outage centered on its US-EAST-1 region in Northern Virginia, triggering cascading failures across dozens of cloud services and dependent applications worldwide. The incident began in the early hours of Monday and was initially mitigated within a few hours, though residual errors and recovery backlogs persisted through the morning in US-EAST-1. Engineering updates point to a DNS resolution problem affecting a key database endpoint (DynamoDB) alongside internal network and gateway errors in EC2, which then propagated across dependent services such as SQS and Amazon Connect. When a foundational component like DNS or an internal networking fabric falters, service discovery and API calls fail in bulk.
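The bulk-failure mechanism can be sketched in a few lines. This is a conceptual model, not AWS internals: the endpoint name, resolver, and service names are illustrative, but it shows why one shared DNS dependency fails many services at once.

```python
# Conceptual sketch of the cascade: many services discover the same database
# endpoint through one shared resolver, so a single DNS fault fails their
# calls in bulk. All names are illustrative, not actual AWS components.
def make_resolver(healthy):
    table = {"dynamodb.us-east-1": "10.0.0.5"}   # hypothetical endpoint record
    def resolve(endpoint):
        if not healthy:
            raise LookupError(f"DNS resolution failed for {endpoint}")
        return table[endpoint]
    return resolve

def call_service(name, resolve):
    try:
        resolve("dynamodb.us-east-1")   # every service resolves the same endpoint
        return (name, "ok")
    except LookupError:
        return (name, "error")          # failure propagates to each dependent service

services = ["SQS", "Amazon Connect", "EC2 control plane"]
broken = make_resolver(healthy=False)
print([call_service(s, broken) for s in services])
```

With the resolver down, every dependent call errors together — the signature of a foundational-component outage rather than isolated service faults.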
T-Mobile US expanded its Advanced Network Solutions portfolio with Edge Control and T-Platform, aiming to deliver private network-like performance over its nationwide 5G-Advanced footprint while simplifying how enterprises deploy, govern, and scale edge workloads. Edge Control enables cellular traffic to exit locally and flow directly into an enterprise’s edge compute environment, rather than traversing centralized cores or the public internet. T-Platform is T-Mobile’s customer portal for managing business services, including Edge Control. Traditional MEC offers low-latency access to hyperscaler edge zones but often relies on internet or backhaul paths that add jitter and sovereignty concerns.
South Korea is funding a national AI stack to reduce dependence on foreign models, protect data, and tune AI to its language and industries. The government has committed ₩530 billion (about $390 million) to five companies building large-scale foundation models: LG AI Research, SK Telecom, Naver Cloud, NC AI, and Upstage. Progress will be reviewed every six months, with underperformers cut and resources concentrated on the strongest until two leaders remain. The policy goal is clear: build world-class, Korean-first AI capability that supports national security, economic competitiveness, and data sovereignty. For telecoms and enterprise IT, this is a shift from “consume global models” to “operate domestic AI platforms” integrated with local data, compliance, and services.

TeckNexus Newsletters


Tech News & Insight
Enterprises adopting private 5G, LTE, or CBRS networks need more than encryption to stay secure. This article explains the 4 pillars of private network security: core controls, device visibility, real-time threat detection, and orchestration. Learn how to protect SIM and device identities, isolate traffic, secure OT and IoT, and choose...

Sponsored by: OneLayer

Whitepaper
Private cellular networks are transforming industrial operations, but securing private 5G, LTE, and CBRS infrastructure requires more than legacy IT/OT tools. This whitepaper by TeckNexus and sponsored by OneLayer outlines a 4-pillar framework to protect critical systems, offering clear guidance for evaluating security vendors, deploying zero trust, and integrating IT,...
Whitepaper
Telecom networks are facing unprecedented complexity with 5G, IoT, and cloud services. Traditional service assurance methods are becoming obsolete, making AI-driven, real-time analytics essential for competitive advantage. This independent industry whitepaper explores how DPUs, GPUs, and Generative AI (GenAI) are enabling predictive automation, reducing operational costs, and improving service quality....
