
Sferical AI: Sweden’s Sovereign AI Supercomputer with Ericsson, Saab, SEB, and Wallenberg Investments


AstraZeneca, Ericsson, Saab, SEB, and Wallenberg Investments have launched Sferical AI to build and operate a sovereign AI supercomputer that anchors Sweden's next phase of industrial digitization.

Founders, Leadership, and Mandate


The consortium spans life sciences, telecoms, defense, and finance, signaling a cross-sector mandate: drive competitive advantage with shared, secure, high-end compute. The initiative follows a Swedish industry collaboration announced earlier this year involving NVIDIA leadership and aims to channel national strengths into AI at scale. Jenny Nordlöw has been appointed CEO, with Professor Anders Ynnerman as Executive Chairman, pairing commercial delivery with deep scientific stewardship.

NVIDIA DGX SuperPOD Deployment in Linköping

Sferical AI plans to deploy two NVIDIA DGX SuperPODs based on the latest DGX GB300 systems in Linköping. The installation will combine 1,152 tightly interconnected GPUs, designed for fast training and fine-tuning of large, complex models. Alongside the deployment, an NVIDIA AI Technology Centre in Sweden will co-develop applications with member companies and ramp industry skills through training programs, expanding benefits beyond the founding group.

Business, Compliance, and Competitive Rationale

AI leadership increasingly depends on proximity to compute, data, and talent, all under strong governance. Sovereign infrastructure addresses data residency, IP protection, and regulatory alignment, while reducing exposure to public cloud capacity swings. For Swedish and European firms navigating GDPR, NIS2, and sector-specific rules like DORA in finance, a trusted, high-performance platform can accelerate AI adoption without compromising compliance.

Strategic AI Impact for Telecom, 5G, and Critical Sectors

For telecom operators and adjacent industries, Sferical AI creates a national-scale platform to develop, test, and deploy AI where latency, safety, and data control are non-negotiable.

Priority AI Use Cases by Sector

Telecoms can leverage large-scale training for RAN energy optimization, traffic forecasting, self-optimizing networks, and service assurance, then distill models for real-time inference at the edge. Defense and aerospace can accelerate sensor fusion and autonomy simulation. Life sciences can scale multimodal discovery pipelines. Financial services can build risk, AML, and fraud models that stay within jurisdictional boundaries.
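
As a concrete illustration of the distillation step, the sketch below trains a small student network to mimic a larger traffic-forecasting teacher. The PyTorch models, window sizes, and synthetic data are illustrative assumptions, not details of any member deployment.

```python
# Minimal distillation sketch: a compact student learns to reproduce the
# forecasts of a larger, centrally trained teacher. All shapes and data
# here are placeholders for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical feature window: 96 time steps of per-cell load metrics.
X = torch.randn(512, 96)
teacher = nn.Sequential(nn.Linear(96, 1024), nn.ReLU(), nn.Linear(1024, 24))  # large, trained centrally
student = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Linear(64, 24))      # edge-sized

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

teacher.eval()
for epoch in range(5):
    with torch.no_grad():
        soft_targets = teacher(X)       # teacher's 24-step forecast
    pred = student(X)
    loss = loss_fn(pred, soft_targets)  # student mimics the teacher's outputs
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: distillation loss {loss.item():.4f}")
```

In practice the teacher would be a production-scale model and the targets would combine ground-truth labels with the teacher's outputs, but the loop structure stays the same.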

Sovereign Data Governance, Compliance, and Security

A sovereign facility enables privacy-preserving development on sensitive datasets, with clearer audit trails and policy enforcement. Multi-tenant governance will be critical: access controls, model isolation, and supply chain assurance should align with European cybersecurity frameworks and sector standards. Expect demand for confidential computing, dataset lineage tracking, and robust red-teaming for model safety.
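
One way to make lineage concrete is to record a content hash, owner, and declared purpose for every dataset a training run touches. The sketch below is a minimal, hand-rolled record; the field names, the hypothetical paths, and the storage format are assumptions, not a mandated standard.

```python
# Illustrative dataset lineage record for audit trails; the schema and
# hashing choice are assumptions, not a prescribed governance format.
import hashlib
import json
import datetime

def lineage_record(dataset_path: str, content: bytes, owner: str, purpose: str) -> dict:
    """Capture what data was used, by whom, and for what, with a content hash."""
    return {
        "dataset": dataset_path,
        "sha256": hashlib.sha256(content).hexdigest(),
        "owner": owner,
        "declared_purpose": purpose,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = lineage_record(
    "s3://tenant-a/ran-kpis/2025-08.parquet",   # hypothetical path
    b"...dataset bytes...",
    owner="tenant-a",
    purpose="RAN energy optimization model v3",
)
print(json.dumps(record, indent=2))
```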

Ecosystem Development and AI Skills Pipeline

The NVIDIA AI Technology Centre can catalyze co-innovation across academia and industry, complementing EuroHPC and other European initiatives. Skills programs, such as deep learning training, will help close gaps in applied MLOps and AI safety, while creating a local talent pipeline for edge-to-core deployments.

Architecture, Performance, and Capacity Planning

The technical profile of the Linköping installation suggests a focus on frontier model development with pragmatic pathways to enterprise deployment.

DGX SuperPOD Scale, Throughput, and Model Training

With 1,152 GPUs across two SuperPODs, Sferical AI will support training of large language and vision models, domain-specific foundation models, and high-throughput fine-tuning. The architecture is designed for rapid iteration on large batches, enabling shorter development cycles for industry-grade AI.
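
For teams preparing workloads, a generic starting point is data-parallel training launched across many GPUs. The sketch below uses PyTorch DistributedDataParallel with a placeholder model; the launch command, process layout, and model are assumptions and say nothing about Sferical AI's actual software stack.

```python
# Generic multi-GPU data-parallel training sketch, launched with torchrun,
# e.g.: torchrun --nproc_per_node=8 train.py
# The model and data are stand-ins; cluster specifics are assumptions.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                 # one process per GPU
    rank = dist.get_rank()
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)

    model = nn.Linear(4096, 4096).to(device)        # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=device)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                             # gradients all-reduced across GPUs
        opt.step()
        if rank == 0 and step % 5 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```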

High-Speed Networking, Storage, Power, and Cooling

Success depends on more than GPU counts. High-performance interconnects, scalable parallel storage, and resilient power and cooling will determine sustained throughput. Enterprises should plan for robust data pipelines into the facility and efficient distillation or quantization workflows to run derived models on edge GPUs or SmartNICs in 5G sites, factories, and vehicles.
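
As one example of a compression step, the sketch below applies PyTorch's post-training dynamic quantization to shrink a toy model's Linear layers to int8 for lighter inference; the model and deployment target are illustrative assumptions.

```python
# Post-training dynamic quantization sketch: same interface, smaller weights.
# The toy model stands in for a distilled, edge-bound network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 64)).eval()

# Quantize Linear layers to int8 weights for a reduced memory footprint.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(quantized(x).shape)  # identical call signature, lighter model
```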

Software Stack, MLOps, and SLAs

Enterprises should align toolchains around containerized training and inference, model registries, and reproducible pipelines. Benchmarking with MLPerf-like methodologies, disciplined model evaluation, and clear SLAs for multi-tenant scheduling will be essential to maximize ROI and ensure fair access across participants.
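
Reproducibility can be as simple as recording, for every run, the code revision, a configuration hash, and a data fingerprint. The sketch below is one hand-rolled way to build such a manifest; the field names and the hypothetical run_manifest helper are assumptions, not a prescribed toolchain.

```python
# Illustrative run manifest: hash the config, capture the code revision and a
# data fingerprint so a result can be traced and re-run. Fields are assumptions.
import hashlib
import json
import subprocess

def run_manifest(config: dict, data_fingerprint: str) -> dict:
    try:
        git_rev = subprocess.run(["git", "rev-parse", "HEAD"],
                                 capture_output=True, text=True).stdout.strip() or "unknown"
    except FileNotFoundError:
        git_rev = "unknown"
    config_hash = hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()
    return {
        "git_revision": git_rev,
        "config_sha256": config_hash,
        "data_fingerprint": data_fingerprint,
        "config": config,
    }

manifest = run_manifest({"model": "domain-llm-7b", "lr": 2e-4, "epochs": 3},
                        data_fingerprint="sha256:<dataset-hash>")
print(json.dumps(manifest, indent=2))
```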

Risks, Constraints, and Open Considerations

Governance and operating-model choices made now will shape adoption, economics, and ecosystem trust for years.

Fair Access, Allocation, and Neutral Governance

Balancing capacity among founding companies while enabling broader industry access is non-trivial. Transparent prioritization, neutral governance, and clear IP frameworks are needed to maintain credibility and avoid crowding out SMEs and startups.

Vendor Lock-In, Portability, and Multicloud Interop

The stack will be optimized for NVIDIA acceleration, raising questions on long-term portability. Enterprises should abstract training pipelines where possible, document dependencies, and plan for interoperability with hybrid and multicloud environments to reduce lock-in risk.
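
A small step toward portability is to resolve the accelerator at runtime rather than hard-coding CUDA calls throughout a pipeline. The sketch below shows one such abstraction in PyTorch; the fallback devices are assumptions about what a given build exposes.

```python
# Backend-agnostic device selection sketch: keep pipeline code independent of
# the accelerator so the same script runs on a laptop or a GPU cluster.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")   # e.g. local development on Apple silicon
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(4, 128, device=device)
print(model(x).shape, "on", device)
```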

Energy Efficiency, Sustainability, and TCO

Power density, cooling, and lifecycle emissions matter. Expect scrutiny of energy sourcing and efficiency metrics. Cost models should weigh shared sovereign capacity against public cloud options, accounting for data egress, compliance overheads, and time-to-results.

Next Steps for Enterprises

Executives should map near-term AI value to this sovereign capability while preparing their organizations for sustained, secure scaling.

Prioritize High-Impact, High-Compliance Workloads

Identify use cases constrained by data sensitivity or compute scarcity: RAN optimization, network-planning digital twins, predictive maintenance, pharmacovigilance, fraud detection. Define measurable outcomes and align budgets to multi-quarter training cycles.

Design Edge-to-Core AI Pipelines

Design pipelines where foundation or domain models train in Linköping, then compress for inference at the edge. Implement continuous retraining with feedback loops, robust observability, and drift management across 5G, enterprise, and industrial environments.
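
Drift management needs a concrete trigger. The sketch below compares a live feature distribution against the training baseline with a population stability index (PSI); the 0.2 retraining threshold and bin count are common rules of thumb, assumed here for illustration.

```python
# Simple drift check: population stability index (PSI) between the training
# baseline and live production data. Threshold and bins are illustrative.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    b_pct = np.clip(b_pct, 1e-6, None)   # avoid log(0)
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # training-time traffic features
live = rng.normal(0.3, 1.2, 10_000)       # shifted production traffic
score = psi(baseline, live)
status = "retrain" if score > 0.2 else "ok"   # 0.2 is a common rule of thumb
print(f"PSI = {score:.3f} -> {status}")
```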

Build Teams, Governance, and Training Programs

Stand up cross-functional squads for data engineering, MLOps, security, and compliance. Establish model governance, evaluation, and incident response processes. Engage early with Sferical AI to secure capacity windows, co-innovation opportunities at the NVIDIA AI Technology Centre, and targeted skills programs for your workforce.

