Arm opens Armv9 edge AI via Flexible Access


Armv9 edge AI enters Flexible Access for on-device development

Arm has expanded its Flexible Access licensing model to include its Armv9 edge AI platform, lowering the cost and friction for OEMs and startups to develop on-device AI at scale.

Armv9 in Flexible Access: Cortex-A320 + Ethos-U85 details

The company is bringing its newest Armv9 edge AI platform into Flexible Access, giving partners early design freedom and pay-when-you-ship economics. The platform combines the ultra‑efficient Arm Cortex‑A320 CPU with the Arm Ethos‑U85 NPU, enabling on‑device inference for models in the billion‑parameter range while maintaining tight power budgets. Security is a first‑class concern, with architectural protections such as Pointer Authentication, Branch Target Identification, and Memory Tagging to harden critical software at the edge.


Availability is staged: Cortex‑A320 will enter Flexible Access in November 2025, followed by Ethos‑U85 in early 2026. The move builds on a program that Arm says now counts more than 300 active members and roughly 400 tape‑outs since launch, with partners such as Raspberry Pi, Hailo, Weeteq, and SiMa.ai using Flexible Access to accelerate product cycles.

Why on-device AI at the edge matters now

AI is shifting from cloud-only to a hybrid model where inference increasingly executes on devices, gateways, and field systems. For telecom and enterprise buyers, on-device AI reduces backhaul, improves latency and availability, and keeps sensitive data local—benefits that align with 5G Advanced, private cellular, and regulatory pressure on data movement. By lowering entry costs and bundling the latest Armv9 blocks, Arm is pushing more of the AI stack into power‑constrained endpoints that sit closest to sensor data and user interaction.

Armv9 edge AI architecture: performance, efficiency, security

The Armv9 edge AI platform aims to deliver higher ML throughput per watt alongside built-in protections required for connected, safety-critical deployments.

CPU+NPU design optimized for ML inference at the edge

Cortex‑A320 targets compact, energy‑sensitive designs, bringing Armv9 instruction improvements and vector extensions that accelerate ML and signal‑processing workloads. Support for Scalable Vector Extension 2 (SVE2) enables efficient math, image, and audio pipelines on the CPU, while Ethos‑U85 handles the bulk tensor operations. This CPU+NPU split supports rich human‑machine interfaces—vision, voice, and gesture—while staying within the strict thermal and battery envelopes typical of cameras, wearables, smart home devices, robotic endpoints, and industrial nodes.
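A key reason vector extensions help ML on the CPU is the widen-then-accumulate pattern: narrow int8 operands are multiplied and summed into wide accumulators so nothing overflows, which is what instructions like SVE2's dot-product family do in hardware. The sketch below is a plain NumPy illustration of that arithmetic pattern, not actual SVE2 intrinsics:

```python
import numpy as np

def int8_dot(a: np.ndarray, b: np.ndarray) -> int:
    # Widen int8 operands to int32 before accumulating, mirroring how
    # vector dot-product instructions avoid overflow on narrow data
    return int(np.dot(a.astype(np.int32), b.astype(np.int32)))

a = np.array([100, -50, 27, 8], dtype=np.int8)
b = np.array([3, 2, -1, 4], dtype=np.int8)
print(int8_dot(a, b))  # 300 - 100 - 27 + 32 = 205
```

Doing this per-lane across a whole vector register, rather than one scalar at a time, is where the throughput-per-watt gain comes from.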

Ethos‑U85 extends Arm’s microNPU line with higher performance density for compact silicon footprints. Together with quantization, sparsity, and compiler optimizations, the stack is designed to support large parameter counts at the edge without relying on continuous cloud connectivity. Keeping inference local minimizes jitter and is well suited to field environments where links are intermittent or costly.
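To make the quantization point concrete, here is a minimal sketch of symmetric post-training int8 quantization, the simplest of the techniques alluded to above. The exact scheme any given toolchain uses (per-channel scales, asymmetric zero points, etc.) will differ; this only shows why int8 cuts weight storage to a quarter of float32 with a bounded round-off error:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: map the largest |weight| to 127
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(seed=0)
w = rng.normal(scale=0.1, size=4096).astype(np.float32)
q, scale = quantize_int8(w)

ratio = w.nbytes / q.nbytes                # 4.0: int8 is 1/4 of float32
max_err = float(np.abs(w - q.astype(np.float32) * scale).max())
# rounding error is bounded by half a quantization step
assert max_err <= scale / 2 + 1e-7
```

Sparsity and compiler scheduling stack further reductions on top of this, which is how billion-parameter-class models become plausible in edge memory budgets.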

Built-in Armv9 security for edge and IoT devices

Edge devices increasingly act as policy enforcement points, which raises the bar for memory safety, control‑flow integrity, and runtime isolation. Armv9 adds hardware capabilities to mitigate common exploit classes and contain faults. Features like Memory Tagging help detect use‑after‑free and buffer overflows; Pointer Authentication and Branch Target Identification constrain return‑oriented and jump‑oriented attacks. For telecom gateways, industrial controllers, and smart city cameras, these protections can reduce incident impact and aid compliance as software stacks expand to include AI runtimes, accelerators, and third‑party models.
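The mechanism behind Memory Tagging can be illustrated with a toy model: allocations and pointers each carry a small tag, every load checks that the tags still match, and freeing retags the memory so stale pointers fault. This Python sketch is purely conceptual — real MTE uses 4-bit tags on 16-byte granules, stores the tag in the unused upper bits of the pointer, and traps in hardware:

```python
import random

class TaggedHeap:
    """Toy model of memory tagging: allocations and pointers both carry
    a 4-bit tag; every load checks that the tags still match."""

    def __init__(self):
        self._tags = {}   # address -> current memory tag
        self._data = {}

    def alloc(self, addr, value):
        tag = random.randrange(16)
        self._tags[addr] = tag
        self._data[addr] = value
        return (addr, tag)               # "pointer" = address + tag bits

    def free(self, addr):
        # Retag on free so any stale pointer no longer matches
        self._tags[addr] = (self._tags[addr] + 1) % 16

    def load(self, ptr):
        addr, tag = ptr
        if self._tags.get(addr) != tag:
            raise RuntimeError("tag mismatch: likely use-after-free")
        return self._data[addr]

heap = TaggedHeap()
p = heap.alloc(0x1000, "session-key")
assert heap.load(p) == "session-key"     # tags match: load succeeds
heap.free(0x1000)
# heap.load(p) would now raise: the stale pointer's tag no longer matches
```

The same mismatch check also catches linear buffer overflows when adjacent allocations carry different tags, which is why MTE complements rather than replaces control-flow protections like Pointer Authentication.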

Flexible Access licensing: evaluate, iterate, pay on ship

Flexible Access lets teams evaluate multiple IP blocks, run RTL simulations, and iterate hardware‑software co-design before committing to a specific core license. Fees are due only for IP that ships in the final design, and qualified startups can access IP and tools at little to no upfront cost. This model shortens learning cycles, aligns spend with milestones, and de‑risks bets on emerging AI workloads. The program’s track record—hundreds of tape‑outs and a broad member base—signals market confidence and a robust support ecosystem.

Business impact for telecom, 5G, and enterprise IT

On-device AI built on Armv9 intersects directly with network economics, edge computing strategy, and product roadmaps across operators, OEMs, and industrial buyers.

Benefits for operators and network equipment vendors

AI at the endpoint lightens backhaul and compute loads at the mobile edge, which can improve SLA adherence for video analytics, anomaly detection, and customer experience functions. In 5G and private networks, Armv9‑based CPE, gateways, and small-cell controllers can classify traffic, run local security inference, or perform sensor fusion with predictable latency even under constrained uplinks. Hardened Armv9 security features also strengthen device attestation and lifecycle management, which are vital to zero‑trust architectures extending from core to RAN to edge.

Advantages for device makers and industrial IoT

Access to Cortex‑A320 and Ethos‑U85 via Flexible Access helps teams hit aggressive BOM and power targets while unlocking richer local AI features. Smart cameras can run higher‑fidelity detection on‑device, voice interfaces gain robustness in noisy settings, and robots can execute perception and control loops without cloud round‑trips. The licensing model encourages rapid A/B testing of CPU‑NPU configurations before committing to volume, which is useful when evaluating compression techniques and model architectures tailored to edge constraints.

Toolchains, runtimes, and OTA model management

Success hinges on software. Teams should validate toolchain support across common runtimes, compilers, and model conversion flows for the NPU and SVE2, including quantization and sparsity paths. Evaluate support in Linux and RTOS distributions, as well as integration with MLOps pipelines that manage model updates over-the-air. Given the staged availability—CPU first, NPU shortly after—plan silicon and software roadmaps to ensure feature continuity across product generations.
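The core gate in any OTA model pipeline is small but critical: never activate a model unless it is strictly newer and its payload hash matches the signed manifest. The sketch below shows that check; the manifest field names (`version`, `sha256`) are hypothetical placeholders for whatever your MLOps tooling actually emits, and a production flow would also verify a signature over the manifest itself:

```python
import hashlib

def accept_model_update(installed: dict, manifest: dict, payload: bytes) -> bool:
    # Accept only strictly newer versions whose payload hash matches the
    # manifest, so a replayed, truncated, or tampered download never activates.
    if manifest["version"] <= installed["version"]:
        return False
    return hashlib.sha256(payload).hexdigest() == manifest["sha256"]

payload = b"quantized-model-bytes"
manifest = {"version": 2, "sha256": hashlib.sha256(payload).hexdigest()}

print(accept_model_update({"version": 1}, manifest, payload))   # True
print(accept_model_update({"version": 1}, manifest, b"corrupt"))  # False
```

Pairing this with device attestation closes the loop: the fleet only reports models it can prove it verified.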

Next steps for CTOs and product leaders

Use the announcement to revisit your edge AI strategy, hardware roadmap, and security posture with concrete next steps.

Map models to PPA goals and prototype

Profile priority models—vision, NLP, multimodal—and map them to performance, power, and area goals for your form factors. Prototype on Cortex‑A320 reference platforms and Ethos toolchains to validate end‑to‑end latency, memory footprint, and thermal behavior under real workloads.
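A minimal latency harness is often the first prototyping artifact: warm up, time many iterations, and report percentiles rather than a single mean, since edge SLAs usually hinge on tail latency. This sketch uses a stand-in workload; in practice the lambda would be your end-to-end inference call on the target platform:

```python
import statistics
import time

def profile_latency(fn, warmup=10, iters=200):
    """Time per-call latency in milliseconds and report percentiles."""
    for _ in range(warmup):              # warm caches, allocators, clocks
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return {
        "p50_ms": samples[iters // 2],
        "p99_ms": samples[min(iters - 1, int(iters * 0.99))],
        "mean_ms": statistics.fmean(samples),
    }

# Stand-in workload; replace with the real model invocation under test
stats = profile_latency(lambda: sum(i * i for i in range(10_000)))
```

Running the same harness across candidate CPU‑NPU partitionings gives the apples-to-apples numbers needed to map models onto PPA targets.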

Integrate Armv9 security features early

Adopt Armv9 features like Memory Tagging and Pointer Authentication in your software baseline, and align with secure boot, attestation, and SBOM processes. For regulated sectors, ensure your design path supports certification frameworks commonly used for connected devices.

Align procurement to Arm Flexible Access

Exploit design‑time flexibility: evaluate multiple IP mixes, negotiate upgrade paths from Armv8 to Armv9, and use the pay‑on‑final‑design structure to gate investments. For startups and new business units, leverage the low‑upfront access to accelerate proof‑of‑concepts and de‑risk first silicon.

Bottom line: by opening its Armv9 edge AI platform through Flexible Access, Arm is compressing the time and cost to put capable, secure on‑device AI into market—precisely where telecom and enterprise strategies now need intelligence to live.
