Armv9 edge AI enters Flexible Access for on-device development
Arm has expanded its Flexible Access licensing model to include its Armv9 edge AI platform, lowering the cost and friction for OEMs and startups to develop on-device AI at scale.
Armv9 in Flexible Access: Cortex-A320 + Ethos-U85 details
The company is bringing its newest Armv9 edge AI platform into Flexible Access, giving partners early design freedom and pay-when-you-ship economics. The platform combines the ultra-efficient Arm Cortex-A320 CPU with the Arm Ethos-U85 NPU, enabling on-device inference for models with roughly billion-parameter complexity while maintaining tight power budgets. Security is a first-class feature set, with architectural protections such as Pointer Authentication, Branch Target Identification, and Memory Tagging to harden critical software at the edge.
Availability is staged: Cortex-A320 will enter Flexible Access in November 2025, followed by Ethos-U85 in early 2026. The move builds on a program that Arm says now counts 300+ active members and roughly 400 tape-outs since launch, with partners like Raspberry Pi, Hailo, Weeteq and SiMa.ai using Flexible Access to accelerate product cycles.
Why on-device AI at the edge matters now
AI is shifting from cloud-only to a hybrid model where inference increasingly executes on devices, gateways and field systems. For telecom and enterprise buyers, on-device AI reduces backhaul, improves latency and availability, and keeps sensitive data local, benefits that align with 5G Advanced, private cellular, and regulatory pressure on data movement. By lowering entry costs and bundling the latest Armv9 blocks, Arm is pushing more of the AI stack into power-constrained endpoints that sit closest to sensor data and user interaction.
Armv9 edge AI architecture: performance, efficiency, security
The Armv9 edge AI platform aims to deliver higher ML throughput per watt alongside built-in protections required for connected, safety-critical deployments.
CPU+NPU design optimized for ML inference at the edge
Cortex-A320 targets compact, energy-sensitive designs, bringing Armv9 instruction improvements and vector extensions to uplift ML and signal-processing workloads. Support for Scalable Vector Extension 2 (SVE2) enables efficient math, image, and audio pipelines on the CPU, which pairs with Ethos-U85 to handle bulk tensor operations. This CPU+NPU approach helps run rich human-machine interfaces (vision, voice, and gesture) while staying within strict thermal and battery envelopes typical of cameras, wearables, smart home devices, robotic endpoints, and industrial nodes.
Ethos-U85 extends Arm's microNPU line with higher performance density for compact silicon footprints. Together with quantization, sparsity, and compiler optimizations, the stack is designed to support large parameter counts at the edge without relying on continuous cloud connectivity. Keeping inference local minimizes jitter and is well suited to field environments where links are intermittent or costly.
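The quantization these stacks depend on can be sketched generically; the symmetric per-tensor int8 scheme below is a common illustration, not the specific scheme used by Arm's toolchain:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# A float32 weight tensor shrinks to a quarter of its size as int8,
# trading a small, bounded rounding error for footprint and bandwidth.
w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
assert err <= scale / 2 + 1e-6  # rounding error is at most half a step
```

Shrinking weights to 8 bits is what makes billion-parameter-class models plausible inside the memory and bandwidth budgets of edge silicon.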
Built-in Armv9 security for edge and IoT devices
Edge devices increasingly act as policy enforcement points, which raises the bar for memory safety, control-flow integrity, and runtime isolation. Armv9 adds hardware capabilities to mitigate common exploit classes and contain faults. Features like Memory Tagging help detect use-after-free and buffer overflows; Pointer Authentication and Branch Target Identification constrain return-oriented and jump-oriented attacks. For telecom gateways, industrial controllers, and smart city cameras, these protections can reduce incident impact and aid compliance as software stacks expand to include AI runtimes, accelerators, and third-party models.
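The tag-match rule behind Memory Tagging can be illustrated with a toy model; the real mechanism lives in AArch64 hardware and the ABI, and this Python sketch only shows the concept:

```python
import random

class TaggedHeap:
    """Toy model of memory tagging: each allocation gets a small random
    tag stored both in the returned 'pointer' and alongside the memory;
    every load checks that the two tags still agree."""
    def __init__(self):
        self.mem = {}        # address -> (tag, value)
        self.next_addr = 0

    def alloc(self, value):
        addr, tag = self.next_addr, random.randrange(16)
        self.mem[addr] = (tag, value)
        self.next_addr += 1
        return (addr, tag)   # the pointer carries its tag

    def free(self, ptr):
        addr, _ = ptr
        tag, value = self.mem[addr]
        # retag on free, so stale pointers no longer match
        self.mem[addr] = ((tag + 1) % 16, value)

    def load(self, ptr):
        addr, ptr_tag = ptr
        mem_tag, value = self.mem[addr]
        if ptr_tag != mem_tag:
            raise MemoryError("tag mismatch: use-after-free detected")
        return value

heap = TaggedHeap()
p = heap.alloc("model weights")
assert heap.load(p) == "model weights"
heap.free(p)
# A stale pointer now trips the tag check instead of silently reading memory.
```

In hardware the check is per-load and effectively free, which is why tagging is attractive for always-on edge devices that cannot afford heavyweight sanitizers.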
Flexible Access licensing: evaluate, iterate, pay on ship
Flexible Access lets teams evaluate multiple IP blocks, run RTL simulations, and iterate hardware-software co-design before committing to a specific core license. Fees are due only for IP that ships in the final design, and qualified startups can access IP and tools at little to no upfront cost. This model shortens learning cycles, aligns spend with milestones, and de-risks bets on emerging AI workloads. The program's track record (hundreds of tape-outs and a broad member base) signals market confidence and a robust support ecosystem.
Business impact for telecom, 5G, and enterprise IT
On-device AI built on Armv9 intersects directly with network economics, edge computing strategy, and product roadmaps across operators, OEMs, and industrial buyers.
Benefits for operators and network equipment vendors
AI at the endpoint lightens backhaul and compute loads at the mobile edge, which can improve SLA adherence for video analytics, anomaly detection, and customer experience functions. In 5G and private networks, Armv9-based CPE, gateways, and small-cell controllers can classify traffic, run local security inference, or perform sensor fusion with predictable latency even under constrained uplinks. Hardened Armv9 security features also strengthen device attestation and lifecycle management, which are vital to zero-trust architectures extending from core to RAN to edge.
Advantages for device makers and industrial IoT
Access to Cortex-A320 and Ethos-U85 via Flexible Access helps teams hit aggressive BOM and power targets while unlocking richer local AI features. Smart cameras can run higher-fidelity detection on-device, voice interfaces gain robustness in noisy settings, and robots can execute perception and control loops without cloud round-trips. The licensing model encourages rapid A/B testing of CPU-NPU configurations before committing to volume, which is useful when evaluating compression techniques and model architectures tailored to edge constraints.
Toolchains, runtimes, and OTA model management
Success hinges on software. Teams should validate toolchain support across common runtimes, compilers, and model conversion flows for the NPU and SVE2, including quantization and sparsity paths. Evaluate support in Linux and RTOS distributions, as well as integration with MLOps pipelines that manage model updates over-the-air. Given the staged availability (CPU first, NPU shortly after), plan silicon and software roadmaps to ensure feature continuity across product generations.
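One concrete validation step before an OTA rollout is to compare a converted model's outputs against the float reference on a calibration set and gate the release on an error budget. A minimal sketch, where the cosine-similarity metric and the 0.99 threshold are illustrative assumptions rather than any toolchain's defaults:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_release(float_outputs, quant_outputs, min_sim=0.99):
    """Pass only if every calibration sample stays within the error budget."""
    sims = [cosine_sim(f, q) for f, q in zip(float_outputs, quant_outputs)]
    return min(sims) >= min_sim, min(sims)

rng = np.random.default_rng(0)
ref = [rng.standard_normal(128) for _ in range(8)]
# Simulated quantized outputs: reference plus small rounding noise.
quant = [r + rng.standard_normal(128) * 1e-3 for r in ref]
ok, worst = gate_release(ref, quant)
assert ok  # small quantization noise stays inside the budget
```

Automating a gate like this in the CI path that pushes models over-the-air keeps a bad conversion from ever reaching fielded devices.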
Next steps for CTOs and product leaders
Use the announcement to revisit your edge AI strategy, hardware roadmap, and security posture with concrete next steps.
Map models to PPA goals and prototype
Profile priority models (vision, NLP, multimodal) and map them to performance, power, and area goals for your form factors. Prototype on Cortex-A320 reference platforms and Ethos toolchains to validate end-to-end latency, memory footprint, and thermal behavior under real workloads.
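A back-of-envelope mapping from model shape to a latency budget can be scripted before any silicon arrives. The layer shapes and the effective-throughput figure below are illustrative assumptions, not Ethos-U85 specifications:

```python
def conv_macs(h: int, w: int, c_in: int, c_out: int, k: int) -> int:
    """Multiply-accumulates for one stride-1 conv layer with same padding."""
    return h * w * c_in * c_out * k * k

def latency_ms(total_macs: int, effective_macs_per_sec: float) -> float:
    return total_macs / effective_macs_per_sec * 1e3

# Hypothetical small vision backbone: three 3x3 conv layers on a 96x96 input.
layers = [
    conv_macs(96, 96, 3, 16, 3),
    conv_macs(48, 48, 16, 32, 3),
    conv_macs(24, 24, 32, 64, 3),
]
total = sum(layers)
# Assume an effective 250 GMAC/s after utilization losses (illustrative).
budget = latency_ms(total, 250e9)
print(f"{total / 1e6:.1f} MMACs -> {budget:.3f} ms per frame")
```

Estimates like this are crude, but they surface which layers dominate the budget and whether a target frame rate is even plausible before committing to a configuration.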
Integrate Armv9 security features early
Adopt Armv9 features like Memory Tagging and Pointer Authentication in your software baseline, and align with secure boot, attestation, and SBOM processes. For regulated sectors, ensure your design path supports certification frameworks commonly used for connected devices.
Align procurement to Arm Flexible Access
Exploit design-time flexibility: evaluate multiple IP mixes, negotiate upgrade paths from Armv8 to Armv9, and use the pay-on-final-design structure to gate investments. For startups and new business units, leverage the low-upfront access to accelerate proofs of concept and de-risk first silicon.
Bottom line: by opening its Armv9 edge AI platform through Flexible Access, Arm is compressing the time and cost to put capable, secure on-device AI into market, precisely where telecom and enterprise strategies now need intelligence to live.