OpenAI’s ChatGPT Plus Introduces Advanced Voice Mode

OpenAI has launched an alpha version of Advanced Voice Mode for ChatGPT Plus users, available on the ChatGPT mobile app for iOS and Android. This feature offers real-time conversations with AI-generated voices, enhancing user interaction. Initially, it's accessible to a select group, with a broader rollout planned by fall.

OpenAI has initiated the alpha rollout of its new Advanced Voice Mode for a select group of ChatGPT Plus users, allowing them to engage in more natural conversations with the AI chatbot on the official ChatGPT mobile app for iOS and Android.

Limited Rollout to ChatGPT Plus Users


The company announced on X that the mode would be available to “a small group of ChatGPT Plus users,” adding that more users would be added on a rolling basis, with plans for all ChatGPT Plus subscribers to have access by fall. ChatGPT Plus, priced at $20 per month, provides enhanced access to OpenAI’s large language model (LLM)-powered chatbot and sits alongside the company’s other tiers: Free, Team, and Enterprise.

It remains unclear how OpenAI is selecting users for the initial access to Advanced Voice Mode. However, the company noted that “users in this alpha will receive an email with instructions and a message in their mobile app” for ChatGPT, so interested users should check their inboxes and app notifications.

Advanced Voice Mode Features

The Advanced Voice Mode, showcased at OpenAI’s Spring Update event in May 2024, enables real-time conversation with four AI-generated voices on ChatGPT. The chatbot can handle interruptions and detect, respond to, and convey different emotions in its utterances and intonations.

OpenAI demonstrated various potential use cases for this conversational feature, including acting as a tutoring aid, fashion adviser, and guide for the visually impaired, especially when combined with its Vision capabilities.

System Requirements and Usage

Advanced Voice Mode is currently available on the iOS and Android ChatGPT apps. To use this feature, Android users need app version 1.2024.206 or later, and iOS users need app version 1.2024.205 or later with iOS 16.4 or later.
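The minimum-version requirement above amounts to a simple numeric comparison on dotted version strings. As a minimal illustrative sketch (the function names here are ours, not part of any OpenAI SDK):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Split a dotted version string like '1.2024.206' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    """True if the installed app version is at or above the required minimum."""
    return parse_version(installed) >= parse_version(minimum)

# Minimum app versions reported for Advanced Voice Mode, per platform.
MIN_VERSION = {"android": "1.2024.206", "ios": "1.2024.205"}

print(meets_minimum("1.2024.210", MIN_VERSION["android"]))  # True
print(meets_minimum("1.2024.199", MIN_VERSION["android"]))  # False
```

Comparing integer tuples rather than raw strings avoids the classic pitfall where `"1.2024.99" > "1.2024.206"` lexicographically.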

To start a conversation, users should select the Voice icon at the bottom-right of the screen. During the conversation, users can mute or unmute the microphone by selecting the microphone icon at the bottom-left of the screen and end the conversation by pressing the red icon at the bottom-right. Users need to provide the ChatGPT app with microphone permission to use this feature.

Usage Limits and Current Constraints

Advanced Voice Mode is currently in a limited alpha and may make mistakes. Usage of Advanced Voice Mode (audio inputs and outputs) is limited daily, with precise limits subject to change. The ChatGPT app will issue a warning when three minutes of audio usage remain. Once the limit is reached, the conversation will end, and users will be invited to switch to the standard Voice Mode.

Advanced Voice Mode cannot create or access previous memories or custom instructions. Conversations started in this mode can be resumed in Advanced Voice, text, or standard Voice. However, because memories and custom instructions are not supported, conversations started in text or standard Voice cannot be resumed in Advanced Voice Mode.

How to Minimize Conversation Interruptions

To minimize interruptions during conversations in Advanced Voice Mode, OpenAI recommends using headphones. iPhone users can enable Voice Isolation by opening Control Center, selecting Mic Mode, and switching to Voice Isolation. If issues persist, restarting the app, increasing the assistant’s volume, or moving to a quieter environment may help. The feature is not optimized for use with in-car Bluetooth or speakerphone.

Data Usage and Privacy Considerations

During the alpha phase, audio from Advanced Voice Mode conversations will be used to train OpenAI’s models if users have shared their audio. Users can opt out by disabling “Improve voice for everyone” in their Data Controls settings. If this setting is not visible, it means the user hasn’t shared their audio, and it will not be used for training.

With Standard Voice Mode, if users share their audio, OpenAI will store audio from voice chats rather than deleting clips after transcription. Efforts will be made to reduce personal information in the audio used for training, and the team may review shared audio.

No Support for GPTs, Music, and Video

Advanced Voice Mode is not yet available for use with GPTs and cannot generate musical content due to protections for creators’ rights. Video and screen-sharing support are also not part of the current alpha but will be available in future updates.

How Advanced Voice Mode Stands Out

The release of ChatGPT Advanced Voice Mode differentiates OpenAI from competitors such as Meta, with its new Llama models, and Anthropic’s Claude, and puts pressure on emotive voice-focused AI startups like Hume. In recent months, OpenAI has released numerous papers on safety and AI model alignment, following the disbanding of its superalignment team and criticism from former and current employees that the company prioritizes new products over safety.

Future Availability

OpenAI plans for all ChatGPT Plus users to have access to Advanced Voice Mode by fall, contingent on meeting safety and reliability standards. The company is also working on rolling out the video and screen-sharing capabilities, which were demoed separately, and will keep users updated on the timeline.

Clearly, the cautious rollout of Advanced Voice Mode aims to address these criticisms and reassure users, regulators, and lawmakers that OpenAI is committed to prioritizing safety alongside innovation and profitability.

