OpenAI’s ChatGPT Plus Introduces Advanced Voice Mode

OpenAI has launched an alpha version of Advanced Voice Mode for ChatGPT Plus users, available on the ChatGPT mobile app for iOS and Android. This feature offers real-time conversations with AI-generated voices, enhancing user interaction. Initially, it's accessible to a select group, with a broader rollout planned by fall.

OpenAI has initiated the alpha rollout of its new Advanced Voice Mode for a select group of ChatGPT Plus users, allowing them to engage in more natural conversations with the AI chatbot on the official ChatGPT mobile app for iOS and Android.

Limited Rollout to ChatGPT Plus Users

The company announced on X that the mode would be available to “a small group of ChatGPT Plus users,” adding that more users would gain access on a rolling basis, with all ChatGPT Plus subscribers expected to have it by fall. ChatGPT Plus, priced at $20 per month, is one of OpenAI’s subscription tiers, alongside Free, Team, and Enterprise, and provides enhanced access to the company’s large language model (LLM)-powered chatbot.

It remains unclear how OpenAI is selecting users for the initial access to Advanced Voice Mode. However, the company noted that “users in this alpha will receive an email with instructions and a message in their mobile app” for ChatGPT, so interested users should check their inboxes and app notifications.

Advanced Voice Mode Features

The Advanced Voice Mode, showcased at OpenAI’s Spring Update event in May 2024, enables real-time conversation with four AI-generated voices on ChatGPT. The chatbot can handle interruptions and can detect, respond to, and convey different emotions through its wording and intonation.

OpenAI demonstrated various potential use cases for this conversational feature, including acting as a tutoring aid, fashion adviser, and guide for the visually impaired, especially when combined with its Vision capabilities.

System Requirements and Usage

Advanced Voice Mode is currently available on the iOS and Android ChatGPT apps. To use this feature, Android users need app version 1.2024.206 or later, and iOS users need app version 1.2024.205 or later with iOS 16.4 or later.
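
As a rough sketch of how such a version gate might work, the snippet below compares dotted version strings against the documented minimums. Only the minimum version numbers come from OpenAI’s stated requirements; the function names and gating logic are illustrative assumptions, not OpenAI’s actual code.

```python
# Hypothetical version-gate sketch; only the minimum versions below are
# documented requirements, everything else is illustrative.

MIN_VERSIONS = {
    "android": "1.2024.206",
    "ios": "1.2024.205",
}

def parse_version(version: str) -> tuple[int, ...]:
    # "1.2024.206" -> (1, 2024, 206); Python compares tuples element-wise
    return tuple(int(part) for part in version.split("."))

def supports_advanced_voice(platform: str, app_version: str) -> bool:
    # True when the installed app meets or exceeds the documented minimum
    return parse_version(app_version) >= parse_version(MIN_VERSIONS[platform])

print(supports_advanced_voice("android", "1.2024.210"))  # True
print(supports_advanced_voice("ios", "1.2024.199"))      # False
```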

To start a conversation, users should select the Voice icon at the bottom-right of the screen. During the conversation, users can mute or unmute the microphone by selecting the microphone icon at the bottom-left of the screen and end the conversation by pressing the red icon at the bottom-right. Users need to provide the ChatGPT app with microphone permission to use this feature.

Usage Limits and Current Constraints

Advanced Voice Mode is currently in a limited alpha and may make mistakes. Usage of Advanced Voice Mode (audio inputs and outputs) is limited daily, with the precise limits subject to change. The ChatGPT app will issue a warning when three minutes of audio usage remain; once the limit is reached, the conversation will end, and users will be invited to switch to the standard voice mode.
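
A minimal sketch of that client-side behavior, assuming a hypothetical daily cap and tracker class (OpenAI has not published the exact limits), might look like this:

```python
# Hypothetical sketch of the behavior described above: warn when three
# minutes of daily audio remain, then end the conversation. The cap value,
# class design, and state names are assumptions for illustration.

WARNING_THRESHOLD_SECONDS = 3 * 60  # warning fires with three minutes left

class VoiceUsageTracker:
    def __init__(self, daily_limit_seconds: int):
        self.daily_limit = daily_limit_seconds
        self.used = 0
        self.warned = False

    def record_audio(self, seconds: int) -> str:
        """Track audio usage and report the resulting state."""
        self.used += seconds
        remaining = self.daily_limit - self.used
        if remaining <= 0:
            return "limit_reached"  # app ends the call, offers standard voice
        if remaining <= WARNING_THRESHOLD_SECONDS and not self.warned:
            self.warned = True
            return "warn_three_minutes"
        return "ok"

tracker = VoiceUsageTracker(daily_limit_seconds=15 * 60)  # assumed 15-min cap
print(tracker.record_audio(11 * 60))  # "ok"
print(tracker.record_audio(90))       # "warn_three_minutes"
print(tracker.record_audio(200))      # "limit_reached"
```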

Advanced Voice Mode cannot create or access memories and does not support custom instructions. Conversations started in this mode can be resumed in Advanced Voice, text, or standard voice. However, because of the missing memory and custom-instruction support, conversations held in text or standard voice cannot be resumed in Advanced Voice Mode.
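
Those resume rules reduce to a one-directional constraint, captured in the illustrative sketch below; the mode labels and function are hypothetical, not part of any ChatGPT API.

```python
# Illustrative encoding of the resume rules described above; the labels and
# function are hypothetical, not part of the ChatGPT app.

ADVANCED, STANDARD, TEXT = "advanced_voice", "standard_voice", "text"

def can_resume(started_in: str, resume_in: str) -> bool:
    # Only conversations that began in Advanced Voice may continue there,
    # because Advanced Voice lacks memory and custom-instruction support.
    if resume_in == ADVANCED:
        return started_in == ADVANCED
    return True  # text and standard voice can pick up any conversation

assert can_resume(ADVANCED, TEXT)          # advanced -> text: allowed
assert can_resume(ADVANCED, STANDARD)      # advanced -> standard: allowed
assert not can_resume(TEXT, ADVANCED)      # text -> advanced: blocked
assert not can_resume(STANDARD, ADVANCED)  # standard -> advanced: blocked
```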

How to Minimize Conversation Interruptions

To minimize interruptions during conversations in Advanced Voice Mode, OpenAI recommends using headphones. iPhone users can enable Voice Isolation mic mode by opening Control Center, selecting Mic Mode, and switching to Voice Isolation. If issues persist, restarting the app, increasing the assistant’s volume, or moving to a quieter environment may help. The feature is not optimized for use with in-car Bluetooth or speakerphone.

Data Usage and Privacy Considerations

During the alpha phase, audio from Advanced Voice Mode conversations will be used to train OpenAI’s models if users have shared their audio. Users can opt out by disabling “Improve voice for everyone” in their Data Controls settings. If this setting is not visible, the user hasn’t shared their audio, and it will not be used for training.
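
The opt-out logic amounts to a simple conjunction, sketched below with hypothetical names; neither the function nor its flags correspond to a real OpenAI API.

```python
# Minimal sketch of the opt-out logic described above; the function and flag
# names are hypothetical, not part of any OpenAI API.

def audio_used_for_training(has_shared_audio: bool, improve_voice_enabled: bool) -> bool:
    # Audio feeds training only when the user shared it and left the
    # "Improve voice for everyone" toggle enabled in Data Controls.
    return has_shared_audio and improve_voice_enabled

# A user who never shared audio is excluded regardless of the toggle,
# which is why the setting is hidden for them.
assert audio_used_for_training(False, True) is False
assert audio_used_for_training(True, False) is False
assert audio_used_for_training(True, True) is True
```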

With standard voice mode, if users share their audio, OpenAI will store audio from voice chats rather than deleting clips after transcription. The company says it will work to reduce the amount of personal information in audio used for training, and its team may review shared audio.

No Support for GPTs, Music, and Video

Advanced Voice Mode is not yet available for use with GPTs and cannot generate musical content due to protections for creators’ rights. Video and screen-sharing support are also not part of the current alpha but will be available in future updates.

How Advanced Voice Mode Stands Out

The release of ChatGPT Advanced Voice Mode differentiates OpenAI from competitors such as Meta, with its new Llama models, and Anthropic, with Claude, and puts pressure on emotive voice-focused AI startups like Hume. In recent months, OpenAI has released numerous papers on safety and AI model alignment, following the disbanding of its superalignment team and criticism from former and current employees that it prioritizes new products over safety.

Future Availability

OpenAI plans for all ChatGPT Plus users to have access to Advanced Voice Mode by fall, contingent on meeting safety and reliability standards. The company is also working on rolling out the new video and screen-sharing capabilities, which were demoed separately, and will keep users updated on the timeline.

The cautious rollout of Advanced Voice Mode appears designed to address these criticisms and reassure users, regulators, and lawmakers that OpenAI is committed to prioritizing safety alongside innovation and profitability.

