FCC Rules Aim to Make AI-Generated Calls and Texts Transparent

The FCC is proposing new rules that require companies to disclose when AI is used to generate calls or texts. This move aims to protect consumers from the potential misuse of AI, such as fraudulent communications. The rules would mandate clear disclosure at the start of AI-generated calls, empowering consumers to make informed decisions. This proposal follows a significant enforcement action against AI-driven scams and is part of a broader effort to enhance transparency and security in telecommunications. The FCC is also encouraging innovation while ensuring consumer protection, with the public invited to comment on the proposed regulations.

FCC Aims to Protect Consumers by Requiring AI Disclosure in Telecommunications

The Federal Communications Commission (FCC) is ramping up its efforts to regulate the use of artificial intelligence (AI) in telecommunications. As AI technology increasingly integrates into various communication channels, the FCC has proposed new rules that would require companies to disclose when they use AI to generate phone calls or texts. This move is part of a broader initiative to protect consumers from potential misuse of AI in the form of fraudulent or misleading communications.

FCC Demands AI Disclosure for Consumer Protection

In a strong push for transparency, FCC Chair Jessica Rosenworcel stated, “Before any one of us gives our consent for calls from companies and campaigns, they need to tell us if they are using this technology.” This directive underscores the FCC’s commitment to ensuring that consumers are fully informed about the nature of the communications they receive.


The proposed rules would also mandate that any caller using AI-generated voices must clearly disclose this information at the start of a call. This requirement is designed to give consumers the necessary information to make informed decisions about their engagement with such communications.

Context: The FCC’s Battle Against Misuse of AI-Generated Calls and Texts

The FCC’s proposal comes in the wake of a significant enforcement action earlier this year. The agency imposed a $6 million fine on Democratic consultant Steve Kramer for allegedly using an AI-generated deepfake of President Joe Biden’s voice during the New Hampshire presidential primary. The incident highlighted the potential dangers of AI in political and commercial communications and prompted the FCC to take a stronger stance against AI-generated robocalls. The agency cited the Telephone Consumer Protection Act of 1991, a law originally designed to protect consumers from prerecorded automated calls, as the basis for declaring such AI-generated communications illegal.

Strengthening Regulations to Combat AI-Generated Robocalls

The FCC’s proposed rules represent an expansion of its ongoing efforts to combat fraudulent and misleading robocalls. In late June, Chair Rosenworcel sent letters to the CEOs of nine major telecom providers, seeking detailed information on the steps they were taking to prevent the proliferation of fraudulent AI-driven robocalls. These inquiries are part of a broader strategy to hold telecom companies accountable for the security of their networks and the integrity of the communications they facilitate.

The new rules would require explicit disclosures about the use of AI in calls and texts, which the FCC believes will empower consumers to better identify and avoid communications that carry a higher risk of fraud or scams. The proposed definition of an AI-generated call is broad, encompassing any communication that uses technology or tools to create artificial or prerecorded voices, or to generate texts using computational methods such as machine learning, predictive algorithms, or large language models. This comprehensive definition ensures that the rules will cover a wide range of AI applications in telecommunications.
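To make the disclosure requirement concrete, here is a minimal, hypothetical sketch of how an outbound-messaging pipeline might tag content that falls under the proposed broad definition and prepend a plain-language disclosure. The class names, the list of covered generation methods, and the disclosure wording are illustrative assumptions, not part of the FCC’s proposal.

```python
# Hypothetical sketch: flagging outbound messages that would fall under the
# proposed broad definition of AI-generated content and prepending a
# disclosure. All names and the disclosure wording are illustrative only.
from dataclasses import dataclass

# Generation methods assumed to be covered by the proposed definition.
AI_METHODS = {
    "machine_learning",
    "predictive_algorithm",
    "large_language_model",
    "synthetic_voice",
}

@dataclass
class OutboundMessage:
    recipient: str
    body: str
    generation_method: str  # e.g. "human" or "large_language_model"

def requires_ai_disclosure(msg: OutboundMessage) -> bool:
    """Return True if the message was produced with a covered AI technique."""
    return msg.generation_method in AI_METHODS

def apply_disclosure(msg: OutboundMessage) -> OutboundMessage:
    """Prepend a plain-language AI disclosure when one is required."""
    if requires_ai_disclosure(msg):
        disclosure = "This message was generated using artificial intelligence. "
        return OutboundMessage(msg.recipient, disclosure + msg.body, msg.generation_method)
    return msg

if __name__ == "__main__":
    draft = OutboundMessage("+15551234567", "Your appointment is confirmed for 3 PM.", "large_language_model")
    print(apply_disclosure(draft).body)
```

In this sketch the disclosure is attached to the message body itself; for voice calls, an equivalent system would play the disclosure at the start of the call, as the proposed rules describe.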

Navigating AI Regulation: Consumer Safety vs. Innovation

While the primary goal of the proposed rules is to protect consumers, the FCC is also mindful of the positive uses of AI, particularly in helping individuals with disabilities to communicate more effectively. The agency is committed to ensuring that any new regulations do not inadvertently hinder these beneficial applications. To this end, the FCC is seeking public input on the proposed rules, allowing stakeholders to voice their concerns and suggestions before a final decision is made.

The proposal has garnered support from other members of the FCC, though there is some caution, particularly from Republican commissioners. Commissioner Brendan Carr has expressed concerns about the potential for over-regulation, warning that excessive constraints on AI at this early stage could stifle innovation. Similarly, Commissioner Nathan Simington has raised alarms about the possibility of the FCC endorsing widespread third-party monitoring of phone calls under the guise of safety, arguing that such an approach would be “beyond the pale.”

AI Tools Leading the Fight Against Scam Calls and Texts

As part of its broader effort to protect consumers, the FCC is also highlighting several emerging technologies designed to detect and prevent scam calls. Google, for example, is developing scam detection powered by its Gemini Nano model, which runs directly on the smartphone without an internet connection. This local processing could offer a robust defense against AI-generated scam calls while preserving user privacy and security. Meanwhile, Microsoft offers Azure Operator Call Protection, a solution designed specifically for telecom operators that could provide additional layers of protection against fraudulent communications.
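As a rough illustration of the local-processing idea, the sketch below screens a call transcript or text entirely on the device using a simple keyword heuristic and no network calls. It is not the Gemini Nano or Azure Operator Call Protection API; every function name, pattern, and threshold in it is an assumption made purely for illustration.

```python
# Illustrative sketch of on-device scam screening: a lightweight pattern
# heuristic that runs locally with no network calls. This is NOT the Gemini
# Nano or Azure Operator Call Protection API; it only demonstrates the
# local-processing concept with hypothetical names and patterns.
import re

# Phrases commonly associated with scam calls and texts (illustrative list).
SUSPICIOUS_PATTERNS = [
    r"gift card",
    r"wire transfer",
    r"act now",
    r"verify your (account|identity)",
    r"social security number",
]

def scam_risk_score(transcript: str) -> float:
    """Return a 0.0-1.0 risk score based on how many suspicious phrases appear."""
    text = transcript.lower()
    hits = sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))
    return min(hits / len(SUSPICIOUS_PATTERNS), 1.0)

def should_warn_user(transcript: str, threshold: float = 0.4) -> bool:
    """Flag the call or text for an on-screen warning if the risk score crosses a threshold."""
    return scam_risk_score(transcript) >= threshold

if __name__ == "__main__":
    sample = "Act now and send a gift card to verify your account."
    print(should_warn_user(sample))  # True: several suspicious phrases detected
```

A production system would replace the keyword list with an on-device model, but the key point the paragraph makes holds either way: the transcript never leaves the phone.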

These technological advancements reflect the industry’s recognition of the growing threat posed by AI-generated scams and the need for innovative solutions to combat them. By promoting the development and deployment of such technologies, the FCC is signaling its commitment to staying ahead of potential threats while also encouraging responsible innovation in the telecommunications sector.

Shaping AI Regulations: FCC Seeks Public Input

The FCC’s proposal is now open for public comment, allowing stakeholders from across the industry and the general public to weigh in on the proposed rules. This period of consultation is crucial for refining the regulations and ensuring that they strike the right balance between protecting consumers and fostering innovation. The agency’s final decision will likely be influenced by the feedback received during this comment period, as well as by ongoing developments in AI and telecommunications.

Chair Rosenworcel’s proposal has set the stage for a significant shift in how AI is regulated in the telecommunications industry. As AI continues to evolve and play an increasingly prominent role in communication, the FCC’s efforts to ensure transparency and protect consumers will be critical in shaping the future of the industry. By requiring companies to disclose their use of AI, the FCC aims to create a more transparent and trustworthy communication environment, where consumers can feel confident that they are fully informed about the nature of the communications they receive.

