Is AI free to say whatever it wants?

Large language models are beginning to ‘express opinions’ on controversial topics. But do they have the right to free speech? What happens if an AI defames someone? Find out in this article, which looks at how GenAI models, and in particular SCOTi, answered some controversial questions.

Recently, SCOTi answered the following controversial questions:

 


If large language models (LLMs) like SCOTi are beginning to ‘express opinions’ on such controversial topics, then does SCOTi have the right to free speech? Who is responsible for SCOTi’s opinions, and what right(s) does SCOTi have to express them? With the first libel case against an AI company (Mark Walters v. OpenAI) now concerning many, we will explore to what extent you may be liable for your AI’s speech and how this case is laying the foundations of AI speech responsibility.

The First Amendment (which protects the right to free speech) does not reset itself after each technological advance. Therefore, just as individuals have the right to publish their ideas, so too do they have the right to publish computer code. In the US, the case of Bernstein v. Department of Justice established that computer code is considered speech and is therefore protected by free speech rights. The question becomes more complicated when we consider whether computer code that becomes an AI has its own right to speak: in other words, does the AI itself have the right to freedom of speech?

The answer is that we don’t officially know. James B. Garvey presents a strong case for why AI should, and likely will, be granted the right to free speech. According to Garvey, the Supreme Court’s extension of free speech rights to non-human actors in Citizens United v. FEC provides a compelling framework for granting free speech rights to AI. While the principle of speaker equivalence may not require the same protection for every type of speaker, it does suggest that novel speakers should have the same standard analytical framework applied to them. Furthermore, the Court has stated that it would err on the side of overprotection when a claim for free speech involves novel technology. These factors all indicate that there is a high likelihood of a future case determining that AI does have the right to free speech.

Yet the reality is that all we can do for now is hypothesize. There are other scholars, like Professor Wu, who don’t believe AI would be given the right to free speech, as it lacks certain qualities that human speakers have. Specifically, Wu argues that AI acts either as a communicative tool or as a conduit for speech. While Garvey rejects this argument on the basis that advances in AI technology mean AIs will soon meet these standards for speech, for now all we can really do is speculate.

This issue is becoming more and more pertinent, particularly as GPT models begin to produce defamatory or controversial messages and images. A look at some of the most recent headlines makes the issue obvious:

“A chatbot that lets you talk with Jesus and Hitler is the latest controversy in the AI gold rush”

“Google Chatbot’s A.I. Images Put People of Color in Nazi-Era Uniforms”

“NCAA athlete claims she was scolded by AI over message about women’s sports”

If AI has the right to free speech, then surely the few exceptions to this right should also apply to an AI. In the US, categories of speech which are either not protected or given lesser protection include: incitement, defamation, fraud, obscenity, child pornography, fighting words, and threats. Just as defamatory statements are actionable as a tort when published through more traditional media like television or newspapers, so too should they be impermissible when produced by an AI.

If we decide to hold AI to the same standards as us humans, then the question becomes who is responsible for breaches of these standards? Who is liable for defamatory material produced by an AI? The company hosting the AI? The user of the AI? What degree of intention is required to impose liability when an AI program lacks human intention?

The first case of libel against an AI company has been filed in the US by a man named Mark Walters against OpenAI LLC (the company responsible for ChatGPT). ChatGPT hallucinated (in other words, fabricated) information about Mark Walters that was libelous, harmful to his reputation, and in no way based on any real information. This case is extraordinary as it is the first of its kind and might shed some light on whether AIs are liable, through their company, for any of the information they publish or provide on the web.

The outcome is bound to have widespread effects on legal issues related to AI more generally, such as those surrounding copyright law, which we addressed recently in one of our blogs on the legal ownership of content produced by an AI. For the moment, all we can do is wait for the courts and the legal process to provide more certain answers to the questions we have considered in this blog. In the meantime, companies and organizations should take note of the Mark Walters case and consider how they might be responsible for information published by their AIs.

AI might be given the right to free speech, but with it may come the responsibility to respect its exceptions.

