
Is AI free to say whatever it wants?

Large Language Models are beginning to ‘express opinions’ on controversial topics. But do they have the right to free speech? And what happens if an AI defames someone? This article looks at how GenAI models, and in particular SCOTi, answered some controversial questions.

Recently, SCOTi answered a number of controversial questions.

If large language models (LLMs) like SCOTi are beginning to ‘express opinions’ on such controversial topics, does SCOTi have the right to free speech? Who is responsible for SCOTi’s opinions, and what right does SCOTi have to express them? With the first libel case against an AI company (Walters v. OpenAI) already filed and concerning many, we will explore to what extent you may be liable for your AI’s speech and how this case is laying the foundations of AI speech responsibility.

The First Amendment (which protects the right to free speech) does not reset itself after each technological advance. Just as individuals have the right to publish their ideas, so too do they have the right to publish computer code. In the US, Bernstein v. Department of Justice established that computer code is considered speech and is therefore protected by free speech rights. The question becomes more complicated when we ask whether computer code that becomes an AI has its own right to speak. In other words, does the AI itself have the right to freedom of speech?

The answer is that we don’t officially know. James B. Garvey presents a strong case for why AI should, and likely will, be granted the right to free speech. According to Garvey, the Supreme Court’s extension of free speech rights to non-human actors in Citizens United v. FEC provides a compelling framework for granting free speech rights to AI. While the principle of speaker equivalence may not require the same protection for every type of speaker, it does suggest that novel speakers should be analyzed under the same standard framework. Furthermore, the court has stated that it would err on the side of overprotection when a free speech claim involves novel technology. Together, these factors indicate a high likelihood that a future case will determine that AI does have the right to free speech.

Yet, for now, all we can do is hypothesize. Other scholars, like Professor Tim Wu, do not believe AI would be given the right to free speech, as it lacks certain qualities that human speakers have. Specifically, Wu argues that AI acts either as a communicative tool or as a conduit for speech, not as a speaker in its own right. Garvey rejects this argument on the basis that advances in AI technology mean AIs will soon meet these standards for speech, but until a court rules, the question remains open.

This issue is becoming increasingly pertinent as GPT models begin to produce defamatory or controversial messages and images. A look at some recent headlines makes the problem obvious:

“A chatbot that lets you talk with Jesus and Hitler is the latest controversy in the AI gold rush”

“Google Chatbot’s A.I. Images Put People of Color in Nazi-Era Uniforms”

“NCAA athlete claims she was scolded by AI over message about women’s sports”

If AI has the right to free speech, then surely the few exceptions to this right should also apply to AI. In the US, categories of speech that are either unprotected or given lesser protection include incitement, defamation, fraud, obscenity, child pornography, fighting words, and threats. Just as defamatory statements are actionable torts when made through traditional media like television or newspapers, so too should they be impermissible when produced by an AI.

If we decide to hold AI to the same standards as humans, then the question becomes: who is responsible for breaches of those standards? Who is liable for defamatory material produced by an AI? The company hosting the AI? The user of the AI? And what degree of intention is required to impose liability when an AI program lacks human intention?

The first such libel case has been filed in the US by a man named Mark Walters against OpenAI LLC, the company responsible for ChatGPT. ChatGPT hallucinated (in other words, fabricated) information about Mark Walters that was defamatory and harmful to his reputation and was in no way based on any real information. The case is extraordinary because it is the first of its kind, and it may shed some light on whether AIs are liable, through their companies, for the information they publish or provide on the web.

The outcome is bound to have widespread effects on legal issues related to AI more generally, such as the questions surrounding copyright law that we addressed recently in a blog on the legal ownership of content produced by an AI. For the moment, all we can do is wait for courts and the legal process to provide some definitive answers to the questions considered here. In the meantime, companies and organizations should take note of the Walters case and consider how they might be held responsible for information published by their AIs.

AI might be given the right to free speech, but with it may come the responsibility to respect its exceptions.

