Is AI free to say whatever it wants?

Large Language Models are beginning to ‘express opinions’ on controversial topics. But do they have the right to free speech? And what happens if an AI defames someone? Find out in this article, which looks at how GenAI models, and SCOTi in particular, answered some controversial questions.

Recently, SCOTi answered a series of controversial questions.
If large language models (LLMs) like SCOTi are beginning to ‘express opinions’ on such controversial topics, does SCOTi have the right to free speech? Who is responsible for SCOTi’s opinions, and what rights does SCOTi have to express them? With the first libel case filed against an AI company (Mark Walters v. OpenAI) concerning many, we explore to what extent you may be liable for your AI’s speech and how this case is laying the foundations of AI speech responsibility.

The First Amendment (which protects the right to free speech) does not reset itself after each technological advance. Just as individuals have the right to publish their ideas, so too do they have the right to publish computer code. In the US, the case of Bernstein v. Department of Justice established that computer code is considered speech and is therefore protected by free speech rights. The question becomes more complicated when we ask whether computer code that becomes an AI has its own right to speak. In other words, does the AI itself have the right to freedom of speech?

The answer is that we don’t officially know. James B. Garvey presents a strong case for why AI should, and likely will, be granted the right to free speech. According to Garvey, the Supreme Court’s extension of free speech rights to non-human actors in Citizens United v. FEC provides a compelling framework for extending those rights to AI. While the principle of speaker equivalence may not require the same protection for every type of speaker, it does suggest that novel speakers should be analyzed under the same standard framework. Furthermore, the Court has stated that it would err on the side of overprotection when a free speech claim involves novel technology. Together, these factors suggest a high likelihood that a future case will determine that AI does have the right to free speech.

Yet the reality is that, for now, all we can do is hypothesize. Other scholars, like Professor Wu, don’t believe AI would be given the right to free speech because it lacks certain qualities that human speakers have. Specifically, Wu argues that AI acts either as a communicative tool or as a conduit for speech. Garvey rejects this argument on the basis that advances in AI technology mean AIs will soon meet these standards for speech, but for the moment we can only speculate.

This issue is becoming ever more pertinent as GPT models begin to produce defamatory or controversial messages and images. A look at some recent headlines makes the problem obvious:

“A chatbot that lets you talk with Jesus and Hitler is the latest controversy in the AI gold rush”

“Google Chatbot’s A.I. Images Put People of Color in Nazi-Era Uniforms”

“NCAA athlete claims she was scolded by AI over message about women’s sports”

If AI has the right to free speech, then surely the few exceptions to that right should also apply to an AI. In the US, categories of speech that are either unprotected or given lesser protection include incitement, defamation, fraud, obscenity, child pornography, fighting words, and threats. Just as defamatory statements are considered a tort in more traditional media like television or newspapers, so too should they be impermissible when produced by an AI.

If we decide to hold AI to the same standards as humans, then the question becomes: who is responsible for breaches of those standards? Who is liable for defamatory material produced by an AI? The company hosting the AI? The user of the AI? And what degree of intention is required to impose liability when an AI program lacks human intention?

The first such libel case has been filed in the US by a man named Mark Walters against OpenAI LLC, the company responsible for ChatGPT. ChatGPT hallucinated (in other words, fabricated) information about Mark Walters that was libelous, harmful to his reputation, and in no way based on real facts. The case is extraordinary because it is the first of its kind, and it may shed light on whether AIs are liable, through their companies, for the information they publish or provide on the web.

The outcome is bound to have widespread effects on legal issues related to AI more generally, such as the questions of copyright law we addressed recently in a blog on the legal ownership of AI-generated content. For the moment, all we can do is wait for the courts and the legal process to provide definitive answers to the questions considered here. In the meantime, companies and organizations should take note of the Mark Walters case and consider how they might be held responsible for information published by their AIs.

AI might be given the right to free speech, but with it may come the responsibility to respect its exceptions.

