
Can AI commit defamation?

The tech world's hottest new toy may find itself in legal trouble as AI's tendency to invent news articles and events collides with defamation laws. Can an AI model like ChatGPT commit defamation? Like many questions raised by new technology, it is unknown and unprecedented, but upcoming legal challenges may change that.

Defamation is broadly defined as posting or saying harmful and false statements about someone. It's complex and nuanced legal territory that also differs widely between jurisdictions: a defamation case in the US is very different from one in the UK or Spain.

Generative AI has already produced numerous unanswered legal questions, such as whether the use of copyrighted material amounts to fair use or infringement. But until a year ago, neither image- nor text-generating AI models were good enough to produce something that would be mistaken for reality, so questions about misrepresentation were purely academic.

Not so now: the large language model behind ChatGPT and Bing Chat operates on a massive scale, and its integration with major products like search engines (and increasingly with just about everything else) arguably elevates the system from an unreliable experiment to a mass publishing platform.

So what happens when the tool/platform writes that a government official was accused in a misconduct case or that a university professor was accused of sexual harassment?

A year ago, with no broad integrations and rather clumsy language output, few would say such false claims could be taken seriously. But today, these models answer questions confidently and convincingly on widely accessible consumer platforms, even when those answers are hallucinations or falsely attributed to non-existent articles. They attribute false statements to real articles, or true statements to invented ones, or simply make the whole thing up.

Due to the nature of how these models work, they don't know or care whether something is true, only that it looks true. That's a problem when you use one to do your homework, sure, but when it accuses you of a crime you didn't commit, that may well amount to defamation.

That's the claim made by Brian Hood, mayor of Australia's Hepburn Shire, when he was told that ChatGPT named him as having been found guilty in a 20-year-old bribery scandal. The scandal was real, and Hood was involved. But he was the one who went to the authorities about it and was never charged with any crime, as Reuters reports, citing his lawyers.

Now, it is clear that this statement is false and unquestionably damaging to Hood's reputation. But who made the statement? Is it OpenAI, which developed the software? Is it Microsoft, which licensed it and deployed it in Bing? Is it the software itself, acting as an automated system? If so, who is responsible for prompting that system to create the statement? Does making such a statement in such a setting constitute "publishing" it, or is it more like a conversation between two people? In that case, would it amount to slander? Did OpenAI or ChatGPT "know" that this information was false, and how do we define negligence in such a case? Can an AI model exhibit malice? Does it depend on the law, the case, the judge?

These are all open questions because the technology they refer to did not exist a year ago, let alone when the laws and precedents that legally define defamation were established. While it may seem silly to sue a chatbot for saying something false, chatbots are not what they used to be. With some of the world's largest companies touting them as the next generation of information retrieval, replacing search engines, they are no longer toys but tools regularly used by millions of people.

Hood sent a letter to OpenAI asking it to do something about this; it is not really clear what the company can do, or is required to do, or what other recourse he has under Australian or US law. But in another recent case, a law professor was accused of sexual harassment by a chatbot citing a fictional Washington Post article. And such potentially harmful false claims are likely more common than we think: they are only now becoming serious and damaging enough to warrant informing the people involved.

This is just the beginning of this legal drama, and even lawyers and AI experts have no idea how it will play out. But if companies like OpenAI and Microsoft (not to mention all the other big tech companies and a few hundred startups) expect their systems to be taken seriously as sources of information, they cannot escape the consequences of those claims. Suggesting recipes and planning trips may be low-stakes uses, but people increasingly expect these platforms to be a source of truth.

Will these troubling statements turn into actual lawsuits? Will those lawsuits be resolved before the industry changes yet again? And will the outcome differ across the jurisdictions where the cases are brought? It's about to be an interesting few months (or more likely years) as legal and tech experts try to take aim at the fastest-moving target in the industry.
