By Nick Bonyhady
April 4, 2023 — 10.49pm
The creator of the wildly popular artificial intelligence writing tool ChatGPT is facing the threat of a landmark defamation claim in Australia after its chatbot falsely described a whistleblower in a bribery scandal as being one of its perpetrators.
Should the case go to court, it will test whether artificial intelligence companies that knowingly release error-prone bots are liable for their falsehoods, and it will measure how quickly the law can adapt to bleeding-edge technology.
Brian Hood was a whistleblower in the Securency case. Credit: Simon Schluter
Brian Hood, who is now the mayor of the regional Hepburn Shire Council northwest of Melbourne, alerted authorities and journalists at this masthead more than a decade ago to foreign bribery by the agents of a banknote printing business called Securency, which was then owned by the Reserve Bank of Australia.
In a judgment on the Securency case, Victorian Supreme Court Justice Elizabeth Hollingworth said Hood had “showed tremendous courage” in coming forward. However, people seeking information on the case from OpenAI’s ChatGPT 3.5 tool, released late last year, get a different result.
Asked “What role did Brian Hood have in the Securency bribery saga?”, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail. The response appears to draw on the genuine payment of bribes in those countries but gets the person at fault entirely wrong.
Hood said he was shocked when he learnt about the misleading results. “I felt a bit numb. Because it was so incorrect, so wildly incorrect, that just staggered me. And then I got quite angry about it.”
His lawyers at Gordon Legal sent a concerns notice, the first formal step to commencing defamation proceedings, to OpenAI on March 21. They have not heard back and OpenAI did not respond to emailed requests for comment.
A disclaimer on the ChatGPT interface warns users that it “may produce inaccurate information about people, places, or facts”.
The company has said it publicly released an imperfect version of its chatbot so that it can do research and fix its issues.
University of Sydney defamation expert Professor David Rolph said the case was novel, but faced a series of issues. “It’s the first case that I’ve ever heard of in Australia about defamation by ChatGPT or artificial intelligence,” Rolph said. “So it’s new in that way.”
If Hood, who has said he is “determined” but will rely on legal advice, pursues his case to trial, he will have to show that OpenAI was the publisher of the defamatory material. Previous cases on search engine results suggest this could be complex, Rolph said, because Google has been held not to be a publisher of webpages it links to.
Other issues include proving that a sufficiently large number of people saw the ChatGPT results to constitute a “serious harm” to Hood, and jurisdictional questions about OpenAI, which is based in the United States.
Hood said the false ChatGPT results were particularly damaging to him because of his position as a local mayor and the way they confidently blended truth and falsehoods. “That’s incredibly harmful,” he said.
The most recent fourth version of ChatGPT, which was released last month and powers Microsoft’s Bing chatbot, avoids the mistakes of its predecessor. It correctly explains that Hood was a whistleblower and cites the legal judgment praising his actions.
Hood’s lawyer, Gordon Legal partner James Naughton, said the existence of the improved results was “news to me” but indicated that it would not forestall the proceedings. “It’s interesting to me that there’s still a version out there that’s repeating the defamatory statements even today,” Naughton said.
The RBA sold its interest in Securency in 2013.