Stanford misinformation expert admits his chatbot use led to misinformation in sworn federal court filing

Ethan Baron, Bay Area News Group


A Stanford University misinformation expert who was called out in a federal court case in Minnesota for submitting a sworn declaration that contained made-up information has blamed an artificial intelligence chatbot.

And the bot generated additional errors beyond the one highlighted by the plaintiffs in the case, professor Jeff Hancock wrote in an apologetic court filing, saying he did not intend to mislead the court or any lawyers.

“I express my sincere regret for any confusion this may have caused,” Hancock wrote.

Lawyers for a YouTuber and a Minnesota state legislator suing to overturn a Minnesota law said in a court filing last month that Hancock’s expert-witness declaration contained a reference to a study, by authors Huang, Zhang and Wang, that did not exist. They believed Hancock had used a chatbot in preparing the 12-page document and called for the submission to be thrown out because it might contain more, undiscovered AI fabrications.

It did: After the lawyers called out Hancock, he found two other AI “hallucinations” in his declaration, according to his filing in U.S. District Court in Minnesota.

The professor, founding director of the Stanford Social Media Lab, was brought into the case by Minnesota’s attorney general as an expert defense witness in a lawsuit by the state legislator and the satirist YouTuber. The lawmaker and the social-media influencer are seeking a court order declaring unconstitutional a state law criminalizing election-related, AI-generated “deepfake” photos, video and sound.

Hancock’s legal imbroglio illustrates one of the most common problems with generative AI, a technology that has taken the world by storm since San Francisco’s OpenAI released its ChatGPT bot in November 2022. The AI chatbots and image generators often produce errors known as hallucinations, which in text can involve misinformation, and in images, absurdities like six-fingered hands.

In his regretful filing with the court, Hancock — who studies AI’s effects on misinformation and trust — detailed how his use of OpenAI’s ChatGPT to produce his expert submission led to the errors.

Hancock confessed that in addition to the fake study by Huang, Zhang and Wang, he had also included in his declaration “a nonexistent 2023 article by De keersmaecker & Roets,” plus four “incorrect” authors for another study.

Seeking to bolster his credibility with “specifics” of his expertise, Hancock claimed in the filing that he co-wrote “the foundational piece” on communication mediated by AI. “I have published extensively on misinformation in particular, including the psychological dynamics of misinformation, its prevalence, and possible solutions and interventions,” Hancock wrote.

He used GPT-4o to help find and summarize articles for his submission, but the errors were likely introduced later, when he was drafting the document, Hancock wrote in the filing. He had inserted the word “cite” into the text he gave the chatbot to remind himself to add academic citations to the points he was making, he wrote.

“The response from GPT-4o, then, was to generate a citation, which is where I believe the hallucinated citations came from,” Hancock wrote, adding that he believed the chatbot also made up the four incorrect authors.

Hancock had declared under penalty of perjury that he “identified the academic, scientific, and other materials referenced” in his expert submission, the YouTuber and legislator said in their Nov. 16 filing.

That filing also questioned Hancock’s reliability as an expert witness.

Hancock, in apologizing to the court, asserted that the three errors “do not impact any of the scientific evidence or opinions” he presented as an expert.

The judge in the case has set a Dec. 17 hearing to determine whether Hancock’s expert declaration should be thrown out, and whether the Minnesota attorney general can file a corrected version of the submission.

Stanford, where students can be suspended and ordered to do community service for using a chatbot to “substantially complete an assignment or exam” without permission from their instructor, did not immediately respond to questions about whether Hancock would face disciplinary measures. Hancock did not immediately respond to similar questions.

Hancock is not the first to submit a court filing containing AI-generated nonsense. Last year, lawyers Steven A. Schwartz and Peter LoDuca were fined $5,000 each in federal court in New York for submitting a personal-injury lawsuit filing that contained fake past court cases invented by ChatGPT to back up their arguments.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz told the judge.

_____


©2024 MediaNews Group, Inc. Visit mercurynews.com. Distributed by Tribune Content Agency, LLC.

 
