A misinformation expert admits he used ChatGPT-4o for citations

Misinformation Expert Acknowledges Using ChatGPT in Submission, Denies Awareness of AI-Induced Fabrications

The rise of artificial intelligence has radically changed the landscape of communication, transforming how we draft and present information. However, as reliance on AI grows, so do the challenges associated with it. Recently, a noted misinformation expert faced criticism after a legal document he filed contained false citations generated by AI. Ironically, the filing was intended to argue against the use of AI-generated content designed to mislead voters before elections. The researcher has since acknowledged using ChatGPT to streamline the document’s citations, but insists this error shouldn’t overshadow the core arguments of the filing.

Jeff Hancock, a Stanford professor specializing in misinformation, had backed a Minnesota law aimed at restricting the use of deepfake technology during elections. The filing in which he supported this law unexpectedly became a focal point of criticism: it contained inaccuracies stemming from AI-generated content, which compromised its reliability. In a surprising twist, the expert’s own arguments against AI misuse were marred by the very technology he sought to regulate.

In a follow-up statement, Hancock clarified that ChatGPT-4 was used solely for organizing citations, and that he was unaware the tool had inserted fabricated references. He maintained that AI was not employed for any other part of the document. Reflecting on the situation, Hancock stated, “I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it.” He asserted that these claims are supported by current scholarly research and reflect his expert opinion on AI’s impact on misinformation and society.

Hancock also specified that both Google Scholar and GPT-4 were used to compile the citation list, but were not involved in drafting the core content. He admitted he had been unaware of AI’s tendency to “hallucinate,” or generate false information, which led to the citation errors. Acknowledging the mistake, he expressed regret for any confusion caused: “I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused.” Nevertheless, Hancock stands by the fundamental assertions within his declaration.

The incident raises important questions about the use of AI in legal contexts and underscores its potential pitfalls. Whether the court will accept Hancock’s explanation remains to be seen, but the situation highlights the need for careful human oversight when leveraging AI in legal and other formal documents.