As artificial intelligence (AI) technology evolves, the way written communication is drafted and presented is undergoing a significant transformation. However, growing dependence on AI for content creation has introduced new challenges. A recent incident involving a misinformation expert illustrates the risks of relying on AI-generated text: the expert faced criticism after using AI to help prepare a legal document that turned out to contain fabricated citations. Ironically, the filing was intended to support restrictions on AI-generated material that could mislead voters ahead of elections.
Misinformation Expert’s Unforeseen AI Pitfalls
Jeff Hancock, a Stanford University professor and noted misinformation expert, filed an affidavit supporting a Minnesota law designed to ban deepfake technologies that could distort election outcomes. Unfortunately, the very affidavit intended to combat AI's influence on voters itself contained AI-generated inaccuracies, undermining the reliability of his claims.
In a subsequent declaration, Hancock acknowledged using GPT-4o to organize citations but claimed he was unaware that the tool could generate false information or references. He emphasized that he had not used the AI tool to draft the main content of the document and described the citation errors as unintentional.
“I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects.”
Elaborating further, Hancock said he used both Google Scholar and GPT-4o to compile the citation list, but reiterated that the AI was not involved in crafting the document's main arguments. He candidly admitted he was unfamiliar with AI "hallucinations," the phenomenon in which a model fabricates plausible-looking information, which led to the erroneous citations. Despite this, Hancock stood by the substantive points in the declaration, insisting they should not be overshadowed by AI-related citation errors.
“I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration.”
Hancock's experience raises critical questions about the use of AI tools in the legal field. While the court may accept his explanation, the incident underscores the inherent risks of integrating AI into formal settings without verifying its output.