Leading misinformation expert Jeff Hancock is under fire for allegedly citing non-existent sources in support of Minnesota’s new law banning election misinformation. The law, which prohibits the use of “deep fake” technology to influence elections, is currently being challenged in federal court for violating free speech protections.
Hancock, the founding director of the Stanford Social Media Lab and known for his research on technology-mediated deception, submitted an affidavit supporting the law at the request of Minnesota Attorney General Keith Ellison. However, several of the academic works cited in Hancock's declaration do not appear to exist, leading the lawyers challenging the law to suggest they were generated by artificial intelligence software such as ChatGPT.
The questionable citations include a study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," supposedly published in the Journal of Information Technology & Politics in 2023, of which no record exists. Another cited study, "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," also appears to be fabricated.
Hancock has not responded to requests for comment, and it remains unclear whether the fake citations were inserted by him, an assistant, or another party. Critics of the law argue that AI-generated content can be countered by fact-checks and education rather than censorship.
This incident is not the first in which AI-generated content has caused legal trouble. In 2023, two New York lawyers were sanctioned for submitting a brief containing citations to non-existent legal cases created by ChatGPT. The episode raises questions about the reliability of AI-generated information and the consequences of using it in legal proceedings.
Photo credit: minnesotareformer.com