When Judge Victoria Kolakowski reviewed Exhibit 6C in a California housing dispute, she sensed something was amiss. The video, submitted by the plaintiffs, featured a witness with a disjointed, monotone voice and a fuzzy, emotionless face. The witness's expressions would twitch and repeat every few seconds.
Kolakowski, of California’s Alameda County Superior Court, soon realized the video was a “deepfake” generated by artificial intelligence. While it claimed to show a real witness who had appeared in other authentic evidence, the exhibit was a fabrication. Citing the plaintiffs’ use of AI-generated material masquerading as real evidence, Kolakowski dismissed the case on September 9. She later denied their request for reconsideration.
The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first known instances of a suspected deepfake being submitted as authentic evidence and detected in a U.S. court. Judges and legal experts warn it signals a much larger threat: the potential for hyperrealistic fake evidence to flood courtrooms and erode the justice system’s foundation of trust.
The judiciary is now grappling with how to address the rapid advances in generative AI, which can produce convincing fake videos, images, documents, and audio. “The judiciary in general is aware that big changes are happening,” Kolakowski said. “But I don’t think anybody has figured out the full implications. We’re still dealing with a technology in its infancy.”
Courts have previously confronted the "Liar's Dividend," in which parties cast doubt on authentic evidence by suggesting it could be an AI fake. The Mendones case represents the opposite tactic: attempting to pass off AI-generated content as genuine.
This development amplifies a growing fear among the judiciary. “There are a lot of judges in fear that they’re going to make a decision based on something that’s not real… and it’s going to have real impacts on someone’s life,” said Judge Stoney Hiljus of Minnesota’s 10th Judicial District, who chairs the state’s AI Response Committee.
Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal, an advocate for judicial adoption of AI, illustrated the risk with a chilling hypothetical. “She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it’s from me,” Schlegel said of his wife of 30 years. “The judge will sign that restraining order. They will sign every single time. So you lose your cat, dog, guns, house, you lose everything.”
Judge Erica Yew of California’s Santa Clara County Superior Court fears AI could also corrupt long-trusted methods of evidence gathering. Forged documents, such as a false car title, could be entered into official county records by clerks without the time or expertise to verify them. A litigant could then present a certified copy in court. “So now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew asked. “We’re in a whole new frontier.”
In response, a small group of judges, including Schlegel and Yew, are leading efforts to address the threat. A consortium convened by the National Center for State Courts and the Thomson Reuters Institute has created a guide for judges, advising them to question the origin of potential AI evidence, identify who had access to it, and look for corroboration.
Proposals to amend the federal rules of evidence are also being considered. One, co-authored by former federal judge Paul Grimm and professor Maura R. Grossman, would require parties alleging deepfakery to substantiate their claims. However, the U.S. Judicial Conference’s advisory committee opted not to advance the rule changes in May, arguing existing standards are sufficient for now—a decision Grimm called pessimistic given AI’s rapid evolution.
Some legal experts believe the current framework is adequate. Jonathan Mayer, a Princeton professor and former chief AI officer at the Justice Department, said that while in government, “We generally concluded that existing law was sufficient.” However, he added that his team also prepared for scenarios where the impact of AI could change quickly.
In the interim, many believe attorneys must be the first line of defense. Schlegel pointed to Louisiana’s Act 250, which mandates that lawyers exercise “reasonable diligence” to verify that evidence is not AI-generated. “If it doesn’t smell right, you need to do a deeper dive before you offer that evidence into court,” he said.
Technological solutions are also emerging. Metadata—hidden data showing a file’s origin and modification history—can be a crucial tool. In the Mendones case, metadata revealed a key video was supposedly filmed on an iPhone 6, yet the action depicted required an iPhone 15 or newer. Another potential safeguard is mandating that recording devices embed cryptographic signatures to prove authenticity.
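To illustrate the metadata approach, a minimal sketch appears below. It assumes the open-source ExifTool utility is installed and on the system path (the article does not say which tools were used in the Mendones case), and the fields it searches for vary by file format and device. Embedded metadata can also be edited or stripped, so a mismatch like the iPhone discrepancy is one signal to corroborate, not proof on its own.

```python
# Minimal sketch: dump a media file's embedded metadata with ExifTool and
# surface fields that often matter in an authenticity review (device model,
# software used, creation vs. modification timestamps).
# Assumes the "exiftool" command is installed; tag names vary by container.
import json
import subprocess
import sys

def extract_metadata(path: str) -> dict:
    """Run exiftool and return its JSON output as a dict of tag -> value."""
    result = subprocess.run(
        ["exiftool", "-json", "-G", path],  # -G prefixes each tag with its group
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)[0]

def summarize(path: str) -> None:
    meta = extract_metadata(path)
    # Keep only tags whose names suggest device or timestamp information.
    wanted = ("model", "make", "software", "createdate", "modifydate")
    for key in sorted(k for k in meta if any(w in k.lower() for w in wanted)):
        print(f"{key}: {meta[key]}")

if __name__ == "__main__":
    summarize(sys.argv[1])
```

A reviewer would still need to weigh such output against other evidence, since metadata is only as trustworthy as the chain of custody behind the file.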
However, these solutions raise concerns about equity, as parties without access to technical expertise may be disadvantaged.
For now, legal professionals urge heightened skepticism. Grossman, a computer science professor and lawyer, warns that generative AI has “democratized fraud,” enabling anyone to create convincing fakes.
“We’re really moving into a new paradigm,” she said. “Instead of trust but verify, we should be saying: Don’t trust and verify.”