As artificial intelligence tools like ChatGPT become more integrated into the legal industry, legal administrators face a dual challenge: embracing technological innovation while safeguarding their firms against misuse and potential sanctions. Recent developments highlight why this balancing act is more critical than ever.

AI Hallucinations: A Real Problem in the Legal Sector

“AI hallucinations”—when generative AI tools fabricate legal citations or misstate facts—are no longer rare anomalies. A recent wave of cases has exposed a disturbing trend: attorneys across the U.S. have submitted court documents containing non-existent case law, all generated by AI.

In one high-profile incident, lawyers were fined $6,000 after submitting a motion riddled with false citations. In another, attorneys for a state corrections department included AI-generated material in a legal brief, material that later proved to be entirely fictitious. Judges across jurisdictions, from Florida to New York, have expressed growing concern, issuing sanctions and publicly reprimanding counsel for relying on AI without verification.

A review of nearly 70 known cases from 2024 and early 2025 shows that licensed attorneys—not pro se litigants—were responsible for most AI-induced errors. This points to a failure of implementation and oversight rather than mere unfamiliarity with the technology.

Why Legal Administrators Must Take the Lead

Legal administrators are uniquely positioned to enforce ethical and compliant use of AI across their organizations. Their oversight is vital for creating safeguards that mitigate these reputational and legal risks. Here are key actions administrators should consider:

1. Establish Clear AI Use Policies
Develop comprehensive guidelines governing how and when AI tools can be used in legal work. Policies should emphasize mandatory human verification of all AI-generated content and strictly prohibit the use of unverified citations in court filings.

2. Require Mandatory Training
Ensure that all attorneys and staff understand both the capabilities and the limitations of AI tools. Training should focus on identifying hallucinations and using AI outputs only as starting points for further research.

3. Enhance Document Review Processes
Incorporate additional layers of review for any document that includes AI-generated content. Establish a checklist or certification step to confirm that all citations and references have been vetted.
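Part of such a certification step can even be automated. As a purely illustrative sketch (the function name, the simplified citation pattern, and the firm-maintained verified-citation list below are all hypothetical), a short script could flag any reporter-style citation in a draft that a human reviewer has not yet confirmed:

```python
import re

# Simplified pattern for reporter-style citations such as "123 F.3d 456".
# Real citation formats are far more varied; this is illustrative only.
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][A-Za-z.0-9]*\s+\d+\b")

def find_unverified_citations(draft_text, verified_citations):
    """Return citations found in the draft that are absent from the
    firm-maintained list of human-verified citations."""
    found = set(CITATION_PATTERN.findall(draft_text))
    return sorted(found - set(verified_citations))

draft = "Plaintiff relies on 123 F.3d 456 and 999 U.S. 111."
verified = {"123 F.3d 456"}  # citations a reviewer has already confirmed
print(find_unverified_citations(draft, verified))  # → ['999 U.S. 111']
```

A script like this cannot confirm that a citation is real—only a human checking the actual reporter or a legal research database can do that—but it ensures no citation reaches a filing without passing through that human step.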

4. Coordinate with Risk Management
Collaborate with your firm’s risk management team to evaluate and mitigate potential liabilities. This includes maintaining malpractice insurance that addresses emerging tech issues.

Support for Safe AI Adoption: Innovative Computing Systems’ Webinar Series

To help legal administrators and firms navigate these challenges, Innovative Computing Systems offers a dedicated AI Webinar Series. These sessions focus on practical strategies for implementing AI tools safely and effectively in legal environments. Topics include:

  • Understanding AI “hallucinations” and their implications

  • Best practices for AI governance and compliance, including building AI policies

The Road Ahead

AI will undoubtedly remain a powerful asset in the legal toolkit—but only when used with care. Legal administrators must lead the charge in setting firm-wide standards, ensuring compliance, and fostering a culture of responsible AI use. By doing so, they not only protect their firms from sanctions but also enhance the quality and integrity of legal work.

To stay ahead of emerging AI trends and safeguard your firm’s reputation, now is the time to act. Connect with us to learn more about how our AI expertise and resources can support your team’s success.