Maryland Lawyer Reprimanded After Court Finds AI-Generated Legal Citations
Source: thebanner.com
Published on October 14, 2025
Updated on October 14, 2025

A Maryland lawyer has come under scrutiny after submitting a legal brief containing fabricated citations generated by artificial intelligence (AI). The incident highlights the growing concerns surrounding the use of AI tools in legal research and the need for rigorous verification to maintain the integrity of legal documents.
Adam Hyman, a family law attorney, admitted during an appellate court hearing that his office had used AI to prepare a legal brief. Judge Kathryn Grill Graeff noted that the brief included numerous citations that either did not exist or misrepresented legal principles. Hyman said he had been unaware that AI was used to prepare the brief but took full responsibility for the oversight.
The Risks of AI in Legal Research
The case underscores the challenges posed by AI in the legal field. Large language models, such as those used to generate the citations in Hyman’s brief, are designed to predict responses based on vast datasets. However, they lack the ability to understand the context or verify the accuracy of the information they produce. This can lead to the creation of seemingly legitimate but ultimately false or misleading citations.
"AI tools can generate Citations that appear authentic but are entirely fabricated," said Amy Sloan, a law professor at the University Baltimore. "Legal professionals must exercise caution when relying on AI-generated content, as it can undermine the credibility of their work."
Ethical Obligations and Professional Responsibility
The American Bar Association (ABA) has emphasized the importance of ethical considerations when using AI in legal practice. In a recent formal opinion, the ABA reminded lawyers of their duty to verify the accuracy of AI-generated information and to ensure it aligns with legal and ethical standards. Earlier this year, two attorneys were reprimanded for submitting AI-generated citations in a separate lawsuit, further highlighting the need for vigilance.
Hyman has since taken steps to prevent similar incidents in the future. He has completed continuing legal education, subscribed to a reputable legal research service, and implemented a written AI policy within his office. Additionally, he reported the incident to the Attorney Grievance Commission of Maryland, following the advice of Judge Kevin F. Arthur.
The Importance of Human Oversight
While AI tools are increasingly being integrated into legal research, experts stress the importance of human oversight. Legal research services are developing AI systems trained on verified databases, but caution is still advised. Sloan likened the situation to penalties in football that go unnoticed: AI outputs can be convincing enough to lead professionals to believe the information is correct even when it is not.
"AI should be used primarily for background information, with human verification remaining a critical step in the process," Sloan said. "This incident serves as a reminder of the essential role of human oversight in the age of artificial intelligence."
Looking Ahead
The incident involving Hyman is not an isolated case. As AI continues to advance, legal professionals must remain vigilant about the risks associated with its use. By adopting clear AI policies, investing in reliable research tools, and prioritizing human verification, lawyers can mitigate the risks and ensure the integrity of their work.
The case also highlights the need for ongoing education and awareness about the ethical implications of AI in law. As AI becomes more prevalent, legal professionals must stay informed about best practices and adapt their strategies to maintain the highest standards of accuracy and integrity.