VA's AI Tools Face Scrutiny Over Patient Safety Risks

Source: military.com

Published on January 22, 2026

Updated on January 22, 2026

The Department of Veterans Affairs (VA) is under scrutiny after a watchdog report warned that its AI tools lack proper patient safety oversight. The VA's inspector general raised concerns that AI systems used in clinical settings at VA facilities operate without formal safety protocols, potentially putting veterans' health at risk. This revelation comes as the VA rapidly expands its use of AI in healthcare, raising broader questions about the balance between innovation and patient safety.

AI in Healthcare: Balancing Innovation and Safety

The inspector general's report, released on January 15, highlights a critical oversight gap in how the Veterans Health Administration (VHA) authorizes AI chat tools for clinical use. According to the report, the acting director of VA's National AI Institute and the chief AI officer within VA's Office of Information and Technology have been using an "informal collaboration" to deploy these tools, bypassing the National Center for Patient Safety. This practice violates VHA Directive 1050.01, which mandates that the Office of Quality Management and the National Center for Patient Safety oversee quality and safety programs.

The lack of formal oversight means there is no established process to manage AI-related risks, such as inaccurate outputs that could affect diagnoses and treatment decisions. A study published in npj Digital Medicine in May 2025 found that generative AI systems can produce false or incomplete information, underscoring the potential dangers of relying on these tools in healthcare settings. The inspector general emphasized that this lack of oversight prevents the VA from detecting patterns that could improve the safety and quality of AI-assisted clinical care.

Rapid AI Adoption Amid Global Governance Challenges

The VA's AI portfolio has expanded rapidly, growing to 229 use cases in 2024, an increase over prior years. These include tools such as AI-assisted clinical documentation and predictive algorithms for identifying veterans at risk of suicide. However, many of these systems lack access to current information: without web search capabilities, they rely solely on their training data and user prompts. This limitation could produce outdated or incomplete clinical guidance, a particular concern given the VA's reliance on these tools in patient care.

The oversight gap at the VA mirrors a broader challenge facing governments worldwide. A July 2025 report by the Government Accountability Office (GAO) revealed that only 10% of governments have centralized AI governance, with one-third lacking dedicated AI controls and 76% lacking automated mechanisms to shut down high-risk AI systems. This global trend highlights the urgent need for more robust AI governance frameworks to ensure safety and accountability.

The VA's AI strategy, set out in a September 2025 document, lays out ambitious plans for AI-assisted healthcare, including automated eligibility determination for benefits programs and AI-enhanced customer support. However, the inspector general's report suggests that the VA may be prioritizing rapid adoption over patient safety, raising questions about the responsible use of AI in sensitive healthcare settings.

In response to the report, VA press secretary Pete Kasperowicz stated that "VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff." Still, the inspector general's ongoing review signals that the issue is far from resolved, and the VA may need to adopt stricter oversight measures to ensure the safe use of AI across its healthcare systems.