The State of AI: Ethical Quandaries and Financial Incentives Drive AI's Use in Warfare

Published on November 17, 2025 at 04:30 PM
As AI takes on an increasingly central role in military strategy, experts are raising concerns about potential risks, ethical oversights, and the overhyping of AI's capabilities in combat. In a new collaboration between the Financial Times and MIT Technology Review, Helen Warrell (FT) and James O’Donnell (MIT Technology Review) discuss the implications of AI in warfare, weighing both the dystopian fears and the practical realities.

Warrell raises the specter of AI-driven cyberattacks, disinformation campaigns, and autonomous drones, painting a picture of a future in which military commanders could lose control of escalating conflicts. Referencing Henry Kissinger's warnings, she emphasizes the urgent need to mitigate these risks.

O’Donnell notes a significant shift in attitudes among AI companies toward military applications. He points to OpenAI's initial ban on the use of its tools for warfare, followed by the company's agreement with Anduril to work on battlefield drone defense. This shift, he argues, is driven by the promise of more precise warfare, coupled with the financial incentives of defense contracts and venture capital.

While some, like Keith Dear, believe existing laws are sufficient to regulate AI in warfare, others, such as Missy Cummings, point to the fundamental limitations of AI models, particularly large language models, in high-stakes military settings. The debate highlights the tension between AI's promises and the need for rigorous oversight and skepticism.

Both Warrell and O’Donnell agree on the importance of questioning the safety and oversight of AI warfare systems. They caution against blindly accepting the “extraordinarily big promises” made by companies, emphasizing the need for scrutiny and debate in the rapidly evolving defense-tech landscape.