AI's Impact on National Security

Source: justsecurity.org

Published on October 2, 2025

The Hidden Costs of AI on National Security

Artificial intelligence is changing how people learn, work, and solve problems. These tools range from machine learning and natural language processing applications like Siri to defense initiatives that synthesize data to generate target lists. Generative AI tools such as chatbots and large language models, which are promoted as writing and research supports, pose a specific risk to national security personnel, whose choices carry serious consequences. Consistent use of generative AI could degrade the cognitive abilities vital to U.S. security. If policymakers don't take action, these efficiency tools may impair national security professionals' critical thinking, rapid response, and strategic capabilities. Given the risk-averse nature of the national security enterprise, this consequence should be acknowledged and addressed.

During my time as a U.S. Deputy Assistant Secretary of Defense (DASD) from 2021 to 2024, I attended as many as thirteen meetings daily. Often, the only available time to review materials was while walking to these meetings, which covered topics ranging from budget allocation to military education policy. My effectiveness depended on quickly applying current knowledge of priorities and relationships within the Department of Defense to assess large amounts of information and make informed decisions. In short, analytical and critical thinking were essential to my job.

Like any tool, AI is judged by its performance, and its promise is to simplify tasks. It is marketed as a way to improve human performance for teachers, doctors, and pilots. But there are costs involved, including effects on the job market, the climate, and water consumption. There is also growing evidence that generative AI has a subtle effect on our cognitive skills. This could seriously harm the U.S. national security workforce, which depends on strong critical thinking.

The Impact on Education

Educators have observed changes in students after the introduction of generative AI. One professor noted that understanding the power of clear thinking is vital but that generative AI can disrupt this process. Another said that education involves learning to overcome challenges and appreciating the learning process. AI enables people to bypass both the difficulty and the process. Consequently, educators are unsure whether this technology helps or harms students.

Research confirms this concern. A recent study revealed that using generative AI shifts knowledge workers' focus from gathering information, problem-solving, and analysis to verifying AI-generated content. The study also found that these tools make it harder for knowledge workers to recognize when critical thinking is needed, particularly when tasks seem unimportant or when users place too much trust in AI. Researchers suggest that neglecting critical thinking in low-stakes situations can lead to a decline in cognitive abilities, posing risks when high-stakes situations arise.

Another study suggested that writing with AI assistance reduces brain connectivity in key areas, which may diminish creativity and idea generation. A further study indicated that consistent exposure to AI might erode endoscopists' skills. These results point to potential vulnerabilities within the national security workforce. Critical thinking is crucial for national security professionals such as intelligence analysts, State Department officials, and Pentagon employees. Analytical and research skills are central to national security roles, yet research shows they are also the skills most affected by generative AI use.

AI Integration and the Future Workforce

The Pentagon began using AI in 2018 and released an adoption strategy in 2023. OpenAI launched an initiative to integrate ChatGPT and other tools across the federal government earlier this year. Generative AI has become more prevalent in the private sector, doubling in use in the last two years. It’s important to consider how many people are using these tools, how long they use them, and for what purposes.

Almost 30% of American white-collar workers, the pool from which future public servants are drawn, use AI daily or weekly. While many use AI to generate ideas, only a smaller share feel it boosts their creativity. Many organizations have integrated AI in some capacity. Children and young adults are encountering AI earlier in life, including in toys. An Executive Order in April paved the way for integrating AI into K-12 classrooms. Today's high school seniors are the last cohort to remember education before ChatGPT. By the time these students enter the workforce, AI use may be so ingrained that it will be hard to avoid.

Given AI's increasing integration into education and the economy, future national security professionals will have used generative AI tools from kindergarten through their first jobs. How do we want these professionals to make decisions? Should they simply verify AI outputs, and is that enough for the challenges ahead? The American people expect their government to ensure their safety. The national security workforce faces threats that include decoding China’s nuclear signals, managing Russian incursions, addressing climate change impacts, and securing critical minerals. Public servants will need to use AI to support national defense, but will we successfully minimize the cognitive cost?

Shaping AI Engagement

We must intentionally shape how society engages with AI, especially generative AI, from education to the workplace. It will be challenging to balance the benefits of generative AI—faster knowledge retrieval, drafting assistance, training, and readily available expertise—with the risks. This balance should start with a clear understanding of the risks. Generation Z recognizes the risks AI poses to education. They understand that AI can affect their thinking and want schools to help them use it well.

It is important to responsibly develop AI tools and to proactively develop the people who will use them. Equipping the workforce to use AI effectively is sound planning. Guidelines for integrating AI should support human needs without undermining cognition. The first step is a standard AI literacy curriculum covering AI terminology, history, and proper usage. The U.S. government appears to be moving in this direction, but the goal of AI literacy should be preserving cognitive prowess while using these tools. The second step is to define AI's value proposition in order to determine where it should and shouldn't be used; AI should be applied carefully and intentionally. The third step is identifying which skills can be offloaded to AI. Skills essential to national security must be protected, while ancillary skills can be delegated; identifying these proactively will help align AI with its purpose. The final step is for policymakers to develop AI governance that guides tool development and responsible diffusion across sectors. We must strengthen both the tools themselves and the minds that use them.

Caroline Baxter is the Director of the Converging Risks Lab within the Council on Strategic Risks. From 2021 to 2024, she served as Deputy Assistant Secretary of Defense for Force Education and Training.