AI in Federal Law Enforcement

Source: afcea.org

Published on June 2, 2025

Federal Law Enforcement and AI

Experts from the FBI, TSA, and NCIS discussed the use of machine learning, automation, large language models, and other AI aspects in federal law enforcement at AFCEA Bethesda’s LEAPS Preview event on April 17, which was moderated by Sonya Thompson, former chief information officer and assistant director, Information Technology and Data Division, Federal Bureau of Prisons. Federal law enforcement organizations are proceeding cautiously with AI, given the legal authorities and evidence-building at stake. Nevertheless, these organizations are applying AI to business applications and operations and anticipate further benefits as they implement more AI applications and as the technologies advance.

FBI's Use of AI

According to Kiersten Schiliro, senior technical advisor, Operational Technology Division, FBI, the FBI's need for AI arose from the Boston Marathon bombing in 2013, when the bureau needed to quickly locate and track suspects within a large amount of video data after the terrorist attack. Schiliro noted that the FBI developed OpenMPF, an open-source media processing framework for computer vision that can quickly triage large data sets to extract specific optical characteristics. It can extract license plates and words in images and identify specific objects, such as a particular car. It can also track faces, but it is not an identity resolution tool; Schiliro said it can track the same subject throughout the data. In the case of the bombing, investigators were looking for two suspects, one wearing a black hat and one a white hat, and needed to find every instance of those people within the data.

The MPF has proven to be a valuable triage tool for federal investigators. The AI capability has since reduced the time it takes to review data sets, which is especially important with the increased use of body-worn cameras and other video data. Schiliro stated that it took close to a year to fully process the bombing data but would now take about two days with current capabilities. Computer vision tools have become one of the FBI’s most mature AI use cases, providing faster ways to extract useful information from data sets.
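The attribute-based triage described above can be sketched in a few lines. This is a conceptual illustration only, not the framework's actual API: the `Detection` record, the `triage` function, and the toy detector output are all invented here to show the idea of filtering detector results down to every frame matching a described subject.

```python
from dataclasses import dataclass, field

# Hypothetical detection record; a real media processing framework would
# produce records like this from detector plugins run over video frames.
@dataclass
class Detection:
    frame: int                      # frame index in the source video
    label: str                      # detected object class, e.g. "person"
    attributes: dict = field(default_factory=dict)  # e.g. hat color, plate text

def triage(detections, label, required_attrs):
    """Return every frame containing an object of `label` whose attributes
    match all of `required_attrs` -- e.g. every white-hat-person sighting."""
    hits = set()
    for d in detections:
        if d.label != label:
            continue
        if all(d.attributes.get(k) == v for k, v in required_attrs.items()):
            hits.add(d.frame)
    return sorted(hits)

# Toy data standing in for detector output over hours of footage.
feed = [
    Detection(10, "person", {"hat": "white"}),
    Detection(10, "car", {"plate": "ABC123"}),
    Detection(42, "person", {"hat": "black"}),
    Detection(97, "person", {"hat": "white"}),
]

print(triage(feed, "person", {"hat": "white"}))  # -> [10, 97]
```

The speedup Schiliro describes comes from exactly this pattern: detectors run once over the full data set, and investigators then query the structured results rather than re-watching the video.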

The FBI is also using AI in its Criminal Investigative Division to prevent or stop violence against children. Schiliro noted that vulnerable child victims often do not appear in adult databases, such as the NGI database or Department of Motor Vehicles records, because they are children. In cases where investigators are having trouble identifying a victim or subject, they will conduct facial recognition searches for identity resolution. Schiliro said the accuracy is very high and that this technology is being used to save victims' lives.

Schiliro explained that the bureau intentionally chose to apply AI to some of its harder problem sets first. Rolling certain AI tools into the operations of the Criminal Investigative Division made AI adoption easier, given that it involved only one division rather than the entire FBI. The bureau is exploring other ways AI can provide efficiency gains and benefit more of the workforce. Schiliro advised companies offering AI capabilities to law enforcement that AI can be beneficial for increasing accuracy, increasing efficiency, or doing what a human cannot do. She stated that some AI tools perform better than human review and that it is important to identify a measurable outcome for each specific AI use case.

TSA's Use of AI

Kristin Ruiz, deputy assistant administrator and deputy chief information officer, TSA, said that the TSA is using AI tools from its parent organization, the Department of Homeland Security (DHS), whose Science and Technology office has a capability development arm that has already been implementing AI. The TSA's Chief Information Office is having everyone in the information technology office learn AI, with the intention of using tools from across the agency and from the DHS. Ruiz emphasized that they do not need to add staff to leverage this technology because their current staff are skilled and able to learn new skills through training. The TSA partners with component organizations that may have funding for one piece while the TSA has funding for another, in order to achieve economies of scale, and it is looking at all available options to get the biggest bang for its buck.

One tool the TSA is using internally is the TSA Answer Engine, which allows TSA employees in the field to ask questions about standard operating procedures, generate documents or reports, and get quick answers about regulations or policies. The TSA is also using its Innovation Lab to help with AI implementation. Ruiz shared that they bring in partners to discuss use cases and look at their technology, and to help those in the field see how they could automate tasks. As an example, the organization paired with industry partners to demonstrate how the TSA could use virtual reality holograms combined with ChatGPT personas for training at TSA checkpoints. This allowed TSA officers to have real-life training with different personas and scenarios.
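A tool like the Answer Engine typically rests on a retrieval step: rank policy passages by relevance to the employee's question, then hand the best passage to a language model to generate the answer. The sketch below shows only that retrieval step, using naive keyword overlap; the snippet corpus, the `top_passage` function, and the scoring are all invented for illustration and are not the TSA's actual implementation.

```python
# Toy stand-in for a corpus of standard-operating-procedure snippets.
SOP_SNIPPETS = [
    "Liquids over 3.4 ounces must be placed in checked baggage.",
    "Officers must verify identification against the boarding pass.",
    "Report unattended baggage to a supervisor immediately.",
]

def top_passage(question, passages):
    """Return the passage sharing the most words with the question.
    Real systems use embeddings or a search index; word overlap is the
    simplest possible relevance score."""
    q = set(question.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

print(top_passage("what do I do with unattended baggage", SOP_SNIPPETS))
# -> "Report unattended baggage to a supervisor immediately."
```

In a production assistant, the retrieved passage would be inserted into the model's prompt so that answers are grounded in official policy text rather than the model's general knowledge.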

Ruiz stated that officers never knew what ChatGPT was going to do during the training. Each officer might handle the same scenario differently and get a different response. Some could get the hologram to comply, and some could not. The TSA’s use of facial recognition, AI, and biometric data at airports and ports has led to further gains as the TSA partners with the DHS, Department of Justice, and State Department and uses the resulting information and data, Ruiz noted.

NCIS' Use of AI

The NCIS relies on about 250 agents in 15 field offices in the United States and abroad. According to the agency, recent work by the NCIS includes a joint investigation that indicted Chinese nationals for a computer hacking campaign, a multiagency takedown of a drug trafficking network, and a joint child exploitation arrest operation. Richard Dunwoodie, acting executive assistant director in NCIS' Operational Technology and Cyber Innovation Directorate, said that the NCIS is progressing with AI, starting with a few small projects. AI represents possibilities for the organization both in business applications and in the operational environment of these types of investigations. Dunwoodie shared that the NCIS is looking at using these capabilities in its business applications and is focused on efficiencies and on helping the workforce navigate policy better.

Dunwoodie sees AI helping with human aspects as well as with vehicle recognition, and he explained that these capabilities are being used in an operational environment in a number of different ways. The NCIS is also talking to industry to see how AI-related capabilities could help the workforce operationally on a day-to-day basis. In addition, the agency relies on the Department of Defense's Chief Digital and Artificial Intelligence Office, or CDAO. Dunwoodie said they are starting small and are not yet at the point of giving agents a tool that lets them safely access investigative and operational data in an operational environment. Fortunately, the CDAO is vetting, assessing, and validating AI solutions from the private sector that are readily adaptable to the NCIS environment. He stated that they often start with what is already on the shelf, approved and validated for immediate use.

Dunwoodie continued that unique, legacy environments may require a specific AI capability. He said that the NCIS has been using AI and machine learning capabilities for some time, working with large data sets to inform decision-making; they just haven't necessarily been putting an "AI brand" on that kind of work. One example is using AI for public-facing events by statically deploying pre-programmed solutions. Dunwoodie said there is an opportunity to be a great deal more effective.

Staff education is necessary at the NCIS, as is combating the desire of agents to rely on commercial GPT applications to write their reports. Dunwoodie acknowledged that this is an issue because submitting a query to a commercial AI tool gives away knowledge of their gaps, potentially to adversaries. He emphasized the need for policies and guardrails to guide this. Schiliro added that the FBI has checks and balances when it comes to using AI, and while that takes longer, it ensures a careful approach. For example, the FBI reviews all its AI use cases for privacy and civil liberties considerations, and they undergo an ethics review by their AI Ethics Council. She said that all of the agencies have been cautious and deliberate about their approach to AI and that they only have one chance to get these kinds of things right.