DOD needs to pay more attention to building people’s confidence in AI, researchers say

Written by Jackson Barnett

The Department of Defense sees artificial intelligence as a technology to aid troops, but so far there appears to be little research on how to build human trust in machines that could go into battle, a new report says.

The problem, according to research from the Center for Security and Emerging Technology (CSET) at Georgetown University, is that while the DOD is focused on building technologies that can pair with humans, like self-driving vehicles or algorithms that help with decision-making, it doesn’t understand how that pairing will actually work.

“There really is a consensus within military circles that trust is important to the relationship,” Margarita Konaev, the project’s lead author, told FedScoop in an interview. “It’s something we expected to see, but it’s really not something we found.”

The study examined publicly available descriptions in the DOD’s extensive science and technology research portfolio, which includes work from agencies dedicated to emerging technologies, such as the Defense Advanced Research Projects Agency (DARPA), as well as offices within each military service, such as the Army’s AI Task Force. It also covered the work of the Pentagon’s Joint AI Center (JAIC), which serves as the fusion and deployment office for AI technology.

The study did not have a comprehensive view of military research on trust and AI, given that much of it could be classified. But it is unlikely that all research on the topic would be shielded from public view, Konaev said.

Finding the right level of trust

Both too little and too much trust can be a problem. The military wants personnel to have confidence in AI-enabled technology designed to give them an advantage, such as an unmanned ground vehicle trained to follow a convoy of manned vehicles. But if troops trust a system too much, they could abdicate their role “in the loop” of a battlefield decision.

“If the person doesn’t trust the system that is providing recommendations, then we lose a lot of the money that was invested in developing these technologies,” Konaev said.

“Proper calibration” is essential, Konaev said, and to begin to understand how to calibrate, the department needs to spend more research dollars on the subject.

Building trust with “safety and security”

Trust also has a technical component. Investing in the parts of AI development that encourage trust, such as testing and explainability, will also help ensure the successful pairing of humans and machines. If a person can better understand how a machine arrived at a decision and knows that it has been reliably tested, they will be more likely to trust it. Konaev said the data she and her team analyzed included many references to the technical side of trust, but there was little consistency on the topic.

“We couldn’t conclude that this is a consistent theme in US military research,” she said of the research into aspects of AI development such as testing, explainability and reliability.

The technical angle involves certifying the “safety and security” of the technology, Konaev said. For example, if a database that the DOD uses to train an algorithm is insecure and potentially exposed to manipulation, the resulting algorithm is unlikely to be reliable. While DOD leaders show a general interest in ensuring data integrity, the department does not appear to make the topic a research priority, according to the CSET report.

The two elements of trust – calibrating the human side and ensuring technical safety and security – will only become more critical as AI-enabled systems move closer to the battlefield. Many of the DOD’s AI research projects began with low-risk back-end systems, such as advancing the use of automation for financial systems. But now the JAIC has dedicated the majority of its funding to its joint warfighting mission initiative, and the Army has renewed its efforts on manned-unmanned teaming vehicles.

“We’re definitely talking about AI-enabled systems that are going to go into the field,” Konaev said.
