AI researchers should help with some military work

In January, Google Chief Executive Sundar Pichai said artificial intelligence (AI) would have a “deeper” impact than even electricity. He was following a long tradition of business leaders claiming their technologies were both revolutionary and wonderful.

The problem is that revolutionary technologies can also revolutionize military power. AI is no exception. On June 1, Google announced that it would not renew its contract to support a US military initiative called Project Maven. The project is the military’s first operationally deployed “deep learning” AI system – one that uses layers of processing to transform data into abstract representations – in this case, to classify objects in video footage collected by military drones. The company’s decision to pull out came after around 4,000 of Google’s 85,000 employees signed a petition demanding that Google commit to never developing “warfare technologies”.
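To make the phrase “layers of processing” concrete, here is a minimal sketch of an image classifier of the kind described: stacked layers that turn raw pixels into increasingly abstract features before assigning a class. It is an illustrative toy model written with PyTorch, not the Maven system, and the four classes and input size are arbitrary assumptions.

```python
# A toy "deep learning" image classifier: each layer transforms the data into
# a more abstract representation. Purely illustrative; not Project Maven.
import torch
from torch import nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: low-level patterns (edges, textures)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combines them into higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 4),                   # final layer: scores for 4 hypothetical classes
)

frame = torch.rand(1, 3, 64, 64)                  # one 64x64 RGB image (random stand-in data)
scores = classifier(frame)
print(scores.argmax(dim=1))                       # index of the predicted class
```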

Such decisions pose a profound ethical dilemma. The integration of advanced AI technology into the military is as inevitable as the incorporation of electricity once was, and the transition is fraught with ethical and technological risks. It will take contributions from talented AI researchers, including those at companies such as Google, to help the military stay on the right side of ethical lines.

Last year I conducted a study for the US intelligence community showing that the transformative effects of AI will span the full spectrum of national security. Military robotics, cybersecurity, surveillance, and propaganda are all vulnerable to disruption by AI. The United States, Russia and China all expect AI to underpin future military power, and the monopoly enjoyed by the United States and its allies on key military technologies, such as stealth aircraft and precision-guided weapons, is coming to an end.

I sympathize with researchers in academia and industry who face the difficult question of whether to help the armed forces. On the one hand, the mission of keeping the United States and its allies safe, free, and strong enough to deter potential threats is more vital than ever. Helping the military to integrate new technologies can both reduce the dangers to soldiers and civilians caught in combat zones and enhance national security.

On the other hand, researchers who help the military sometimes come to regret it. Some of the scientists who worked on the Manhattan Project, which developed the nuclear bombs used in World War II, later concluded that the world would have been better off without that research. And many uses of AI are ethically and legally dubious, as controversies over software used to aid law enforcement and sentencing have shown.

Fortunately, American AI researchers are free to choose their projects and can influence their employers. This is not the case for their counterparts in China, where the government can compel companies or individuals to work on national security efforts.

Even if researchers refuse to participate in a project, however, they cannot really avoid its national-security consequences. Many hobbyist drone makers were appalled to learn that their products were being used by the Islamist terrorist group ISIS to drop explosives on US troops. No doubt many researchers working on driverless cars have not fully considered the implications of that technology for driverless tanks or car bombs. But ignoring potential applications will not stop them from happening.

Additionally, AI scientists publish much of their research openly. When they do, the released algorithms, code libraries, and training datasets become building blocks available to every military, and benign projects can enable malicious applications. Outright refusals by technology companies to work with US national-security organizations are counterproductive, even when other companies step in to do the work instead. The nation’s AI researchers need to hear from the military about the security implications of their technologies, and the military needs broad expert guidance to apply AI ethically and effectively.

This does not mean that AI researchers should happily support every project the US military dreams up. Some proposals will be unethical. Some will be stupid. Some will be both. When researchers see such proposals, they should oppose them.

But there will be AI projects that genuinely advance national security and do so legally and ethically. Take, for example, the work of the US Defense Advanced Research Projects Agency on combating AI-enabled forgery of video and audio. The AI research community should consider working on such projects, or at least refrain from demonizing those who do.

It is useful to remember the bacteriologist Theodor Rosebury, who researched biological weapons for the US military in the 1940s. After World War II, Rosebury limited his work on biological weapons strictly to defensive research, and argued that defense should be the only US military policy. His position was eventually enshrined in the Biological Weapons Convention of 1972.

Which brings us back to Google and Project Maven.

For years, I have been among those advocating for the US military to make greater use of advanced AI technologies, and to do so in a careful and ethically conscious way. Project Maven, which performs a non-safety-critical task that is not directly related to the use of force, is exactly what I had hoped for. The system uses AI computer vision to automate the most mundane part of drone video analysis: counting the people, vehicles, and buildings in the footage. The companies involved deserve to be commended, not criticized, for their support.
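The actual Maven pipeline is not public, but the counting task it automates follows a familiar pattern: run an object detector on each video frame, then tally the labels. The sketch below illustrates that pattern under stated assumptions; detect_objects is a hypothetical placeholder standing in for a trained detector, and the labels and dummy data are invented for illustration.

```python
# Illustrative sketch of per-frame object counting; not Project Maven's code.
from collections import Counter
from typing import List, Tuple

# Hypothetical placeholder: in a real system this would be a trained object
# detector returning (label, bounding_box) pairs for a single video frame.
def detect_objects(frame) -> List[Tuple[str, Tuple[int, int, int, int]]]:
    return [("person", (10, 10, 40, 80)), ("vehicle", (100, 60, 180, 120))]

def count_per_frame(frames) -> List[Counter]:
    """Tally detected people, vehicles, and buildings in each frame."""
    counts = []
    for frame in frames:
        labels = [label for label, _box in detect_objects(frame)]
        counts.append(Counter(labels))
    return counts

# Usage with dummy frames; a real system would iterate over drone video frames.
print(count_per_frame([None, None]))
```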

An all-or-nothing position is a dangerous oversimplification. AI experts in industry and academia have a unique and vital opportunity to help the military integrate AI technology in ways that ethically enhance national and international security. They should take it.

James G. Williams