In a first, Air Force uses artificial intelligence aboard military jet

The Air Force allowed an artificial-intelligence algorithm to control sensor and navigation systems on a U-2 Dragon Lady spy plane during a training flight Tuesday, officials said, marking what is believed to be the first use of AI aboard a U.S. military aircraft.

No weapons were involved, and the plane was steered by a pilot. Even so, senior defense officials touted the test as a watershed moment in the Defense Department’s attempts to incorporate AI into military aircraft, a subject of intense debate in aviation and arms control communities.

“This is the first time this has ever happened,” said Assistant Air Force Secretary Will Roper.

Former Google chief executive Eric Schmidt, who previously headed the Pentagon’s Defense Innovation Board, described Tuesday’s flight test as “the first time, to my knowledge, that you have a military system integrating AI, probably in any military.”

The AI system was deliberately designed without a manual override in order to “provoke thought and learning in the test environment,” Air Force spokesman Josh Benedetti said in an email.

It was relegated to highly specific tasks and walled off from the plane’s flight controls, according to people involved in the flight test.

“For the most part I was still very much the pilot in command,” the U-2 pilot who carried out Tuesday’s test told The Washington Post in an interview.

The pilot spoke on the condition of anonymity because of the sensitive nature of his work. The Air Force later released photos from shortly before the test flight with materials that referenced only his call sign: “Vudu.”

“[The AI’s] role was very narrow . . . but, for the tasks the AI was presented with, it performed well,” the pilot said.

The two-and-a-half-hour test was performed during a routine training mission that began Tuesday morning at Beale Air Force Base, near Marysville, Calif. Air Force officials and the U-2 pilot declined to offer details about the specific tasks the AI performed, saying only that it was put in charge of the plane’s radar sensors and tactical navigation.

Roper said the AI was trained against an opposing computer to look for oncoming missiles and missile launchers. For the purposes of the initial test flight, the AI got the final vote on where to direct the plane’s sensors, he said.

The point is to move the Air Force closer to the concept of “man and machine teaming,” in which robots are responsible for limited technical tasks while humans remain in control of life-or-death decisions like flight control and targeting.

“This is really meant to shock the Air Force and the [Defense] Department as a whole into how seriously we need to treat AI teaming,” Roper said in an interview shortly before the test.

The AI “is not merely part of the system . . . we’re logging it in the pilot registry,” he said.

The AI itself, dubbed ARTUµ in an apparent Star Wars reference, is based on open-source software algorithms and was adapted to the plane’s computer systems at the U-2 Federal Laboratory.

It is based on a publicly accessible algorithm called µZero, which was developed by the AI research company DeepMind to quickly master strategy games like chess and Go, according to two officials familiar with its development. And it is enabled by a publicly available, Google-developed system called Kubernetes, which allows the AI software to be ported between the plane’s onboard computers and the cloud-based systems it was developed on.
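To illustrate the portability idea behind that setup, here is a minimal, hypothetical Python sketch, not the Air Force’s actual software: the decision logic is wrapped in a small, self-contained service, so the same container image could in principle be scheduled by an orchestrator such as Kubernetes on either cloud hardware or an onboard computer. The endpoint, port and the trivial stand-in “policy” are all assumptions made for illustration.

```python
# Hypothetical sketch (not the Air Force's actual code) of a portable
# decision service: the same container image can run wherever an
# orchestrator like Kubernetes places it (cloud node or edge computer).

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def recommend_sensor_tasking(observation: dict) -> dict:
    """Stand-in for the learned policy: given a radar observation, return
    a recommendation for where to point the sensors. A real system would
    call a trained model here; this is a trivial placeholder heuristic."""
    contacts = observation.get("contacts", [])
    if not contacts:
        return {"action": "scan", "sector": "forward"}
    strongest = max(contacts, key=lambda c: c.get("signal", 0.0))
    return {"action": "track", "bearing_deg": strongest.get("bearing_deg", 0.0)}

class TaskingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The service only sees sensor observations and only returns sensor
        # recommendations -- it has no interface to flight controls,
        # mirroring the "walled off" division of labor described above.
        length = int(self.headers.get("Content-Length", 0))
        observation = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(recommend_sensor_tasking(observation)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listening on all interfaces lets the same image run unchanged
    # regardless of where the orchestrator schedules it.
    HTTPServer(("0.0.0.0", 8080), TaskingHandler).serve_forever()
```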

On its face, the U-2 seems an unlikely candidate for AI-enabled flight. It was developed for the CIA in the early 1950s and used throughout the Cold War to conduct surveillance from staggeringly high altitudes of 60,000 to 70,000 feet. The planes were later procured by the Defense Department.

But its surveillance mission has already incorporated AI to analyze complex data. A Pentagon effort called Project Maven sought to have algorithms, rather than human analysts, rapidly sift through reams of drone footage. Google famously declined to renew its Maven contract after an internal revolt from employees who didn’t want the company’s algorithms involved in warfare. The company later released a set of AI principles that barred its algorithms from being used in weapons systems.

Schmidt, who led Google until 2011, said he believes it’s unlikely that the military will embrace fully autonomous weapons systems anytime soon. The problem, he said, is that it’s hard to demonstrate how an AI algorithm would perform in every possible scenario, including those in which human life is at stake.

“If a human makes a mistake and kills civilians it’s a tragedy . . . if an autonomous system kills civilians it’s more than a tragedy,” Schmidt said Tuesday in an interview.

“No general is going to take the liability of a system where they’re not really sure it’s going to do what it says. That problem may be fixed in the next several decades but not in the next year,” he said.
