
Artificial Stupidity: Fumbling The Handoff From AI To Human Control

By SYDNEY J. FREEDBERG JR.

Science fiction taught us to fear smart machines we can’t control. But reality should teach us to fear smart machines that need us to take control when we’re not ready. From Patriot missiles to Tesla cars to Airbus jets, automated systems have killed human beings, not out of malice, but because the humans operating them couldn’t switch quickly enough from passively monitoring the computer to actively directing it.

“How do you establish vigilance at the proper time?” wrote Army Maj. Gen. Michael Vane after automated Patriots shot down two friendly aircraft in 2003. “(It’s) 23 hours and 59 minutes of boredom, followed by one minute of panic.”

That human-machine handoff is a major stumbling block for the Pentagon’s Third Offset Strategy, which bets America’s future military superiority on artificial intelligence. It’s a high-tech quest for the all-conquering software we’re calling the War Algorithm, the subject of our ongoing series. The Offset’s central thesis, what Deputy Defense Secretary Bob Work calls “human-machine teaming,” is that the combination of human and artificial intelligence is more powerful than either alone. To date, however, humans and AI have sometimes reinforced each other’s failures instead. The solution lies in retraining the humans, and redesigning the artificial intelligences, so that neither party fumbles the handoff.

On June 1, 2009, Air France Flight 447 was flying on autopilot from Rio de Janeiro to Paris. The junior co-pilot was on the controls; the chief pilot, serenely confident in the automation, was about to take a nap, while the senior co-pilot had just woken up from one. Then the pitot tubes iced over and the airspeed readings froze. It was a minor glitch, but it deprived the autopilot of reliable data, throwing it into the kind of ambiguous situation that artificial intelligence doesn’t handle well. So the autopilot disengaged and put the humans in full control.

The humans weren’t ready. Surprised by the sudden handoff, disoriented by a cascade of alarms and increasingly panicked, the aircrew couldn’t figure out what had happened, or grasp the consequences of their own actions. For reasons investigators still don’t understand, the junior pilot kept pulling back on the stick. In the normal flight mode the pilots were accustomed to, the automation would have overridden any dangerous inputs. But in the degraded emergency mode, which the pilots had encountered only in simulations, it let the humans do anything, including pull back the stick until the aircraft stalled and plummeted into the sea. All 228 people aboard died.
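
To make that mode logic concrete, here is a minimal sketch in Python (invented names and thresholds, not Airbus software) of how the same full-back-stick input can be blocked by envelope protection in the normal mode yet passed straight through once the system degrades for lack of sensor data:

    # Minimal sketch (not Airbus code): how a flight-control computer might treat
    # pitch inputs differently in normal mode versus a degraded emergency mode.
    # All names and thresholds here are illustrative assumptions.

    from dataclasses import dataclass

    MAX_SAFE_AOA_DEG = 12.0  # assumed stall-protection limit on angle of attack

    @dataclass
    class FlightState:
        angle_of_attack_deg: float
        airspeed_valid: bool     # False when the airspeed data is unreliable

    def select_mode(state: FlightState) -> str:
        """Degrade to the emergency mode when key sensor data is lost."""
        return "NORMAL" if state.airspeed_valid else "DEGRADED"

    def apply_pitch_command(stick_back: float, state: FlightState) -> float:
        """Return the nose-up command actually sent to the elevators (0..1)."""
        mode = select_mode(state)
        if mode == "NORMAL" and state.angle_of_attack_deg >= MAX_SAFE_AOA_DEG:
            # Envelope protection: refuse nose-up inputs that would stall the aircraft.
            return 0.0
        # In the degraded mode the protection is gone: the human input passes
        # through, even if it drives the aircraft toward a stall.
        return stick_back

    # The same full-back-stick input is blocked in normal mode but obeyed here.
    iced = FlightState(angle_of_attack_deg=15.0, airspeed_valid=False)
    print(apply_pitch_command(1.0, iced))   # 1.0 -- nothing stops the pull-up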

Was it the autopilot’s fault? The automation worked exactly as designed, but maybe it shouldn’t have been designed to work that way. Was it the human’s fault? The aircrew clearly lost what pilots call “situational awareness,” but how attentive can you expect humans to stay when they have nothing to do for hours at a time?

“It’s unreasonable to expect a human…to sit back and do nothing 99 percent of the time, and then leap in,” said Paul Scharre, director of the future of warfare initiative at the Center for a New American Security. (It was Scharre who identified the three case studies for this article). But plenty of automated systems count on humans to rescue them from their inadequacies.

Handing off to the human in an emergency is a crutch for programmers facing the limitations of their AI, said Igor Cherepinsky, director of autonomy programs at Sikorsky: “For us designers, when we don’t know how to do something, we just hand the control back to the human being… even though that human being, in that particular situation (may) have zero chance of success.”

Air France 447 is the deadliest example of human and artificial intelligence adding up to less than the sum of their parts, but it’s not the only one. And while the Air France crash resulted from the machine handing control to humans when they weren’t prepared, other deadly incidents have occurred when the machine didn’t hand over control and the humans didn’t realize they should take it.

The fatal 2016 Tesla crash is another striking example. Tesla enthusiast Joshua Brown, a Navy veteran, had posted a video of his car’s Autopilot feature saving him from a potential collision with a truck; just a few weeks later, he drove right into one. The Tesla’s sensors and algorithms apparently could not distinguish the bright white of a tractor-trailer from the bright sky behind it, so the car did not engage the brakes. Neither did Brown, who’d made a habit of driving with his hands off the wheel, something Tesla advised against: “We tell drivers to keep their hands on the wheel just in case, to exercise caution in the beginning,” CEO Elon Musk had said.
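
A minimal sketch, again with invented names and thresholds rather than Tesla’s actual code, shows how an automatic-braking decision gated on detection confidence can quietly do nothing when perception fails, with no explicit signal to the driver that he is now on his own:

    # Minimal sketch (not Tesla's implementation): an emergency-braking decision
    # that only fires when the perception system is confident it sees an obstacle.
    # If a white trailer against a bright sky never crosses the confidence
    # threshold, the system does nothing -- and silently expects the human to act.

    BRAKE_CONFIDENCE_THRESHOLD = 0.8   # assumed value, for illustration only
    BRAKE_RANGE_M = 60                 # assumed value, for illustration only

    def should_brake(detections: list[dict]) -> bool:
        """detections: e.g. [{"kind": "truck", "confidence": 0.35, "range_m": 40}]"""
        for det in detections:
            if det["confidence"] >= BRAKE_CONFIDENCE_THRESHOLD and det["range_m"] < BRAKE_RANGE_M:
                return True
        return False

    # A low-confidence detection of the trailer produces no braking at all;
    # there is no "I am unsure, please take over" warning to the driver.
    print(should_brake([{"kind": "truck", "confidence": 0.35, "range_m": 40}]))  # False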

So was it Autopilot’s fault for not seeing the truck? Or was it Brown’s fault for not seeing that Autopilot didn’t see it? The National Highway Traffic Safety Administration held Tesla blameless, saying its system had already saved lives and could not be blamed for failing to save Brown’s. (The National Transportation Safety Board, however, is still investigating.)

“Tesla was like, ‘oh, it’s not our fault…. The human driving it clearly didn’t understand,'” said Scharre. Given normal human psychology, however, it’s hard to expect Brown to keep watching Autopilot like a hawk, constantly ready to intervene, after he had seen it not only drive well in normal conditions but also prevent accidents. Given Brown’s understandable inattention, Autopilot was effectively on its own — a situation it was explicitly not designed to handle, but one that was entirely predictable.

“You can get lulled into a sense of complacency because you think, ‘oh, there’s a person in the loop,'” said Scharre. When that human is complacent or inattentive, however, “you don’t really have a human in the loop,” he said. “You have the illusion of human judgment.”

The Army’s famous Patriot missile defense system ran afoul of the “illusion of human judgment” in 2003.

“One of the hard lessons of my 35 years of experience with Patriot is that an automated system in the hands of an inadequately trained crew is a de facto fully automated system,” wrote Army engineering psychologist John Hawley in a paper published by CNAS. “The inherent difficulty of integrating humans with automated components has created a situation that has come to be known as the ‘dangerous middle ground’ of automation – somewhere between manual control and full and reliable automation.”

It’s the worst of both worlds. In the end, two of the Patriot’s 11 launches shot down allied aircraft: a British Tornado and a US Navy F/A-18 Hornet — the latter after the Army had instituted new safety measures intended to put humans firmly in control.

At the time of the Tornado shootdown, Patriot batteries were operating in “automatic mode,” automatically firing on incoming threats unless a human overrode them. Much of that software, Hawley wrote, actually came from the earlier Safeguard anti-ballistic missile program, intended to intercept incoming ICBMs in the upper atmosphere — an environment in which almost all radar contacts would be hostile and letting anything through could get millions killed. Patriot operated in the lower atmosphere, with plenty of friendly aircraft around, little nuclear threat, and a very different calculus of risk for killing a contact versus letting it go, but the software remained biased towards shooting. Neither Army leaders nor Patriot operators understood this — and given Patriot’s much-touted performance in the first Gulf War, they were no more inclined to question the computer than Joshua Brown was to doubt his beloved Tesla.

Once Flight Lieutenants Kevin Main and David Williams died, however, the Army took its Patriots off automatic fire — but the soldiers still didn’t understand the software. The plan was to keep the automatic engagement mode active, so the Patriots would keep tracking potential threats and be ready to fire at a moment’s notice, but put the launchers themselves on standby. When a sensor glitch reported an incoming Scud, the battalion tactical director gave the order “bring your launchers to ready.” What he meant was “get ready to fire.” What happened was the launchers reconnected with the fire control computer — still in automatic mode, remember — and got the automated order to fire. US Navy Lieutenant Nathan White died.
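
A minimal sketch (hypothetical names, not the Patriot software) of the mode interaction described above: as long as the engagement logic stays in automatic, putting the launchers on standby only pauses the system, and “bringing them to ready” reconnects them to logic that is already authorized to fire:

    # Minimal sketch (hypothetical, not Patriot software): why putting launchers
    # on standby is not the same as turning off automatic engagement. The
    # fire-control logic below stays in AUTO; re-readying the launchers simply
    # reconnects them to it.

    class FireControl:
        def __init__(self, mode="AUTO"):
            self.mode = mode                 # "AUTO" or "MANUAL"
            self.launchers_ready = False     # launchers start on standby

        def bring_launchers_to_ready(self):
            # The operator means "get ready to fire"; the system hears
            # "reconnect the launchers to whatever engagement mode is active."
            self.launchers_ready = True

        def on_track_classified_hostile(self, track):
            if self.mode == "AUTO" and self.launchers_ready:
                return f"FIRE at {track}"    # no human decision in the loop
            return f"HOLD -- awaiting operator command for {track}"

    fc = FireControl(mode="AUTO")
    print(fc.on_track_classified_hostile("spurious missile track"))  # HOLD (standby)
    fc.bring_launchers_to_ready()
    print(fc.on_track_classified_hostile("spurious missile track"))  # FIRE -- automatic order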

Since 2003, the Army has reformed and increased Patriot operator training, though not enough to satisfy Hawley. “One of the most common myths about automation is that as a system’s automation level increases, less human expertise is required,” he wrote. The opposite is true: “Operators often must have a deep knowledge of the complex systems under their control to be able to intervene appropriately when necessary.”

Artificial intelligence could not compensate for human stupidity. To the contrary, in each case human and computer failures compounded each other. The lesson of Patriot, Tesla, and Air France 447 is that smart machines require smart humans.

Design for Interdependence

“You will always have a human being on the loop somewhere,” said Cherepinsky, who heads the MATRIX automated helicopter project at Sikorsky (now part of aerospace titan Lockheed Martin). “Whether the human being is in the cabin of the air vehicle itself or on the ground is almost immaterial,” he said. “That becomes more a cultural question.”

Just as there should always be a human involved, even if they’re not in control, Cherepinsky argued that the automation should always be engaged, even if it’s not in control. “Don’t think of the computer as on/off” like traditional autopilots, he said. “That’s one of the concepts we rejected early on.”

Instead, “it’s always on, it’s with you from the time you turn on the airframe, it watches what you’re doing,” he said. “The machine becomes your copilot.”

Want to fly the entire mission by hand? Sure, but the computer will always be monitoring you, checking your progress against the mission plan, checking your control inputs against the parameters of safe flight, ready to take over if asked — or even to suggest you’re overloaded and might want to let it take over some tasks. Want to fly the entire mission hands-off? Sure, and the computer will still monitor you, so if it has to hand back control in a crisis, it’ll only return tasks to manual control when it knows you’re ready to take them. Want to fly solo, with a human co-pilot, or let the helicopter fly away without anyone in it? The system can adjust, Cherepinsky said.
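
A minimal sketch, with invented names rather than anything from Sikorsky’s MATRIX program, of that always-on pattern: the monitor runs every cycle whether the human or the machine is flying, checks the pilot’s inputs against the mission plan and the safe-flight envelope, and returns tasks to manual control only when it judges the pilot ready:

    # Minimal sketch (invented names, not Sikorsky's MATRIX code): an always-on
    # monitor that runs whether the human or the machine is flying, and only
    # returns a task to manual control when the pilot looks ready to take it.

    def monitor_cycle(pilot_inputs, mission_plan, pilot_ready, machine_flying):
        """One pass of the always-on copilot loop. All inputs are illustrative."""
        advisories = []

        # Always check the human's inputs against plan and envelope, even hands-on.
        if abs(pilot_inputs["deviation_from_plan"]) > mission_plan["allowed_deviation"]:
            advisories.append("off mission plan -- suggest correction or handover")
        if pilot_inputs["bank_angle_deg"] > 60:
            advisories.append("outside safe-flight envelope -- offering to assist")

        # Hand tasks back to the human only when the monitor judges them ready.
        if machine_flying and pilot_ready:
            advisories.append("returning navigation task to manual control")
        elif machine_flying and not pilot_ready:
            advisories.append("retaining control -- pilot not yet ready")

        return advisories

    print(monitor_cycle(
        {"deviation_from_plan": 0.2, "bank_angle_deg": 70},
        {"allowed_deviation": 0.5},
        pilot_ready=False,
        machine_flying=True,
    ))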

That kind of seamless transition between humans assisted by AI and AI overseen by humans is the holy grail of human-machine teaming. You want both sides fully engaged, not the human asleep at the wheel while the algorithm plods rigidly ahead.

“How do you keep the human involved, keep the human creativity, judgment, compassion?” asked Matt Johnson, a Navy pilot turned AI researcher, during a recent conference at the Johns Hopkins University Applied Physics Laboratory. “A lot of times, we think about the goal as (being) to make autonomous systems. We want autonomous cars, we want autonomous drones, whatever the case may be. We want to take a machine that’s dependent on people and make it independent.”

“I would suggest that’s not the right goal,” Johnson said. “What I want is an interdependent system that works with me to do what I want to do.”

But how do you get the human half of this interdependent system to understand what the machine is doing, let alone to trust it? That’s the subject of our third and final article on Artificial Stupidity: Learning to Trust.
