Why the Depiction of AI, Drones and Autonomy in “Outside the Wire” is Almost Awesome
“Outside the Wire” has everything: AI, drones, robots, a Trolley Problem, Asimov’s 3 Laws of Robotics, and the threat of nuclear war. Normally I’d skip anything so shamelessly targeting my id, but this one starred the underrated Anthony Mackie, who made a courageous star turn in the brilliant “Striking Vipers” episode of “Black Mirror.”
Does Mackie bring that level of courage and intelligence to “Outside the Wire”? What do we learn about AI, drones, robots, and the human beings living with them?
I have good news and bad news. The good news is that the more you know about technology and Cold War history, the more interesting the ideas in “Outside the Wire” can be. The bad news is that the plot is straight out of a rejected “Call of Duty” add-on.
Let’s start with the plot
It’s 2036. A violent civil war has erupted in Eastern Europe. No countries are named because streaming services don’t want to lose access to local markets. U.S. troops are stationed as peacekeepers on “this lawless frontier.” The bad guy is a criminal warlord named Viktor Koval, because “VK” makes for cool graffiti. To combat Koval’s growing power, the Pentagon has deployed armed robotic soldiers called Gumps for the first time.
So far, so Hollywood. But what bothers me about “Outside the Wire” isn’t its kitchen-sink sci-fi plot. It’s the issues it raises, and then ditches, fast. And I do mean fast. Let’s review some of my favorite paths they didn’t take.
#1: Robots are only as good as their programmers
The opening scene introduces Harp, a young American drone operator in Creech, Nevada, remotely observing a firefight in Eastern Europe between U.S. Marines and Koval’s troops. Two Marines are injured and pinned down. The unit has one Gump — a 7-foot-tall armored humanoid — ready to make the rescue.
Ask anyone working on robots at Carnegie Mellon University, Stanford, MIT or Georgia Tech, and they’ll all say the same thing: if a job is really, really, really, really dangerous, and you’ve got a robot that can do it, you use the robot.
Why doesn’t the Gump rescue the injured Marine? Unclear. Why are they called Gumps unless they do what Forrest Gump would do and save the day? Unclear. How much autonomy do Gumps have? Unclear. Why doesn’t the Marine commander order it to perform a rescue? Unclear.
I don’t see AI failures. I see programming and leadership failures, which are human failures. Are we going to learn more about the human/Gump relationship? That would be a more interesting movie — one I’d probably watch, too.
Alas, the Gump is destroyed, a second Marine is injured trying to rescue the first, and the plot proceeds.
#2: Let’s answer the trolley problem
After the loss of the Gump, the Marine commander won’t give the order to advance, but he also won’t leave anyone behind, because we need a plot device, and so the 38 Marines shelter in place while the two injured take more fire. An unidentified vehicle approaches the Marines and stops. Harp declares the vehicle a risk to all 40 Marines and asks for permission to destroy it, even though the strike will kill two of them. Harp’s superior disagrees, and withholds permission to fire. But Harp is convinced and—
Hold on a minute. Is this a Trolley Problem? Sure looks like a Trolley Problem.
Harp fires, the vehicle is destroyed, and two Marines are killed. Harp is court-martialed for disobeying his superior officer. Did Harp make the “right” choice? Was there a “right” choice? Was that vehicle a threat to all 40 Marines? Unclear. Unclear. Unclear.
But in the real world, the Trolley Problem clearly has a right answer: avoid it. Make better choices earlier. James T. Kirk figured this out in 1982’s “Star Trek II: The Wrath of Khan.” So did the AI in 1983’s “WarGames.” If your prior choices forced you into two bad ones, it’s too late. Taken literally, if you want fewer runaway trolleys, add better brakes and increase the trolley maintenance budget.
Will “Outside the Wire” avoid Trolley tropes and actually teach us something? That would be a very interesting movie. I would like to watch that movie. If only there were some kind of AI in 2036 with the life-saving wisdom of WOPR from “WarGames,” and maybe if the plot focu—
Then, like a Great Dane snatching a steak off your plate just as you’re about to cut into it, Harp’s court martial ends, and he is released and sent to Eastern Europe on a mysterious assignment.
#3: What does a “good” robot do?
Harp arrives at a U.S. base in Eastern Europe. What looks like a Boston Dynamics “Spot” robot dog walks by. A lot of people felt threatened by robot dog videos on YouTube. Is this foreshadowing? Is this one a good robot, or a bad one? Will robot dogs reappear to deliver medical supplies at a critical moment, or get hacked and turn on their former masters?
It sure would be nice to see robots do what they’re programmed to do and help people.
But no. We never see them again.
#4: Robots need charging and maintenance
Harp enters a Gump maintenance room, where Gumps are charged while human technicians service them. This is realistic.
Will a battery go dead at a critical juncture? Will a part fail? Will an unsexy thing like maintenance become the way the good guys win, the way microbes defeated the Martians in “The War of the Worlds”? Is there a cool James Bond-style gadget coming our way?
#5: What is the philosophy of technology?
Harp enters the office of his new commander, Leo. Three books are briefly visible: Shakespeare’s “Henry V,” Howard Zinn’s “A People’s History of the United States,” and “Black Reconstruction in America” by W.E.B. Du Bois. This is some heavy reading. Is this an anti-war movie? Some believe “Henry V” to be anti-war. Will this movie comment on the power of elites, as the Zinn book does? Soldiers often resent their leaders. Will this movie comment on racial justice, as Du Bois did? Both of the film’s leads are Black U.S. soldiers in Eastern Europe, which opens up yet another potentially fascinating dimension. I would love to watch a movie combining questions about technology with any of these ideas. All of them in one movie would be incredible.
But we never see these books again, nor the ideas in them.
#6: Automation and nuclear deterrence
We are 16 minutes into the movie. Leo reveals the mission: stop Koval from gaining the access codes to Systema Perimeter, which was the Soviet Union’s doomsday machine, designed to automatically launch a retaliatory strike on the United States in the event Soviet leadership was wiped out. There’s a great book about it called “The Dead Hand,” and controversy remains over whether the system was ever activated, whether it was semi- or fully automated, and whether it exists today.
Can you imagine the awesomeness of a movie about “The Dead Hand?”
This is not that movie.
#7: If you could build a replicant, how would you program its AI?
But wait! Leo is actually “4th generation biotech,” or 100% indistinguishable from human unless he takes his shirt off and stands in front of soft green lighting. He is also very handsome, paternalistic, arrogant, condescending, and prone to violence; he has wild mood swings, asks deeply inappropriate personal questions of subordinates like Harp, and likes using typewriters and listening to records.
Is there a reason he has such a terrible personality? Is his AI based on a human for a narrative reason? Or is Leo just based on Anthony Mackie’s mood of the day? Will this movie explore why or how programming “character” into AI might benefit humans?
We never find out.
#8: Resentment of automation
As Leo and Harp leave the base, they pass two soldiers abusing a Gump. They resent “a vending machine being sent to do a soldier’s job.” This makes no sense. The Gumps exist to help the human soldiers. They aren’t depriving fighter pilots of the glory of air-to-air combat. Gumps can take point and absorb damage. Any infantryman would be grateful to have them around.
Does this movie have fresh insight into resentment of automation? Is it suggesting there might be both rational and irrational resentment of automation? This is an interesting idea. I would like to watch a movie about this.
This is not that movie.
#9: Asimov’s 3 Laws of Robotics are dumb, and so are failsafes
Asimov’s Three Laws of Robotics are central to the idea of good robots. Logically, an armed military robot wouldn’t be given autonomy unless it had a failsafe preventing it from turning against friendlies. I would love to see a movie about a military robot wrestling with Asimov’s laws.
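If you squint, the Three Laws are just a strict priority ordering over possible actions: discard anything that harms a human, then prefer obedience, then prefer self-preservation. Here’s a toy sketch of that ordering in Python — every name in it is mine, not Asimov’s, and certainly not the movie’s:

```python
# Toy model of Asimov's Three Laws as a lexicographic filter over actions.
# All field names are hypothetical; this is a thought experiment, not a
# real robotics control system.

def choose(actions):
    """Pick an action by applying the Three Laws in strict priority order."""
    # 1st Law: a robot may not injure a human being.
    safe = [a for a in actions if not a["harms_human"]]
    # 2nd Law: obey orders, except where that conflicts with the 1st Law.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # 3rd Law: protect itself, except where that conflicts with 1 or 2.
    surviving = [a for a in obedient if not a["self_destructive"]] or obedient
    return surviving[0] if surviving else None

candidates = [
    {"name": "fire_on_friendlies", "harms_human": True,
     "obeys_order": True, "self_destructive": False},
    {"name": "stand_down", "harms_human": False,
     "obeys_order": True, "self_destructive": False},
]
```

Under this ordering, a Leo-style robot could never rank “nuke the United States” above “stand down” — which is exactly why the screenwriters had to throw the Laws out.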
And yet the hill the screenwriters chose to die on is… Leo wants the nuclear codes for himself so he can launch a strike against the United States. Why? America is bad. There is no debate. Yes, this will lead to global nuclear war, but the screenwriters had an agenda, and the 3 Laws of Robotics would have been a roadblock.
As for failsafes, those 4th-gen biotechs need a redesign, and so does the organization managing them. Is anyone keeping tabs on these things? Don’t biotechs upload data and do timed check-ins, like nuclear submarines? Wouldn’t you install a self-destruct, in case one of these Leos loses connectivity for too long, to prevent such priceless technology from falling into enemy hands?
You would if you knew anything about AI, drones, and the safety of autonomous systems.
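For the curious: the check-in-or-else idea is a bog-standard dead-man’s-switch watchdog. A toy sketch, with made-up names, intervals, and thresholds (nothing here comes from the film):

```python
from datetime import datetime, timedelta

# Hypothetical dead-man's-switch watchdog for an autonomous unit:
# if it misses enough check-in windows, escalate -- disable it
# remotely first, self-destruct only as a last resort.
CHECK_IN_INTERVAL = timedelta(hours=1)  # expected check-in cadence
DISABLE_AFTER = 3    # missed check-ins before remote disable
DESTRUCT_AFTER = 6   # missed check-ins before self-destruct

def failsafe_action(last_check_in: datetime, now: datetime) -> str:
    """Decide the failsafe response based on missed check-in windows."""
    missed = (now - last_check_in) // CHECK_IN_INTERVAL
    if missed >= DESTRUCT_AFTER:
        return "self-destruct"
    if missed >= DISABLE_AFTER:
        return "remote-disable"
    return "ok"
```

A Leo who goes dark for half a day would hit the self-destruct threshold long before he reached the Perimeter codes, which is precisely why this is the first failsafe any sane designer would build.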
There’s some great sci-fi out there. Here’s where to start
Technology is only as good as we choose it to be, and science fiction only as good as the science behind it. If you want to read some great sci-fi about the morality of autonomous robots, Asimov’s “I, Robot” is the place to start, and it just so happens to be where he first lays out his Three Laws of Robotics. It’s so far ahead of its time, I still can’t believe it was first published in 1950. Alas, the movie starring Will Smith is no substitute, because it makes the same mistake as “Outside the Wire.” Violence may sell, but it rarely illuminates.