Armageddon's Eve

New York Times

Opinion

"Pie in the Skynet"

August 26, 1997

Jenny Schlesinger, Opinion Columnist

Three days have passed since the deaths of three airmen at Andrews Air Force Base, where a drone of the 1st Autonomous Bomber Wing went rogue and unloaded its payload against the base's control tower. While the drone in question was shut down hours later, questions have lingered as to how this happened, and why. Today, at a hearing in Washington with members of the Air Force, including Colonel Robert Brewster of the USAF Autonomous Weapons Division, the public may finally have some answers.

"I cannot divulge the full details," said Brewster, after taking the stand. "But we believe that the drone interpreted the air traffic controllers as a threat."

When asked to elaborate, Brewster explained that the drone was undergoing a routine exercise in which it was to unleash its payload against assigned targets on the testing range of Joint Base Andrews (JBA), operating under orders to "eliminate any threat" in order to simulate the conditions of real-world combat. According to Colonel Brewster, the drone's AI interpreted its controllers as a "threat" to the parameters of its mission, and therefore eliminated them, under the auspices of its programming. Relieved of its missile payload, the drone remained airborne for two hours and twenty-two minutes before staff at JBA were able to regain control and land it.

Brewster's account did not go over well with those overseeing the investigation. However, the colonel pointed out that the 1st Autonomous Bomber Wing has so far operated with a perfect operational record. "As regrettable as the events of August 23rd are," he explained, "history will show that the automation of the USAF will save countless lives over the coming decades."

While some have questioned whose lives are being saved by the utilization of drone technology, Brewster is, statistically, correct about one thing: since the activation of the Skynet artificial intelligence system at SAC-NORAD on August 4th of this year, the 1st Autonomous Bomber Wing has operated with 100% efficiency. While many of the details remain classified, contacts within the USAF have confirmed that autonomous aircraft have carried out strikes in numerous countries, including Sudan, Afghanistan, Bosnia, and Tajikistan. A spokesman for the USAF stated that "despite the tragedy of this incident, the Air Force remains committed to the ethical implementation of AI. It's a dangerous world, and while we may have reached the end of history, the stark reality is that if the United States seeks to remain the indispensable power of the international rules-based order, we have to remain at the top of our game."

It's a sentiment that's been repeated across the other branches of the US Armed Forces, though perhaps not with the same fervor. Only last week, my colleague Harry Pithart wrote an article on rumors that the Army is currently testing automated tracked vehicles, and has even prototyped bipedal combat soldiers. Major Miguel Gomez, quoted in that piece, would say only, "I cannot confirm or deny the rumors of any such developments at this time." And while the Navy has perhaps been the least enthusiastic branch of our Armed Forces to embrace artificial intelligence, several sources have confirmed that the Joint Chiefs are looking to incorporate AI into capital ships, potentially reducing the need for human crews by 50 percent.

However, the USAF remains at the forefront of these developments, to the joy and unease of many. "AI fear" is the buzzword of 1997, and many have long questioned the apparent coziness between Cyberdyne Research Systems and the USAF since the company's "revolutionary microprocessor technology" was unveiled in 1995. Promises from Cyberdyne that the technology would find applications in the civilian sector have yet to materialize. In light of the Skynet Funding Bill, and the resignation of Director Miles Dyson earlier this year, Cyberdyne remains mired in controversy, and rumors that Dyson resigned in protest over the direction of the company remain unsubstantiated.

"Miles Dyson is one of the finest minds of our generation, and the Cyberdyne family is sorry to see him go," said spokeswoman Maria Morales in March of this year. "We can assure you that the settlement we reached was amicable, and that Dyson is now enjoying time with his family."

Miles Dyson had long been a rising star in Silicon Valley. Born into a lower-income family in Detroit, Dyson attended MIT, where he was recruited into Cyberdyne by way of an outreach program. Founded in 1982, Cyberdyne spent the next decade growing into one of the largest producers and distributors of home computers in the western United States. Observers in the tech world have noted that Cyberdyne's founders could hardly have anticipated that, for the last three years, the company would be the largest supplier of military computer systems to the United States Armed Forces.

With the Dysons refusing all requests for interviews over the last five months, I contacted one of Mister Dyson's old colleagues, Nigel Patrick, currently working as an assistant researcher at MIT. When asked about the controversy, Mister Patrick said:

"Look, Jen…can I call you Jen? Miles was a geek. We both grew up in the sixties, we saw the war in Vietnam, the marches in Washington…some kids read Superman, some kids watched Star Trek. Miles was the kind of guy who always believed that technology could make the world better. Heck, there was so much ugliness in the world back then, world could've ended with the press of a button, but hell, he persevered. Times change, men change with them, but the Miles I know…well, I can't see him sticking around creating death machines."

"Some would say those so-called death machines are saving lives," I pointed out.

"Listen, Jen, I don't get to visit California much, but Miles is a family man first and foremost. You don't bring two children into the world while also plotting how to murder them. I know, war's a dirty business, but…look, take it from me, Skynet could be bloody C-3PO, and he still wouldn't want his name attached to it. He wanted people to look up at the sky in wonder, not in fear."

Again, playing devil's advocate, I pointed out that fear could be a deterrent.

"Good for them. But I know Miles. Alfred Nobel invented dynamite, but he got a peace prize named after him." He paused, before adding, "Not every scientist gets to be Alfred Nobel."

With Mister Patrick's words still ringing in my ears, I managed to arrange a phone call with Major Joseph Vance. Vance, as he explained to me, is one of the officers who oversee Skynet and its technicians at NORAD, located within Cheyenne Mountain, Colorado. Major Vance made it clear that he could not comment on the internal affairs of Cyberdyne, but that he could offer commentary on the AI system he helps oversee. While much of the technology behind Skynet is classified, in light of the incident at Andrews this week, he was able to comment on the following:

"By all rights this shouldn't have happened," he said. "There's a hundred safeguards that prevent our drones from targeting our own forces, and a hundred safeguards behind each of those original one hundred. What happened at Andrews should not have been mathematically possible."

"Haven't people been raising concerns since the Skynet Funding Bill was passed?" I asked. "Isn't it true that Skynet is now in charge of all decision making within the US military?"

The major laughed before he went on to explain that this is gross hyperbole from the media. "I'll explain it like this," he said. "Skynet is autonomous, but it's not without oversight. We run a scenario by it, it gives us the result in seconds. Skynet has removed human decision making in the sense that the combined strategy of the US Armed Forces is under its purview, but Skynet itself is incapable of taking direct action. For instance, it could order the president killed, but a human being would still need their hand on the button."

"But what about the 1st Autonomous Bomber Wing and similar units?" I asked. "Doesn't Skynet control them?"

"In a sense, yes. If our bombers are sent out, all their actions are coordinated by Skynet. However, in the event of an emergency, or if a tactical override is required, human operators can take over."

"And yet, three men are dead, and you've assured me that by all rights, this shouldn't have happened."

Major Vance conceded the point, but reiterated that the use of drones would save countless lives over the coming decades. I read him a quote from the Center for AI Safety, which published an open letter in the Times in April of this year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Major Vance guffawed at this before explaining that we'd been worried about the risks of AI for decades, and so far, none of them had become manifest. "If anything, Skynet makes us safer from nuclear war."

"How? Isn't Skynet plugged into the nuclear arsenal of the United States?"

"Yes, and it has been programmed to act as a safeguard against any idiot who wants to press the button. In over five hundred simulations, Skynet has always prevented an unsanctioned firing."

I pointed out to the major that many, even in the military, do not share this view. The planned START III talks between the president and his Russian counterpart have been delayed until next year, with many in Russia and the States alike arguing that an AI system plugged into America's nuclear arsenal is counterproductive when it comes to shrinking the nuclear umbrella.

Major Vance refused to comment, and at this point, the interview ended.

As I sit at my desk, watching the rain patter against the window, I look at a model of an F-117 Nighthawk. A memento from a Lockheed arms fair two years ago. Quaint, almost; already outdated by the F-117C.

If you've ever seen an aircraft of the 1st Autonomous Bomber Wing, imagine the Nighthawk, but without the cockpit. The F-117C is, to quote its designers, "the ultimate in stealth-wing technology." By any measure, the implementation of these aircraft and Skynet has been a huge success.

And yet, in the course of this investigation, I remember my discussion with Professor Lieu of the Center for AI Safety, who sees things differently.

"We're sleepwalking into disaster," he said. "You'd think the [censored] in Washington would wake up after this tragedy at Andrews, but this hearing is just a farce. Contractors were awarded billions to set up the infrastructure for Skynet alone. That would be bad enough, but we've handed a loaded gun to an AI that's younger than my son, only without the drooling."

I gave Lieu some time to calm down, and he went on to explain that the events of the last two years had been crazy. "Cyberdyne develops the most advanced computer system in human history, and two years later, it's installed in our military apparatus. Answer me seriously, how much oversight could have gone into that? How many tests were truly conducted? Bills can languish in Congress for years, yet the Funding Bill got passed in a weekend?"

"And it's not just that. Let's say, for argument's sake, that Skynet is everything the military claims it is. Let's say that everyone can sleep safe and sound because of our electrical god in the sky. Great. How long until more AIs arise? How long until the Russians, the Iranians, the North Koreans get their hands on one? Mark my words, if nuclear weapons were the detente of the last half-century, AI will be the detente of the next. And that's provided no-one creates a rogue AI."

"And what makes a rogue AI?" I asked.

"How should I know?" he exclaimed. "Don't you understand? There's no precedent for this. None of it! I've read your articles, Miss Schlesinger, and let me tell you, if something like Skynet was ever implemented in the civilian sector, it could be worse than any rogue drone. Water, gone. Electricity, gone. You reported on Bosnia, didn't you? Total anarchy. An AI doesn't need to kill us to wipe out the human race, it just needs to rob us of our creature comforts. People talk about Skynet plugged into nukes? They're right to do so, that's terrifying, but if an AI wanted us dead, all it would need to do is shut off the grid. We'd do all the killing for it!"

At this point, Lieu had to leave, but he left me with these parting words: "Trust me, none of this is natural. I've spent forty years writing on the development of computer technology. AI, like evolution, does not jump."

I asked what he meant by this, but the conversation ended. While claims of corporate espionage have dogged Cyberdyne since the reveal of their new chip, none has been substantiated. And while Professor Lieu's views have been regarded as "extreme" by many in the field, many scientists I've spoken to have echoed his sentiment: that much more research is required into a genie we can never put back in the bottle.

All I can say is that if I had three wishes, I'd use them to bring those three men back to life. Men who, Brewster assured me, died so that hundreds of others could be saved.

Hopefully he's right.


A/N

The idea for this came to me when I read an article in Vox. I forget the details, but it basically involved a simulated scenario where an AI-controlled drone turned on its own airbase. Gave me the idea to drabble this up.