I had the opportunity to ask Jason Healey, editor of “A Fierce Domain – Conflict in Cyberspace, 1986 to 2012” and director of the Cyber Statecraft Initiative of the Atlantic Council, some questions on Autonomous Intelligent Agents (AIAs) in cyber conflict for a feature article that will be published in the fall in Engineering and Technology Magazine. Since I can use only a short quote in the article, it would be a shame not to share Jason’s thoughts in their entirety, even if his take on AIAs is much more pragmatic than mine. So, here goes…
Would Genuine AIAs (as defined here) represent a significant change in the history of cyber conflict, “something entirely different”, or would this be only another small evolutionary step?
From what is known in the non-classified world, I believe most of the precursor technologies for AIAs are well developed and their integration seems not far away: is that correct in your opinion?
Autonomous weapons would be, and are, something entirely different. Even in uniform and under command discipline, humans cannot be metaphorical robots merely following orders, so until now we have been rightly uncomfortable with real robots doing just that in combat, with the expectation that humans will be “in the loop” (approving the order to attack) or “on the loop” (able to override an already programmed order to attack). There is great promise in autonomous weapons, as they will never be filled with bloodlust, never rape, never torture. But there are great perils too, especially if a spiraling competition develops between each side’s autonomous weapons.
I believe autonomous weapons are clearly here already. Your definition of AIAs is, I’m sure, accurate for how an AI specialist would define them, but from an operational military perspective, Stuxnet has already crossed the line in ways I don’t think we can ignore.
Stuxnet was programmed with exceptionally complex behavior to find and physically destroy its targets over weeks or even months at a time, with little chance of regular contact with its human creators. If its masters had wanted to stop this automated destruction, they would have had very limited options to communicate with Stuxnet. There doesn’t seem to have been a chance for a human “in the loop” to confirm it was the right target, nor was there a human “on the loop” who could hit the equivalent of a self-destruct switch. The attack would have continued until June 2012, when the code was programmed to stop.
In the military, that is a more than reasonable description of an autonomous weapon, even though it falls short on some of your formal criteria.
Even though Stuxnet was well programmed and caused no real collateral damage, perhaps the next algorithmic attack won’t be so well programmed. Cyber Command is expanding rapidly amidst a siege mentality, with the Department of Defense perceived as being under constant attack. Internal controls often lapse during such periods. More likely, though, is that America’s adversaries, now that they understand such capabilities exist, will use algorithmic attacks against us and may choose to disregard the controls used in Stuxnet.