Guest Post: Malcolm MacIver on War with the Cylons

We’re very happy to have a guest post from Malcolm MacIver. See if you can keep this straight: Malcolm is a professor in the departments of Mechanical Engineering and Biomedical Engineering at Northwestern, with undergraduate degrees in philosophy and computer science and a Ph.D. in neuroscience. He’s also one of the few people I know who holds a doctorate but no high school diploma.

With this varied background, Malcolm studies connections between biomechanics and neuroscience — how do brains and bodies interact? This unique expertise helped land him a gig as the science advisor on Caprica, the SyFy Channel’s prequel show to Battlestar Galactica. He also blogs at Northwestern’s Science and Society blog. It’s a pleasure to welcome him to Cosmic Variance, where he’ll tell us about robots, artificial intelligence, and war.

———————————————————

It’s a pleasure to guest blog for CV and Sean Carroll, a friend of some years now. In my last posting, back at Northwestern University’s Science and Society Blog, I introduced some issues at the intersection of robotics, artificial intelligence (AI), and morality. While I’ve long been interested in this nexus, the most immediate impetus for that posting was meeting Peter Singer, author of the excellent book ‘Wired for War’ on the rise of unmanned warfare, at a time when I was working both for the TV show Caprica and for a U.S. military research agency that funds some of the bio-inspired robotics work in my laboratory. Caprica, for those who don’t know it, is a show about a time when humans invent sentient robotic warriors. It is a prequel to Battlestar Galactica, and as we know from that show, these warriors rise up against humans and nearly drive them to extinction.

Here, I’d like to push the idea that, as interesting as the technical challenges of making sentient robots like those on Caprica are, the moral challenges of making such machines are equally interesting. But “interesting” is too dispassionate a word: I believe we need to begin the conversation about these moral challenges now. Roboticist Ron Arkin has been making this point for some time, and has written a book on how we may integrate ethical decision making into autonomous robots.
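To make the idea of machine-checkable ethical constraints a little more concrete, here is a minimal sketch of the kind of veto gate that constraint-based approaches like Arkin’s point toward. Everything in it (the data fields, the rules, the thresholds) is my own illustrative assumption, not Arkin’s actual architecture:

```python
# Hypothetical sketch of an "ethical gate": a hard-constraint check that can
# veto a proposed lethal action before it reaches the weapon. All fields,
# rules, and numbers are illustrative assumptions for this post, not a real
# system or Arkin's published architecture.
from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_is_confirmed_combatant: bool
    civilians_in_blast_radius: int
    protected_site_nearby: bool        # e.g., hospital or school flagged on the map
    expected_military_value: float     # crude stand-in for a proportionality estimate

def ethical_gate(strike: ProposedStrike) -> bool:
    """Return True only if every hard constraint is satisfied.

    Any single failed constraint vetoes the action outright; the constraints
    are not traded off against each other.
    """
    if not strike.target_is_confirmed_combatant:
        return False
    if strike.protected_site_nearby:
        return False
    # Proportionality check: expected collateral harm must be small relative
    # to the claimed military value (the threshold is an arbitrary illustration).
    if strike.civilians_in_blast_radius > 0 and \
            strike.expected_military_value < 10 * strike.civilians_in_blast_radius:
        return False
    return True

# Example: a strike near a protected site is vetoed no matter how valuable
# the target is claimed to be.
print(ethical_gate(ProposedStrike(True, 0, True, 100.0)))  # False
```

The design choice worth noticing is that the gate can only refuse actions, never initiate them, which is roughly the role an ethical governor is meant to play inside an autonomous system.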

Given that we are hardly at the threshold of building sentient robots, it may seem overly dramatic to characterize this as an urgent concern, but new developments in the way we wage war should make you think otherwise. I heard a telling sign of how things are changing when I recently tuned in to the live feed of WTOP, the most popular radio station in Washington, DC. The station ran commercial after commercial from iRobot (of Roomba fame), a leading builder of unmanned military robots, clearly aimed at military listeners. These commercials reflect how the number of unmanned robots in military use has gone from close to zero in 2001 to over ten thousand now, with the pace of acquisition still accelerating. For more details, see Peter Singer’s ‘Wired for War’ or the March 23, 2010 congressional hearing on the Rise of the Drones.

While we are all aware of these trends to some extent, they have hardly become a significant issue of public concern. We are comforted by the knowledge that the final kill decision is still made by a human. But is this comfort warranted? The weight of that decision changes as the way war is conducted, and the highly processed information the decision rests on, become mediated by unmanned military robots. Some of these trends have been helpful to our security: the drones have been effective against the Taliban and Al-Qaeda because they can carry out long-duration surveillance of, and strikes against, sparsely distributed non-state actors. In a military context, however, unmanned robots are clearly the gateway technology to autonomous robots, machines that will eventually be in a position to make decisions that carry moral weight.

“But wait!” many will say, “isn’t this the business-as-usual, robotics-and-AI-are-just-around-the-corner argument we’ve heard for decades?” Robotics and AI have long been criticized for promising more than they could deliver. Are there signs that this could be changing? While an enormous amount could be said about the reasons for AI’s past difficulties, it is clear that some of them stem from too narrow a conception of what constitutes intelligence, a topic I’ve touched on in the recent Cambridge Handbook of Situated Cognition. That narrow conception revolved around what might loosely be described as cognitive processing or reasoning. Newer approaches, such as embodied AI and probabilistic robotics, try to integrate aspects of what being more than a symbol processor involves: sensing the outside world, dealing with the uncertainty in those signals in order to be highly responsive, and emotional processing. Advanced multi-sensory signal processing techniques such as Bayesian filtering were in fact integral to the success of Stanley, the autonomous robot that won DARPA’s Grand Challenge by driving without human intervention across a challenging desert course.
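To give a flavor of what “dealing with uncertainty” means in probabilistic robotics, here is a minimal sketch of a discrete Bayes filter for one-dimensional localization. It alternates a measurement (“update”) step with a motion (“predict”) step; the map, noise probabilities, and sensor readings below are made-up illustrations, not anything taken from Stanley:

```python
# Minimal discrete Bayes filter for 1-D localization on a ring of cells.
# All numbers (map, motion noise, sensor noise, readings) are illustrative.
import numpy as np

def predict(belief, move_prob=0.8):
    """Motion update: the robot tries to move one cell to the right.

    With probability move_prob it succeeds; otherwise it stays put.
    The belief mass is shifted accordingly (the world wraps around).
    """
    moved = np.roll(belief, 1) * move_prob
    stayed = belief * (1.0 - move_prob)
    return moved + stayed

def update(belief, world, measurement, hit_prob=0.9):
    """Measurement update (Bayes' rule): weight each cell by how well it
    explains the noisy sensor reading, then renormalize."""
    likelihood = np.where(world == measurement, hit_prob, 1.0 - hit_prob)
    posterior = belief * likelihood
    return posterior / posterior.sum()

if __name__ == "__main__":
    world = np.array([1, 0, 0, 1, 0])               # 1 = landmark (a door), 0 = plain wall
    belief = np.full(len(world), 1.0 / len(world))  # start out completely uncertain
    for z in [1, 0, 0]:                             # a short sequence of sensor readings
        belief = update(belief, world, z)           # fold in the measurement
        belief = predict(belief)                    # then account for the motion
    print(np.round(belief, 3))                      # probability of being in each cell
```

The point of the exercise is that the robot never commits to a single answer; it carries a whole probability distribution over where it might be, which is what lets systems like Stanley stay responsive when their sensors are noisy.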

As these prior technical problems are overcome, autonomous decision making will become more common, and eventually it will raise moral challenges. One challenge will be how we should behave towards artifacts, be they virtual or robotic, that are endowed with enough intelligence that our treatment of them becomes a moral question. The other is how they treat us, most especially in military or police contexts. What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent? Do we place responsibility on the designers of the decision-making systems, on the military strategists who placed machines with known limitations into contexts they were not designed for, or on some other entity?

Both of these challenges are about morality and ethics. But it is not clear whether our current moral framework, a hodgepodge of religious values, moral philosophies, and secular humanist values, is up to responding to them. It is for this reason that the future of AI and robotics will be as much a moral challenge as a technical one. Yet while we have many smart people working on the technical challenges, very few are working on the moral ones.

How do we meet the moral challenge? One possibility is to look toward science for guidance. In my next posting I’ll discuss some of the efforts in this direction, pushed most recently by a new, activist form of atheism which holds that the belief that we need religion to ground morality is not only incorrect but dangerous. We can instead, its proponents claim, look to the new sciences of happiness, empathy, and cooperation to guide our value system.


26 thoughts on “Guest Post: Malcolm MacIver on War with the Cylons”

  1. Low Math and Sean comment that we don’t need autonomous or unmanned war machines (or at least, not the newer media-grabbing examples) to raise the ethical issues brought up by the original post. This may be true, but I completely disagree with downplaying the significance of the changes currently underway by pointing out that such devices existed in the past. Why stop at the relatively recent Aegis and Harpoon systems, when simple autonomous war robots in the form of anti-personnel landmines (a sensor wired to a trigger and an effector) go back 700 years? But did they, or the other systems Low Math and Sean allude to, change the way we conduct war in the way the more recent unmanned systems are changing it? No, and the reasons have less to do with technology than with political factors and military culture. In addition, weapons like the Hellfire and Aegis are deployed from a manned station, usually in the theater. The trend at issue is unmanned systems that are remotely controlled, often from outside the theater, and how this is going to ease the transition to systems in which more and more decisions are made autonomously.

    Certainly some of the moral issues are not unique and are simply being exposed now in a more dramatic way. However, there are unique elements to the new developments that make it more urgent for policy makers, politicians, military folks, researchers, and the public to assess where we are going with the technology. For example, consider the problem of jurisdiction. War treaties are made on the assumption that the most proximal decision maker, the sensor, and the effector are all in the same location, namely a soldier somewhere in a theater of war. What happens when the decider is in one place and the sensor and effector (e.g., a gun) are in another, with potentially different legal regimes governing each location? Do combatants attacked by drones have legal clearance to attack the corresponding drone command center, with all the protections that the Geneva Conventions afford?

    Another set of new problems is offered up by the example of the current cyberwar between the US and China. Cyberwar attacks come seemingly from nowhere, as easily from the hacked computer of a part-time employee as from anywhere else. Now imagine such systems connected, through encrypted wireless links, to machines that can do violence. There is no doubt that there will be an AK-47 of unmanned war robots, mass produced and sold to any entity with the cash or numbered offshore account to pay for it. Tracking down the perpetrator of an attack will become significantly more difficult: autonomous warfare becomes anonymous warfare. This connects back to the jurisdiction problem. On the positive side, we can hope that jurisdictional problems will lead to a unified, potentially global effort to deal with the most widespread issues. Perhaps the cost of mistakes will be so high that an international, NTSB-like system for investigating unmanned war system faults will be devised, complete with mandatory black-box recorders that log all data leading up to a fault so that corrections can be made.

    I thank Peter Asaro (www.peterasaro.org) for helpful discussion of these points.

