Guest Post: Malcolm MacIver on War with the Cylons

We’re very happy to have a guest post from Malcolm MacIver. See if you can keep this straight: Malcolm is a professor in the departments of Mechanical Engineering and Biomedical Engineering at Northwestern, with undergraduate degrees in philosophy and computer science, and a Ph.D. in neuroscience. He’s also one of the only people I know who has a doctorate but no high school diploma.

With this varied background, Malcolm studies connections between biomechanics and neuroscience — how do brains and bodies interact? This unique expertise helped land him a gig as the science advisor on Caprica, the SyFy Channel’s prequel show to Battlestar Galactica. He also blogs at Northwestern’s Science and Society blog. It’s a pleasure to welcome him to Cosmic Variance, where he’ll tell us about robots, artificial intelligence, and war.

———————————————————

It’s a pleasure to guest blog for CV and Sean Carroll, a friend of some years now. In my last posting back at Northwestern University’s Science and Society Blog, I introduced some issues at the intersection of robotics, artificial intelligence (AI), and morality. While I’ve long been interested in this nexus, the most immediate impetus for that posting was meeting Peter Singer, author of the excellent book ‘Wired for War’ about the rise of unmanned warfare, while simultaneously working for the TV show Caprica and a U.S. military research agency that funds some of the work in my laboratory on bio-inspired robotics. Caprica, for those who don’t know it, is a show about a time when humans invent sentient robotic warriors. It is a prequel to Battlestar Galactica, and as we know from that show, these warriors rise up against humans and nearly drive them to extinction.

Here, I’d like to push the idea that the moral challenges of making sentient robots like those on Caprica are at least as interesting as the technical challenges. But “interesting” is too dispassionate a word: I believe that we need to begin the conversation on these moral challenges. Roboticist Ron Arkin has been making this point for some time, and has written a book on how we may integrate ethical decision making into autonomous robots.

Given that we are hardly at the threshold of building sentient robots, it may seem overly dramatic to characterize this as an urgent concern, but new developments in the way we wage war should make you think otherwise. I heard a telling sign of how things are changing when I recently tuned in to the live feed of the most popular radio station in Washington DC, WTOP. The station ran commercial after commercial from iRobot (of Roomba fame), a leading builder of unmanned military robots, clearly targeting military listeners. These commercials reflect how the number of unmanned robots in military use has gone from close to zero in 2001 to over ten thousand now, with the pace of acquisition still accelerating. For more details, see Peter Singer’s ‘Wired for War’, or the March 23, 2010 congressional hearing on the Rise of the Drones here.

While we are all aware of these trends to some extent, they have hardly become a significant public concern. We are comforted by the knowledge that the final kill decision is still made by a human. But is this comfort warranted? The weight of that decision changes as the way in which war is conducted, and the highly processed information supporting the decision, become mediated by unmanned military robots. Some of these trends have been helpful to our security. For example, the drones have been effective against the Taliban and Al-Qaeda because they can carry out long-duration monitoring of, and attacks on, sparsely distributed non-state actors. However, in a military context, unmanned robots are clearly the gateway technology to autonomous robots, machines that will eventually be in a position to make decisions that carry moral weight.

“But wait!” many will say, “Isn’t this the business-as-usual-robotics-and-AI-are-just-around-the-corner argument we’ve heard for decades?” Robotics and AI have long been criticized for promising more than they could deliver. Are there signs that this could be changing? While an enormous amount could be said about the reasons for AI’s past difficulties, it is clear that some of them stem from too narrow a conception of what constitutes intelligence, a topic I’ve touched on for the recent Cambridge Handbook of Situated Cognition. That narrow conception revolved around what might loosely be described as cognitive processing or reasoning. Newer approaches, such as embodied AI and probabilistic robotics, try to integrate some of what being more than a symbol processor involves: sensing the outside world, dealing with the uncertainty in those signals in order to be highly responsive, and emotional processing. Advanced multi-sensory signal processing techniques such as Bayesian filtering were in fact integral to the success of Stanley, the autonomous robot that won DARPA’s Grand Challenge by driving without human intervention across a challenging desert course.
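To make “Bayesian filtering” slightly more concrete, here is a minimal sketch of a discrete Bayes filter for a toy robot localizing itself on a ring of cells. This is my illustration, not anything from Stanley; the landmark map, the motion and sensor noise probabilities, and the number of cells are all invented for the example.

```python
import numpy as np

N = 10                                                 # cells on a circular corridor
landmarks = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # 1 = landmark present in that cell

P_MOVE_OK = 0.8    # probability a commanded one-cell move actually happens
P_SENSE_OK = 0.9   # probability the landmark sensor reads correctly

belief = np.full(N, 1.0 / N)   # start out maximally uncertain about position

def predict(belief):
    """Motion update: shift the belief one cell forward, blurred by motion noise."""
    return P_MOVE_OK * np.roll(belief, 1) + (1 - P_MOVE_OK) * belief

def update(belief, measurement):
    """Measurement update: reweight each cell by the likelihood of the sensor reading."""
    likelihood = np.where(landmarks == measurement, P_SENSE_OK, 1 - P_SENSE_OK)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# One predict/update cycle: command a move, then sense a landmark.
belief = update(predict(belief), measurement=1)
print(belief.round(3))
```

The same predict/update structure, scaled up to laser and camera data and to particle or Kalman filters, is what lets a robot act decisively despite noisy sensors.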

As these prior technical problems are overcome, autonomous decision making will become more common. Eventually, this will raise moral challenges. One area of challenge will be how we should behave towards artifacts, be they virtual or robotic, which are endowed with such a level of AI that how we treat them becomes an issue. On the other side, how they treat us becomes a problem, most especially in military or police contexts. What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent? Do we place responsibility on the designers of the decision making systems, the military strategists who placed machines with known limitations into contexts they were not designed for, or some other entity?

Both of these challenges are about morality and ethics. But it is not clear whether our current moral framework, which is a hodgepodge of religious values, moral philosophies, and secular humanist values, is up to the task of responding to them. It is for this reason that the future of AI and robotics will be as much a moral challenge as a technical challenge. But while we have many smart people working on the technical challenges, very few are working on the moral challenges.

How do we meet the moral challenge? One possibility is to look toward science for guidance. In my next posting I’ll discuss some of the efforts in this direction, pushed most recently by a new activist form of atheism which holds that it is not only incorrect but even dangerous to think that we need religion to ground morality. We can instead, they claim, look to the new sciences of happiness, empathy, and cooperation for guiding our value system.


26 thoughts on “Guest Post: Malcolm MacIver on War with the Cylons”

  1. As long as the USA is fighting people with rocket launchers hiding in caves, I guess autonomy may not be high on the wish list. But what if there is another cold war between two fairly evenly matched technological powers, where both sides have drones and the battlefield is full of jamming, eavesdropping, and false signaling? Then added autonomy will be a necessity, both to make decisions fast enough and to be less reliant on vulnerable communication links.

  2. It’s pretty well known among mathematicians that the Gerhard Gade University Professor at Harvard got neither a high-school diploma nor a college degree, and only took his official Princeton Ph.D. because his mother insisted.

  3. Integrating ethical decision making into autonomous robots capable of comprehending ethics may itself be unethical. Of course, the ultimate sequelae of imbuing human ethical standards in robots will be the inevitable implementation by the robots of the Final Solution to the Human Problem.

  4. Caprica is a pretty awful show. I hope this isn’t the guy who invented the phrases “generative algorithm” and “It’s analog so it can’t be copied!”

  5. “How do we meet the moral challenge? One possibility is to look toward science for guidance. In my next posting I’ll discuss some of the efforts in this direction, pushed most recently by a new activist form of atheism which holds that it is not only incorrect but even dangerous to think that we need religion to ground morality. We can instead, they claim, look to the new sciences of happiness, empathy, and cooperation for guiding our value system.”

    We certainly don’t need religion to ground morality, since after all religion is simply a post hoc supernatural justification of moral capacities given to us by evolution. But whether science can add a specifically normative element beyond its description of what those capacities are is an open question. New atheist Sam Harris (see his website and his TED talk) has been pretty active in pushing the idea that science *can* tell us what’s right and wrong, since human well-being is a matter of particular, objectively determinate brain states. But not everyone will buy the claim that human well-being so measured is the ultimate moral good. They might think the world to come is more valuable than anything we experience here and now. So I don’t think he’s going to convince either the philosophical community or religious fundamentalists that science can close the is-ought gap.

  6. “What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent?”

    What happens now when navigation algorithm errors crash the drones into villages instead of landing them on runways?

    As for the future, a system that protects and insulates CIA agents from the consequences of killing innocents could probably be expanded to cover robots.

  7. Low Math, Meekly Interacting

    Do accidents caused by imperfect AI differ fundamentally from any other accident in wartime? The interwebs were recently ablaze with opposing bouts of hand-wringing and chest-thumping when WikiLeaks showed us all how Namir Noor-Eldeen and Saeed Chmagh met their ends. Fog of war, etc., regrettable, but not unethical or criminal, so we’re told. If collateral damage is an “acceptable” cost of the war business, then the odd glitch in an ethics algorithm or other such unforeseen consequences of implementing autonomous weapons don’t seem all that exceptional. Unfamiliar, maybe, but is this more the shock of the new than anything else? If a robot can screw up and it’s such a cause for earnest philosophizing, how has a measure of deadly human miscalculation come to be seen as normative?

  8. You can’t blame a science advisor for the quality of a show, and “generative algorithm” is a fairly standard phrase in machine learning (Google it), whether or not it was used correctly as dialogue.

    I think there is hand-wringing about mistakes (prefer not to use the word “accident”) that might be caused by semi-autonomous actors because it’s not clear where the responsibility lies and how much care was taken to not screw up. When a human screws up, we have some idea of how to assign responsibility, although it’s modified by chain-of-command issues (officers are supposed to be responsible for actions of subordinates, etc.) If a thing screws up, do we blame the engineer, who wasn’t there at the time? Do we say it’s nobody’s fault? Does the engineer have the same incentive to not screw up as a soldier who is on the spot themselves? Practically everybody has encountered some trivial problem, like a wrong bill, that gets blamed on “the computer,” but we know it’s not the computer’s fault – billing programs don’t have agency. What if they did? That’s a legitimate subject for debate.

  9. Low Math, Meekly Interacting

    Well, OK. Say the robot made an “honest” mistake. I suppose it could be dealt with the same way the military deals with other “honest” mistakes, e.g., “Oops, sorry, fog o’ war”. If the programming is poor, I suppose then the manufacturer might be criminally liable, or maybe civilly liable to victims, sort of like Blackwater might be civilly liable for the misdeeds of poorly-trained mercenaries. Is it really so difficult?

  10. “It is for this reason that the future of AI and robotics will be as much a moral challenge as a technical challenge.”

    This is complete nonsense. Thousands of scientists have been working for generations on the technical challenges, and we are still quite far away from a solution. The moral aspect is a relatively trivial problem. Our “hodgepodge” of moral values is a very flexible framework, well used to incorporating new technologies. It can’t be reinvented, and there is no reason to do so.

  11. @joe: Actually, the idea of not being able to copy analog behavior from one chip to another stems from real attempts to evolve genetic programs on FPGAs. When a selection function is applied and random mutations are introduced to the array, programs evolve that take advantage of analog behavior unique to that particular chip, so when the gate configuration is copied to another chip, the program no longer works.
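    To make the evolve-select-mutate loop concrete, here is a toy sketch (not from those experiments) of the kind of genetic algorithm involved. The genome length, population size, mutation rate, and placeholder fitness function are all invented for illustration; on real hardware, evaluate() would program the FPGA and measure the circuit’s performance, which is where the chip-specific analog quirks creep into the evolved solution.

    ```python
    import random

    GENOME_LEN = 64       # stand-in for a bitstring describing the gate configuration
    POP_SIZE = 20
    MUTATION_RATE = 0.02

    def evaluate(genome):
        # Placeholder fitness; a real run would score the configured chip on its task.
        return sum(genome)

    def mutate(genome):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    # Random initial population of candidate configurations.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

    for generation in range(100):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: POP_SIZE // 2]                              # selection
        children = [mutate(random.choice(parents)) for _ in parents]   # mutation
        population = parents + children
    ```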

  12. I’m curious to read the next part. I’m doubtful science can give any answers to the moral questions raised because there are too many unknowns. Personally, I’m against any use of AIs or drones in combat, simply because war becomes a meaningless game when human lives aren’t at risk on both sides. It reduces the moral responsibility of a people’s choice to go to war.

  13. Reginald Selkirk

    He’s also one of the only people I know who has …

    This phrase “one of the only” is stupid and meaningless. Please better yourself by not using it any more. Alternative phrases which would actually have meaning include “one of only a few”; “the only” (which emphasizes that only means one); “one of the few”; and “one of only N” (where N is some small number).

    Thank you,

  14. “a U.S. military research agency that funds some of the work in my laboratory”

    So, first you prove that you do not even grasp the concept of ‘ethics’ and then you proceed trying to discuss its implementation?

  15. Frankly it looks to me like Malcolm is inventing problems where there are none.

    Malcolm: “One area of challenge will be how we should behave towards artifacts, be they virtual or robotic, which are endowed with such a level of AI that how we treat them becomes an issue.”

    Well, self-conscious machines are still pure science fiction, so it’s as important an issue as how we should treat human clones or humans with brain enhancing implants. We have enough real problems to worry about to waste time and resources on hypothetical future ones.

    Malcolm: “What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent?”

    The same thing that happens when any other human creation kills an innocent.

    Why do you think it makes any difference that it is an autonomous war robot and not a malfunctioning rifle or even a vacuum cleaner? Those who made it are responsible.

    The only thing that could make a difference would be if said robot were self-conscious, but in this case see above.

  16. Kaninfaan, how’s he not being ethical? Considering how wrong you are, I’m really curious to hear your logic behind that statement.

    And Mantis, since when has foreseeing a problem been considered inventing it? Would you rather have us just deal with such problems when they happen? Taking these questions into consideration when doing the research that advances such developments is the only ethical way to conduct research, so in that sense Malcolm is doing exactly what he should be.

  17. Chad: “And Mantis, since when has foreseeing a problem been considered inventing it?”

    It’s foreseeing if you can show how present conditions can plausibly evolve into the ones causing the problem.

    If your problem requires near-miracles to manifest itself, it’s inventing.

    The only “novel” ethical problem hinted at in the article is the one of self-conscious machines (it is as old as SF, but it would be novel to reality if it were to somehow materialize), but such machines remain as much a pipe dream as ever.

  18. Loved the Caprica episodes I’ve seen, keep them coming. I haven’t had the time to watch it much, but I’ll catch up with it the same way I did with Battlestar Galactica: by renting the DVDs. Especially loved the subtle understanding of history shown by the writers.

  19. Aleksandar Mikovic

    Robots will be useful, but their “intelligence” will never reach the level of a human mind, simply because Goedel has shown that machine reasoning is incomplete.
    So for a robot there will always exist a situation where it will get stuck, i.e. it will not be able to decide what to do. As far as ethics for robots is concerned, it will be even worse, since many ethical dilemmas cannot be put into a machine-logic form.

  20. Thanks for the interesting feedback on my posting – here are some responses.

    Tom Clark suggests that normativity and science are unlikely to get together. I’m not so sure. If I want to maximize the well-being of conscious humans (as Harris likes to put it), and the way to maximize this is through policies X, Y, and Z, then I ought to implement policies X, Y, and Z. If the point is that science can’t tell me that I should seek to enhance the well-being of the people affected by a moral decision, then, agreed, it cannot. But every logic, be it informal or formal, has premises that have to be accepted without themselves being the result of an argument – on pain of infinite regress. This is a pervasive feature of reasoning, not something peculiar to normative claims. All the rest is, potentially at least, an empirical question: what states constitute well-being, how they are measured, and what policies and actions maximize them.

    Low Math, Meekly Interacting and Mantis comment to the effect that an autonomous robot making an error and killing an innocent is not substantially different from a soldier making the same error (or a vacuum cleaner). Actually, as Singer points out in ‘Wired for War,’ product liability law may provide a model for some of the legalities we need to think of when it comes to autonomous war machines making errors. Singer also mentions that laws concerning damage inflicted by your pet may provide another source, since pets are autonomous along some of the same lines as these future robots will be. We are far, however, from articulating how we will apply these ideas to unmanned and autonomous warfare. This much is clear, and there’s a lot of empirical evidence for our discomfort in not knowing what to do in this arena, including the “Rise of the Drones” congressional hearing I linked to in my posting, and articles such as this one where a law professor suggests that a drone pilot may be held culpable for his or her actions: http://is.gd/bNKcE.

    There are also issues unique to autonomous war machines: consider that errors in the way we code or command them could result in the death of large numbers of innocent people. Is this something that we, as a society, should just say is covered by the insurance policies of the military contractors involved, or are more serious criminal penalties in order? The sheer potential lethality takes the debate considerably out of the ambit of vacuum cleaner casualties. Effective autonomous war robots are a significant enough change to the way we have previously assigned culpability in these situations that researchers, policy makers, and politicians need to start working through the issues now.

    Mantis says that it’s not worth considering the problem of how we should treat robots or virtual characters imbued with AI, since there is no such thing. Why think about this when there are real problems to solve? I appreciate the practical outlook behind this comment, although I don’t think it’s too early to begin to work these things out. This is in part because the emphasis on a single far-off moment when things become conscious or sentient is misguided. Our own consciousness evolved in many steps, and we can see this in the cognition of non-human animals. I’ve speculated about the evolutionary antecedents of planning, a form of consciousness, in earlier work (http://is.gd/bNKib). An evolutionary perspective can suggest some of the ways in which we may start to approach sentient artifacts long before they are recognizably sentient. We are already seeing some discussion of the emotional connections that soldiers and home users of robots can develop with their robots, and of the ethical implications of this.

  21. Low Math, Meekly Interacting

    But the sheer and frequently-realised lethality of human misconduct or miscalculation is also monstrous. Even our conventional weapons can empower a single human being with the ability to kill hundreds, maybe even thousands. Humans are far, far from infallible. The incomplete knowledge and dilution of responsibility afforded by quite normal chain-of-command decision-making is a potent, well-proven vehicle for mass slayings of non-combatants. I have no doubt the real tally of civilian deaths in Iraq and Afghanistan inflicted by American forces alone would fail to shock, yet under these circumstances nothing is “wrong”. It’s war, people regrettably get killed. This appalling state of affairs is, as I stated, quite normative. It’s not that I wish to trivialize the lethality of errant and heavily-armed AIs, nor do I put them on par with a malfunctioning vacuum cleaner (which is an absurd analogy). I just don’t see autonomous drones and the like as such an ethical stretch, given how savage and wasteful “good” flesh-and-blood warriors are more-or-less expected to be as a cost of doing business. Autonomous weapons are new, strange, and scary, but I’d scarcely expect them to do a worse job than the human actors we currently rely on. If an intelligent drone dropped a Hellfire missile on a playground, is that so different from ordering a human to bomb a wedding? Not to me, anyhow.

  22. I don’t know how determining states of well-being is an empirical question. That is, it could be, but surely you could get answers such as Soma, porn gum, and Beverly Hills 90210? That’s not even three different things…

    Perversely, autonomy and well-being are – for humans at least – to some extent conflicting desires.

    I suspect with AIs that we will never be sure that there is “anyone home”, an experiencer. Of course, that’s true of many human Saturday nights.

  23. “Isn’t this the business-as-usual-robotics-and-AI-are-just-around-the-corner argument we’ve heard for decades?”

    While AI may still be “just around the corner” (for some values of just around the corner), robotic weapons are already here – and have been for decades. Consider the Harpoon anti-ship missile. It’s given an area to search, launched, and from that point, makes its own decisions (based on programming, it’s not like it’s even minimally conscious) about what to attack. Consider Phalanx (and other close-in weapon systems). You turn it on, set it for AAW Auto mode, and it begins searching for targets. If it sees something it considers a threat to the ship… it attacks. If it sees something it doesn’t consider a threat… it doesn’t attack – and you can’t force it to (I’m speaking of its missile defense capability – there are anti-surface modes that are purely manual). (A toy sketch of this kind of fixed engagement rule appears at the end of this comment.)

    There are probably ethical concerns involving truly intelligent systems, but as we’re not close to inventing weapons that are intelligent in any real sense of the term, this is an angels-dancing-on-pinheads question, at least for the time being. The ethical questions for systems that are autonomous but not intelligent – well, they’re not really capable of doing ethical analysis. I guess the bottom line is that this line of inquiry is sort of interesting but not really very applicable to anything, and won’t be for quite a while.
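    Purely to make “autonomous but not intelligent” concrete, here is a toy sketch of the kind of fixed, pre-programmed engagement rule described above. It is only an illustration: the Track fields, the thresholds, and the function names are all invented, and no real weapon system works from anything this simple.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Track:
        speed_m_s: float   # measured speed of the contact
        range_m: float     # distance from own ship
        closing: bool      # whether the contact is heading toward the ship

    def is_threat(track: Track) -> bool:
        # Hypothetical threat profile: fast, close, and inbound.
        return track.closing and track.speed_m_s > 250 and track.range_m < 5_000

    def auto_mode_decision(track: Track) -> str:
        # No judgment and no context: the rule either fires or it doesn't.
        return "engage" if is_threat(track) else "hold fire"

    print(auto_mode_decision(Track(speed_m_s=300.0, range_m=3_000.0, closing=True)))  # engage
    print(auto_mode_decision(Track(speed_m_s=80.0, range_m=3_000.0, closing=True)))   # hold fire
    ```

    The point of the sketch is simply that nothing in such a rule is doing ethical analysis; it classifies and acts.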

  24. Low Math, Meekly Interacting

    How about some of the oldest and dumbest weapons with a measure of autonomy we’ve got, namely heat-seeking missiles? Probably pretty dangerous if you just went around launching them willy-nilly. But if you put such a weapon at the disposal of a cadre of highly-trained soldiers, some of considerable rank, manning an Aegis-equipped cruiser, well, why worry?

    http://www.washingtonpost.com/wp-srv/inatl/longterm/flight801/stories/july88crash.htm

    While the U.S. govt. provided some restitution to settle a lawsuit, apparently our armed forces did nothing “wrong”. Officially, we “regretted” the incident. I bet a lot of people don’t even know this happened.

    A modest proposal: Ethicists and psychologists figure out how to equip humans with the skills to flawlessly operate sophisticated, at best semi-autonomous weapons, and the problem of AI agency will likely be solved as a bonus.

