Another Step Toward Skynet

There should be some government program that forces scientists to watch dystopian science-fiction movies, so they can have some idea of the havoc their research is obviously going to cause. I just stumbled across an interview with Nobel Laureate Gerald Edelman that has been on the site for a couple of months. (Apparently the Discover website is affiliated with some sort of magazine, to which you can subscribe.)

Edelman won the Nobel for his work on antibodies, but for a long time his primary interest has been in consciousness. He believes (as all right-thinking people do) that consciousness is ultimately biological, and is interested in building computer models of the phenomenon. So we get things like this:

Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons stray back and forth, as has been described by scientists in human beings who aren’t thinking of anything.

In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.
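For the curious: Izhikevich is best known for a two-variable spiking-neuron model, which is presumably the kind of unit underlying the simulation described above. Here is a minimal single-neuron sketch; the parameters are his published "regular spiking" values, but the constant input current is my own illustrative choice, not something from the interview:

```python
# Minimal sketch of a single Izhikevich spiking neuron.
# a, b, c, d are the "regular spiking" cortical-cell parameters;
# the constant input current I is an illustrative assumption.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = -65.0          # membrane potential (mV)
u = b * v          # recovery variable
spikes = 0
for ms in range(1000):              # simulate 1000 ms
    I = 10.0                        # steady driving current
    # Euler integration in two 0.5 ms half-steps for numerical stability
    for _ in range(2):
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    if v >= 30.0:                   # spike: reset potential, bump adaptation
        spikes += 1
        v, u = c, u + d
print(spikes)  # under constant drive the neuron fires tonically
```

Wire up a million such units with synaptic currents feeding each neuron's input term and you have the skeleton of the large-scale simulation he describes; the self-sustained "intrinsic activity" comes from the recurrent connections, which this single-neuron sketch omits.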

[Image: Terminator robot]
Oh, great. We build giant robots, equip them with lasers, and now we teach them how to gaze at their navels, and presumably how to dream. What can possibly go wrong?

86 thoughts on “Another Step Toward Skynet”

  1. Sounds cool to me, though I would probably appreciate it more if they were researching ways to keep our brains ticking over for another century or more, rather than inventing new ones 🙂

  2. I dunno–sounds like grounds for detention at Guantanamo to me. How do you torture computer memory into compliance with your suppositions? (Waterboarding would be a short answer)

  3. Pingback: Brain-Based Devices « Speculative Heresy

  4. From the article:

    “An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be. AI robot soccer players make mistakes because you can’t possibly anticipate every possible scenario on a field. Instead of writing algorithms, we have our BBDs play sample games and learn, just the way you train your dog to do tricks.”

    Ummm, while I mostly didn’t go to class when I was in school, I went to a few, and the ones on AI seem to contradict that statement.

    He sounds like when a philosopher tries to talk about physics and isn’t really familiar with the literature.

  5. I for one welcome our… darn it:P. Beat to it.

    EDIT: Testing new editing feature. WEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE!! I love being able to edit my bad spellings away:).

  6. So… what percentage of the gazillions of neurons in a real brain are dedicated to motor control and the autonomic nervous system? If the hardware, including I/O, replaces that number of neurons, how many artificial neurons are still needed to evolve consciousness?

  7. “He believes (as all right-thinking people do) that consciousness is ultimately biological”

    That’s awfully dismissive. Sure, people like Chalmers don’t seem to be very popular amongst physicists, but it seems a little harsh to call them wrong-thinking.

  8. I wonder how much activity is needed for the neural net to qualify for entry into heaven?

  9. “Sure, people like Chalmers don’t seem to be very popular amongst physicists, but it seems a little harsh to call them wrong-thinking.”

    I’ve only skimmed Chalmers’ arguments, but it seems to me that he still believes in a naturalistic source for consciousness (as opposed to supernatural) and allows for the possibility that AI systems can also have consciousness, so in the end his philosophical differences with the majority of scientists in the field are probably moot regarding this particular issue.

    In my own view, there has been plenty of evidence found in the last few years that biology is indeed all that’s necessary for a functioning brain to be a conscious entity. There certainly hasn’t been any evidence found to the contrary.

    So if, one day, we succeed in fully emulating the functioning of a human brain with a mixture of silicon and software, then, given the right external stimuli (i.e. education, experience), the result will be indistinguishable from a conscious human being. Creating a new entity from scratch would also require emulating the growth of the brain from birth through childhood, so I’m not expecting a breakthrough any time soon (!), but one shortcut would be to take a copy of the state of an adult’s brain, which would think and act just like the original (at least to start with).

    None of this is remotely possible today, of course, but it’s fun to speculate, and Izhikevitch’s experiments seem to be pointing us in the right direction.

  10. “I wonder how much activity is needed for the neural net to qualify for entry into heaven?”

    Heh. That certainly will be an interesting debate. If you can create new biological human beings in a test tube and they still receive ensoulment (no religious person or body rejects this notion, as far as I know), then what about creating artificial human life that is truly conscious?

    My guess is that the debate will divide into camps based on a force even more fundamental than religion—i.e. politics—with left-wing Christians accepting the possibility of AI humans having a soul, and right-wing Christians calling such an idea an abomination.

    Of course, should we ever reach the point where AI humans are a reality, it would pose a whole host of difficult problems related to the issue of what it means to be alive. When you power off an AI, is it dead? Would it make a difference (in terms of being dead) between saving the current state before powering off, or not? And if you make a backup copy, is that a separate entity, even if you don’t switch it on while the original is still functional?

    I can see that philosophers and ethicists are going to have a whale of a time sorting through all this stuff when it finally happens for real!

  11. John R Ramsden

    Re #14, although Christianity, and other religions AFAIK, consider a human soul to be indivisible, I dare say in the scenario you describe (which will undoubtedly come to pass sooner or later at the present rate of progress) religious folk will develop a rear guard argument that all those who helped develop AI have in some sense each contributed a portion of their soul to the product. Gives a new meaning to the phrase “putting heart and soul into one’s work”.

  12. @tacitus,

    I’m a computer guy, although not an AI type. My understanding of all neural nets thus far developed is that they share common characteristics with biological ones. Therefore:

    1). Regarding backups, restores and cloning. You can do so, but it doesn’t matter. It’s like cloning an animal or (in principle) a person. You get a new creature physically based upon the original, but it’s truly a new creature. It doesn’t know what the original knew, the two aren’t “linked” in any way, and it has a different mind and consciousness.

    That seems to be because the neural network consists of more than just the physical arrangement of the circuits involved. In biological neural networks this includes the electrical potentials and neurochemical mixes at the axon interfaces. This is the stuff you cannot clone, and I believe that the ultimate physical reason you cannot is the Heisenberg Uncertainty Principle. In computer neural nets, ah, uh, at this point you hit the limits of my knowledge. I just don’t know.

    Anyhow, it’s my understanding that cloning a neural network is ultimately just a variation on the physical reproduction methods of any species. No one thinks that their children “are them”, at least no one sane and healthy. You’ve created a new organism. The same should hold true of artificial neural networks.

    2). Turning off the net. This would be like a loss of consciousness for you or me. It would not be comparable to death. You’d have to irreparably damage the net to achieve the equivalent of death. In fact it would then be dead.

    It would not be like sleeping either, unless you gave the neural network some sort of control over the on/off switch, even if it was only indirect. Biological neural nets can be roused from sleep by external stimuli, and also by becoming rested. The artificial equivalent of sleeping would be a low-power, low-activity state with some sort of timer mechanism, maybe?

    What is really interesting is dreaming. Dreaming seems to be fundamental to conscious minds, so it’s quite possible that an artificial neural network would have to dream too. My understanding is that the role of dreaming is still poorly understood. The theories range from “white noise that doesn’t mean anything” to “searching for meaning in the day’s events” to “pruning memories and knowledge” to “consolidation and ordering of facts”.

    I’m sure I have some of the details wrong. Mostly I’m an interested observer of the AI field.

  13. So, if the machines with neural nets are conscious, are they citizens of the nation where they were manufactured? If so, can they vote immediately, or do they have to be around a specified number of years before being granted the right to vote? Is the specified number of years precisely the same as for humans, and regardless of the type of machine? If so, on what grounds, given the differences in computational speed between different machines (and between machines and humans, for that matter)? If their consciousness was recognized, but they still weren’t allowed to vote, what other ‘human’ rights would they be denied?

    I don’t think Terminator is the relevant story; it’s ST:TNG.

  14. @Brian,

    Interesting comments. I have read a little about how there is no set “state” the brain is in (the electrical signals are constantly changing and always active throughout the brain), but I don’t know enough to say whether losing that activity (in a reboot, for example) would mean losing the essence of the AI’s personality/consciousness.

    I guess the question is whether you can save enough “brain state” before powering down in order to perform a successful restart when you power the AI brain back on.

    If you can’t, then powering off would be like biological death, and the rebooted AI would be like a new individual brought to life; but if you can restore most if not all of the saved state, then I guess powering off would be most like going into suspended animation for an uncertain length of time.

  15. ” He believes (as all right-thinking people do)….”

    Great! Maybe there should be a government program to tell right-thinking people from wrong-thinking people and tell people what to believe…. We had that in the USSR.

  16. ” He believes (as all right-thinking people do) that consciousness is ultimately biological, and is interested in building computer models of the phenomenon”

    I met many philosophers who argue that human consciousness is ultimately social. For the experiment in question they would not see any chance of success, or even the possibility of recreating the brain of an insect. Some AI experts, and some linguists like Lakoff and Johnson, have written volumes to argue that bodily experience is essential for the development of consciousness, so they think along the same lines as those philosophers.

    There are many other things in the philosophy of consciousness which are interesting, but would not qualify as “right-thinking” in a possible world where there is a Church or a Ministry of Education that defines and enforces right-thinking.
