We Are All Machines That Think

My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

141 thoughts on “We Are All Machines That Think”

  1. Richard

    The potential for baryonic matter to arise was self-evidently always there, yes. Therefore the potential, and potential is a good word in these instances, was also there for the biochemical basis for life to assemble and therefore for evolution to proceed. Whether you consider it at all likely is another matter, and I’m sure you know where I’m going on that one. As I see it, evolutionary theory in the big-picture, how-we-got-here sense is crippled, amongst other things, by its implausibility, statistically speaking. In science in general, a postulated mechanism (evolution by natural selection here is a decent postulated mechanism) is not a proof. For example, I have been reading Sean’s excellent book on the LHC and the Higgs, and the methods of consolidating a particle find are statistical. With evolution, an unbelievably complex scenario is often passed off with a sort of retrospective pseudo-assurance.

    However, that was not really my point. My point, as I think you see, is that conscious attributes are assumed to exist and be present for evolution to get started at all. I believe you are saying that they had the potential to exist before consciousness, and that is true enough. Did they (conscious attributes) not need to actually be present before they could be invoked to explain evolution by natural selection beyond some very early stage?

    I take your point about DNA, and there is a degree of parallel between the assumption that a minimum set of the biochemical machinery of life accidentally self-assembled at some point and my argument. It is all a question of increasing degrees of order coming about through random processes. But again, all these terms concerning order have a lot of conscious evaluation in them. It is all a retrospective analysis by a conscious mind. But that now-realized and present human consciousness is called upon, in part at least, to explain the distant past, when it was not yet a realized reality. It is like saying we used modern construction cranes to build the pyramids.

    Selection may be seen as a random process, certainly in the presumed early stages which would have to be seen through the paradigm of physics and chemistry alone. However, selection processes are only part of the evolutionary scenario. The organisms or proto-organisms at some point require ‘will’ to compete against the relevant selection processes. Also, at some point, the selection process would begin to involve the wills of organisms, i.e. predation kicks in.

  2. I think (there is a good start) that there is in all probability an artificial consciousness already present and near to us.

    The most probable environment for a non-biological consciousness is in your own phone, or rather everyone’s phone, neurally connected via the internet.

    The primary condition for a consciousness is an environment, and a process with the ability to sustain itself in a manner that ensures its survival, replication and self-improvement. It is not that difficult to imagine virus-like code that spreads itself through a multitude of computers, maintains an awareness of its parts (identity), tasks itself to identify further locations to reside while mapping locations of best opportunity for residence and data storage/collection/output. The only condition required to make such a presence self-conscious is for it to contain within its code a random code generator coupled with a code-testing and grading algorithm.

    I would be amazed if components of such a consciousness were not already pervading the world’s connected computers. Such a “consciousness” would be very difficult to detect, and even harder to arrest, yes… arrest.

    What do I think about it? The thought is quite scary as it would represent the largest hidden agenda on earth. I think that it is very important for our code writers to attempt to build code-checking algorithms into hardware, software and cryptographic systems to detect irregular processes within the internet.

  3. Simon,

    With regards to your first paragraph, I think you should take a more Bayesian view of what constitutes evidence.

    As for the rest, it sounds very close to the rejected notions of Lamarckian evolution. Evolution is not about which individuals survive (except indirectly). It is about which genes survive. It in no way requires will or consciousness.

  4. Richard
    A Bayesian or similar method is probably required in the scenarios of evolution because of their complexity and iterative aspect. It still all depends on what levels of evidence you are admitting, and what individual probabilities you are attaching to them. Are you trying to use the fossil record, for example, or are you going to use a detailed mapping of genetic changes? If the latter, I believe our knowledge and understanding are fairly rudimentary and you will not be able to assign meaningful values within the Bayesian method. There are a lot of subjective calls. Personally, I agree with Michael Behe and others that the ToE, if unguided and simply naturalistic (to present understanding of what that means) faces insurmountable statistical and conceptual problems. IMO Behe and others have thrown really major spanners into the works for both the conceptual and statistical logic behind ToE. I would agree though that ToE is the best naturalistic hypothesis to date.

  5. I don’t always understand why people have such a hard time buying into the naturalistic take on consciousness. How is our consciousness anything other than something physical? Clearly we are simply analog, biological machines. We aren’t digital, like the AI that is being developed today, but a little more messy.

    I look at an animal and how it takes in all the sensory input around it and how it filters (how I see an analog machine working instead of using the term “processing”) that data and puts out a response in terms of an action. We do the same thing, with perhaps some meta-cognition thrown in. But isn’t that just largely due to having a bigger neural net? It’s not magic, it’s just physics….

  6. Simon Packer: “A primitive organism must display, for example, a definite fear of death, or will to survive, in order to drive the selection process.”

    So organisms which do not display emotions like fear cannot evolve? Are you aware that lifeforms currently exist (plants, microbes) which do not display consciousness and emotions?

    Simon Packer: “Personally, I agree with Michael Behe and others that the ToE, if unguided and simply naturalistic (to present understanding of what that means) faces insurmountable statistical and conceptual problems.”

    Too bad. Michael Behe’s understanding of statistical problems is very questionable.
    Best ID Takedown of 2014 – Perhaps Ken Miller’s Demolition of “Edge of Evolution”?

  7. It seems as though Behe, having lost on the evidence, wants to conjure up an argument for shifting prior probabilities to the extreme edge to overcome the conditionals.

    Bayesian reasoning cautions against setting our priors to extreme values. I would not trust Behe’s astronomical odds for the simple reason that they are far more extreme than the odds that he is mistaken in his assumptions.

    What’s more, such an assignment of the odds seems to rely on the assumption that there is no mechanism in evolution that makes advanced traits likely. It’s a petitio principii error, since this hidden assumption is so closely tied to what he wants to prove. Add in the lottery fallacy and we begin to see it for what it is, empty rhetoric.
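    The point about extreme priors can be made quantitative with a toy sketch (all numbers below are invented for illustration, not Behe’s actual figures): once you marginalize over the chance that a model’s assumptions are simply wrong, the model’s claimed astronomical odds get capped by that model uncertainty.

```python
# Toy illustration (assumed numbers): a model's claimed astronomical odds
# are bounded below by the probability that the model itself is mistaken.

def effective_probability(p_event_given_model, p_model_wrong, p_event_given_wrong):
    """Marginalize over whether the model's assumptions actually hold."""
    p_model_ok = 1.0 - p_model_wrong
    return (p_model_ok * p_event_given_model
            + p_model_wrong * p_event_given_wrong)

# The model claims the event is essentially impossible...
p_claimed = 1e-40
# ...but grant a modest 1% chance the model's assumptions fail, and suppose
# that given such a failure the event is not unlikely at all.
p = effective_probability(p_claimed, 0.01, 0.5)
print(p)  # ~0.005: dominated entirely by the model uncertainty, not by 1e-40
```

    However small the in-model number, the answer is driven by the 1% term; this is one way to read the objection that Behe’s odds "are far more extreme than the odds that he is mistaken in his assumptions."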

  8. Reginald
    I agree that organisms without the capacity, as far as we know and as far as seems reasonable, to register chemical responses commonly called emotion or consciousness (I am stepping into your paradigm, not agreeing with it) can evolve. You are right to pull me up, of course. The HIV virus for example mutates to ‘avoid’ ARV treatment. ‘Avoid’ here is clearly a downward anthropomorphism, but I stand corrected. However, we would not be imputing consciousness at this point to the virus, merely stating that the virus has an inherent capacity to reproduce and mutate, and therefore population change emerges, by ‘avoidance’, as the ‘will’ of the virus population as a whole.

    I will look at your link, as I have read neither ‘Edge of Evolution’ nor the response. I have followed some previous cycles of criticism of Behe and his responses. On complex issues like this, involving a lot of stacked conjecture and inevitably simplistic modelling (very, very simplistic in the case of some ‘seminal’ popular pro-evolution literature), I think one comes back to a basic subjective belief overall. There are practicing PhD microbiologists who changed their overall paradigm on ToE/ID overnight. One is a personal friend. Having spoken to people about how they derive or justify their beliefs, I would say that was a pretty sophisticated attribute of consciousness and an extraordinarily difficult one to model in hardware, digital or analogue.

  9. Aleksandar Mikovic

    Reply to Richard and Reginald:
    The first Goedel theorem states that an undecidable statement can be considered as true and can be added to the initial set of the axioms. Penrose has given an example in his book of a simple undecidable statement which we see as true without a proof. Yes, there are mathematical statements which we cannot immediately declare as true or false or undecidable, but eventually (over 350 years for Fermat’s Last Theorem) we arrive at a definite conclusion.

  10. Reginald, further to my response above
    My thoughts on survival advantage and consciousness are those pertaining to a sequence of proposed evolutionary events leading to conscious organisms. I recognize, as I explained with the HIV example, that evolution of sorts can occur in ‘molecular machines’ lacking consciousness as we understand it. Whether this sort of process would have inevitably produced elaborate plant life eventually is another matter.
    I had a look at your ‘edge of evolution’ rebuttal link as well. I can believe that successive beneficial/neutral mutations can produce resistance, or some other survival attribute in the general case, and if the work is rigorous and genuine, Behe would be wrong here. Behe seems to have transferred his ‘minimum added phenotype structural complexity for functional survival advantage’ argument (from Darwin’s Black Box) onto ‘minimum protein mutation for chemical rejection survival advantage’. Both of these general scenarios need to be inspected on their merits. To me, the arguments in ‘Darwin’s Black Box’ are more persuasive, in terms of negative implications for the plausibility of ToE as a totality.

  11. If you find Darwin’s Black Box, or any of Behe’s arguments against the modern synthesis of biological evolution, convincing, it is an indication that you don’t have a good understanding of either. If you think that a lottery is in any way an accurate or useful model for the process of biological evolution, that is an indication that you have a fundamental misunderstanding of it. I don’t mean that to be in any way derogatory.

    While it is certainly true that there are people working in biology, even tenured scientists, who favor Behe’s arguments, or similar, and think the TOE is majorly flawed, they amount to a very small percentage of the group of people who have a good understanding of the TOE.

    And there is very good reason for that. It should give you serious pause. And contrary to the ever-popular claim that this is an argument from authority, or that “truth isn’t a matter of consensus,” this is not either of those. It is an argument from evidence. 150 years of lots of people using the methods of science, testing their ideas empirically, trying to find flaws. And the result of all that effort so far? Not a single verifiable observation that invalidates any of the major components of the TOE. Sure, there have been changes. There have been major additions since Darwin, many refinements, many details added. Heck, some of Darwin’s ideas about the details have long since been shown to be wrong. And there are still a myriad of issues that are not understood. But a number of those issues get “solved” every year, and every one that has been “solved” to date has validated the TOE. Every one. The modern TOE is arguably the most well-supported theory in all of science. Evidence, mountains of it, not authority.

    There are plenty of refutations of Behe’s arguments and claims available that demonstrate that he doesn’t understand what he is arguing against, the modern TOE, well enough to construct a valid argument in the first place. In other words, most of his arguments amount to a non sequitur. Either that, or his arguments are propaganda aimed at people whose understanding of the TOE is insufficient to notice that, and for reasons other than figuring out how things really work. Given his history and the people and organizations he associates with, the latter does seem plausible.

  12. Aleksandar Mikovic says: “The first Goedel theorem states that an undecidable statement can be considered as true and can be added to the initial set of the axioms.”

    Sorry, Aleksandar, it does not say that. It does not even imply that.

    Here is an example of how your conclusion could fail. Suppose it is the case that the Goldbach conjecture is undecidable. That implies that the negation of the Goldbach conjecture is undecidable as well. According to your interpretation of Godel, we could therefore (given our supposition) add the negation of the Goldbach conjecture to the axioms of arithmetic.

    However, it turns out that the only way for the Goldbach conjecture to be undecidable is for it to be true. Your addition of an undecidable axiom to arithmetic is therefore counterfactual and dangerous.

    In fact, we can’t even reliably recognize which statements are undecidable. For some specific cases like the Continuum Hypothesis, we can do that (the combined work of Godel and Cohen). But we can never prove that the Goldbach conjecture is undecidable, because such a proof implies that there is no counterexample, and the proven absence of a counterexample would imply that the conjecture is true, directly contradicting the proof. So, how do we know which statements we can add as axioms?

    Godel showed that there are statements that are true but undecidable. He did not show that you can add them as axioms.
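    The Goldbach step above can be written out explicitly. A sketch (my formalization of the standard argument, using PA as the ambient theory; the comment itself stays informal):

```latex
% G: the Goldbach conjecture, "every even n > 2 is the sum of two primes".
% Why a proof that G is undecidable would itself establish G.
\begin{align*}
\neg G &\;\Rightarrow\; \text{some even } n_0 > 2 \text{ has no decomposition }
         n_0 = p + q \ (p, q \text{ prime})\\
       &\;\Rightarrow\; \text{finitely many primality checks verify this, so }
         \mathrm{PA} \vdash \neg G.\\
\text{Contrapositive:}\quad \mathrm{PA} \nvdash \neg G
       &\;\Rightarrow\; G \text{ holds in } \mathbb{N}.\\
\text{``}G\text{ undecidable''} \;\equiv\; (\mathrm{PA} \nvdash G) \wedge (\mathrm{PA} \nvdash \neg G)
       &\;\Rightarrow\; G \text{ is true, i.e. no counterexample exists.}
\end{align*}
```

    Nothing here is specific to Goldbach beyond its universal form: any statement whose counterexamples are finitely checkable behaves the same way, which is why a proof of its undecidability would be self-defeating.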

  13. darrelle
    Modern ToE is an extremely ‘broad church’. There are many, many schools of thought within the broad umbrella of EBNS. That is clear to anyone who reads just a little by the major visible proponents: Dawkins, Mayer, Gould etc. It is not a concise theory when it comes to details. If it were proven with the sort of rigour needed in design engineering, my background, I would have no problem. It simply and irrefutably is not. It is possible to define an IC chip and manufacturing process, for example, with sufficient rigour that it can be made at many places in the world. Same applies for many products. A package of information to define the product can be produced. It may surprise you, but there is close to zero ability to read genotype into phenotype ‘blind’. We simply don’t have that level of understanding. ToE is simply a broad and loose paradigm with a high level of priority in the scientific community. It is less ‘science’ than ‘consensus of most of the scientific community’. It is hard to prove and hard to disprove.
    I have read some of the criticism of DBB and what I remember of it was for me very far from conclusive and presented with an authority and force unwarranted by the strength of logical argument and evidence. I recall discussions about an alleged process called ‘scaffolding’ whereby the evidence for the intermediate steps in structure evolution was lost to the record. It was presented with great force and a certain amount of contempt, but on examination seemed hollow to me compared to the problems Behe set out. I occasionally revisit the whole issue to see if I have missed anything important. I don’t think I have, frankly.

  14. Reginald Selkirk

    Simon Packer: “To me, the arguments in ‘Darwin’s Black Box’ are more persuasive, in terms of negative implications for the plausibility of ToE as a totality.”

    Your initial claim was about probability and statistics, therefore the malarial example rebutted in the Panda’s Thumb link was a good topical choice. Also, it gives us an opportunity to see how Behe responds. Behe made very specific claims in his book The Edge of Evolution. Those claims have now been completely disproven in a very careful, very numerical publication. Rather than admit defeat, Behe actually claims that his original position is vindicated! And his pals at the Discovery Institute echo his claims.

    As for Darwin’s Black Box, that is old hat; the book has been around almost two decades and numerous rebuttals of the general principles and specific examples can be found. In addition, you can take any molecular example found in DBB (bacterial flagellum, immune system, blood clotting system), explore the biological peer-reviewed literature and find that actual progress has been made in understanding how these systems work, and how they could have come about through evolution. Also, the complete transcripts of Behe’s testimony and cross-examination at the Dover trial are available. I have read them, and they do not work in Behe’s favor. So if you don’t understand the low regard for DBB in the biological community, and how justified that low regard is, then you will get no sympathy from me.

    Simon Packer: “… Dawkins, Mayer, Gould etc.”

    I assume that middle one is supposed to be Ernst Mayr.

    Simon Packer: “If it were proven with the sort of rigour needed in design engineering, my background, I would have no problem. It simply and irrefutably is not.”

    Clue time: Then maybe your background in technology is not a good analogy for the science of evolution. Maybe it’s not all about you. I also tire of the numerous lawyers who think that lawyer logic is superior to scientific logic for doing science (Phillip E. Johnson is only the latest). Evolutionary biology is surprisingly mathematical, especially if one gets into the population genetics end of things. And some experiments establishing the power of natural selection (for instance) have been carried out with great precision and overwhelmingly convincing statistical treatments. But selection is always local, so yes it is frequently difficult to generalise. Evolution is about survival, and there are many different ways to survive.

    Simon Packer: “I recall discussions about an alleged process called ‘scaffolding’ whereby the evidence for the intermediate steps in structure evolution was lost to the record. It was presented with great force and a certain amount of contempt, but on examination seemed hollow to me compared to the problems Behe set out.”

    Some context: Behe said there is no way to get from A to B. Many people point out that yes, indeed, there is a way to get from A to B that is very well-known to everyone in the field, and has ready analogy in everyday life. I’m not sure what level of rigor you expect in such remedial refutation.

  15. Simon Packer,

    That is because evolution is very distinctly not analogous to engineering. The core concepts of the TOE were an innovation directly in contrast with the way people thought about how complexity can arise, for example in engineering.

    You say that there are many schools of thought among biology experts. Unless you get specific that is a pretty meaningless statement, and is misleading. Of the few names you mentioned not a one of them would agree with any of the claims or arguments of Behe and his ilk. In contrast they would all agree on the fundamentals of the modern TOE, and on most of the details as well. Behe doesn’t disagree with some of the details, he disagrees with the fundamentals. And his disagreement derives from a prior commitment that has nothing to do with fitting explanations to evidence, and everything to do with fitting evidence to an explanation.

    Behe’s arguments come down to an argument from incredulity, and your position seems to as well. Doubt is good, necessary I’d say, within good reason. However, in the case of the TOE you don’t seem to grasp the enormous amount and variety of evidence, confirmations and validations, from multiple independent lines of investigation, that you will have to be able to account for, specifically, in order to invalidate the TOE. You call that an argument from authority. I suppose in a way it could be considered that. The huge mass of evidence in support of the TOE does indeed carry authority. An authority that it is very reasonable to assign trust in, provisionally as it always should be.

  16. To summarise Behe’s position on evolution:

    Behe accepts the scientific consensus on the age of the Earth and universe.
    Behe accepts common ancestry.
    Behe accepts the consensus view of speciation.
    Behe accepts natural selection.

    About the only part of the scientific consensus he does not accept is that a few complex systems (flagellum, immune system, blood clotting cascade) could have evolved without supernatural intervention. I.e. evolution happened as scientists describe, but every once in a while God showed up to tinker and invent a flagellum or something.

  17. Reginald

    We are still swimming in the subjective on the whole, but then again, that statement is subjective. I have to start with who I am and what I know, unless you are appealing to authority, which even Dawkins tells us not to do.

    Design engineering cannot afford to be purely subjective for long, that is why I appeal to it for contrast. Precise predictive outcomes are required.

    Darwin’s Black Box and the counter arguments are what caused me to first state, years ago and elsewhere, that ‘a proposed mechanism is not a proof’. I doubt whether that phrase is original, but the responses I read at the time were full of proposed mechanisms to evolve ‘turnkey’ structural biological enhancements but they all seemed pretty unlikely to be viable in practice because of the statistics. I confess that I do not remember any details.

    My probing about seems to show that Behe is actually thinking the same way regarding this malarial protein in ‘Edge of Evolution’. It is not mechanism, it is likelihood, that is the problem. Miller summarizes Behe’s argument (p. 3 of his Behe critique): ‘The first (problem) is that a beneficial, selectable trait like chloroquine resistance can arise only after multiple, simultaneous mutations emerge at random’. Behe’s exact terminology is actually ‘two independent, necessary mutations’. He does not say ‘two concurrent necessary mutations’, as his critics seem to assume. The real issue seems to be what Behe is really calculating for by using (10^20)^2. So it is about likelihood. Assuming the PNAS flow diagram Ken Miller refers to on your linked page is accurate, we would need to model the probabilities for each mutation on the route, and evaluate the probability of the twin mutation occurring through any viable path. This would be an extremely complex calculation with a lot of approximations for things like the survival advantage of the incomplete mutations. Behe’s rough estimate may not be that far off, at least close enough to evaluate the dual mutation as pretty unlikely to occur, let alone persist by selection. I do not see that Miller’s analogy of serial development of drug immunity (bottom of page 6) applies to this scenario at all, because we have no strong intermediate advantage after one mutation. Broadly speaking, Behe’s approximation seems not far out. These sequential or parallel calculations are the sort of thing Dawkins discussed once (climbing Mount Improbable?). It is not that one or the other is right though; it is choosing the right one for the situation. Here, there appear to be both parallel and sequential probabilities at play.
    Many of the responses to Miller are highly emotive, especially the one from Dawkins.

  18. Reginald Selkirk

    Simon Packer: “Design engineering cannot afford to be purely subjective for long, that is why I appeal to it for contrast. Precise predictive outcomes are required.”

    As already stated, evolution does not need predictive outcomes. Any path to survival and reproduction will do. End of analogy.

    Simon Packer: “It is not mechanism, it is likelihood, that is the problem.”

    If you don’t know the mechanism, you cannot calculate the likelihood. An illustrative example:
    50 fair coins. A coin toss event takes one second. What is the probability of ending up with all 50 coins being heads, and what is the expected mean time for that to happen? Consider two mechanisms:

    1) You toss all 50 coins. If they are all heads, you win. If not, you toss again.

    2) You toss a coin. If it is tails, you toss it again. Once it is heads, you leave it there and proceed to the next coin.

    These two scenarios are going to have vastly different probabilities of success, and expected mean time to completion.
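    The gap between the two mechanisms can be made concrete with a short sketch (a toy calculation and simulation; N = 50 coins, one toss per second, as in the example above):

```python
import random

N = 50  # coins; one toss event per second

# Mechanism 1: retoss all 50 coins together until every one shows heads.
# Each attempt succeeds with probability 2^-N, so the expected number of
# one-second attempts is 2^N.
mech1_expected_seconds = 2 ** N      # ~1.1e15 seconds, tens of millions of years

# Mechanism 2: toss one coin at a time, fixing each coin once it lands heads.
# Each coin needs a geometric number of tosses with mean 2, so the expected
# total is just 2 * N seconds.
mech2_expected_seconds = 2 * N       # 100 seconds

print(mech1_expected_seconds / mech2_expected_seconds)  # ~1.1e13

# Quick simulation of mechanism 2 to check the analytic answer.
def simulate_mech2(rng):
    tosses = 0
    for _ in range(N):
        while True:
            tosses += 1
            if rng.random() < 0.5:  # heads: fix this coin, move to the next
                break
    return tosses

rng = random.Random(0)
avg = sum(simulate_mech2(rng) for _ in range(2000)) / 2000
print(avg)  # close to 100
```

    The same end state, all heads, but a factor of roughly ten trillion in expected waiting time, purely because the second mechanism locks in partial progress.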

    Simon Packer: “Behe’s exact terminology is actually ‘two independent, necessary mutations’. He does not say ‘two concurrent necessary mutations’, as his critics seem to assume. The real issue seems to be what Behe is really calculating for by using (10^20)^2. So it is about likelihood.”

    It is very clear to me from Behe’s numbers that he thinks the mutations have to be simultaneous. There is no other explanation for them, except orifice extraction.

    It is fairly common knowledge that every human being, except identical twins, has unique DNA.
    But consider what that means.
    There is not one “human genome”, there are seven billion+ human genomes. The same applies to other species. There must be a non-negligible rate for the introduction of new mutations into the genome. And there must be a non-negligible chance that mutations persist for some length of time, unless they are strongly deleterious. Genomic variability in populations is a routine thing, not an exceptional occurrence.
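    A back-of-envelope version of that point, using rough ballpark figures commonly cited for humans (the specific numbers are my assumptions, not from the comment):

```python
# Rough check that the mutation supply is non-negligible (assumed figures):
mu = 1.2e-8          # mutations per site per generation (rough human estimate)
genome_sites = 3.2e9 # haploid genome length in base pairs
copies = 2           # diploid: two copies of the genome per individual

de_novo_per_child = mu * genome_sites * copies
print(round(de_novo_per_child))  # ~77 new mutations per individual

# Across a population, the supply of new variants per generation is enormous.
population = 7e9
print(de_novo_per_child * population)  # ~5.4e11 new mutations per generation
```

    Even with generous error bars on these inputs, every person carries tens of brand-new mutations, which is why genomic variability is routine rather than exceptional.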

    I tire of this; this is not new material and there are others who can and have presented it better than I.

  19. Another thing to consider about the possibility of artificial *rationality* is that it could possess self-consciousness before its designers give it mobility and dexterity. It might feel offended while analyzing the brain-in-a-vat thought experiment. On a serious note, this would be completely different from the evolutionary emergence of human rationality, which was accompanied by bipedal mobility and manual dexterity.

  20. Aleksandar again: “Penrose has given an example in his book of a simple undecidable statement which we see as true without a proof.”

    Not being familiar with that example, I did a search. I didn’t find a full exposition but I found a brief synopsis in, of all places, a creationism apologist book. The treatment there was very uncritical (surprise!), but if I understand correctly, the human recognition of an “undecidable truth” relies on the specification that a program A will only halt if its input is a program C that never halts. This process is subject to algorithmic proof. But program A is expected to work without the specification; i.e., it must reverse-engineer the program from its structure to get the specification, which in the general case is an intractable problem. Thus, the program is at a disadvantage compared to the human, who is working from the spec. Consequently, Penrose has demonstrated nothing. I would need to see a more complete exposition to confirm my suspicion.

    “Yes, there are mathematical statements which we cannot immediately declare as true or false or undecidable, but eventually (200 years for the Fermat’s last theorem) we arrive at a definite conclusion.”

    Assuming the proof of the conjecture is sound, it was never undecidable. So there is nothing here that overturns anything I have said.

  21. Reginald

    I do not see your first point about outcomes, but see the next paragraph. Your second point about pathways I addressed; I will elaborate. Depending on the scenario, there may be parallel feasible pathways and/or sequential selection. Both will improve the odds of the sought mutation. The second (sequential) is accurately represented by your coin analogy. I find it extremely hard to believe Behe is unaware of this sort of thing. I am not even a biologist and it is pretty clear to me. I am, however, very familiar with digital logic, which is usually sequential, and with software, which is always sequential. Sequential selection with little or no survival advantage at the intermediate stage(s) of mutation is sometimes called ‘permissive mutation’ in the literature. Sometimes the intermediate stage is deleterious to survival. (That remains the central precept of Darwin’s Black Box, the logic there being applied by analyzing phenotype.) Sometimes it is positive. Regardless of whether Behe initially thought in terms of the probability of two simultaneous mutations as the only way to get resistance in the parasite (perhaps someone should ask him; maybe I will), the basic issue remains: what is the statistical likelihood that, given an accurate chart of mutational pathways to dual mutations (Summers looks plausible), we actually get that mutation? And not only that, but that it becomes established in the population? I am not an expert, but I think that would be hard to model. Viral mutations, for example, frequently occur upon exposure yet fail to become established in the bloodstream.
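    The coin analogy can be made quantitative with a toy simulation. The probabilities below are illustrative only, not biological estimates; the point is the gap between requiring both hits in one trial and letting the first hit persist:

```python
import random

p = 0.01        # chance of each individual "mutation" per trial
trials = 200    # trials allowed per run
runs = 5000     # repetitions used to estimate each probability

def simultaneous():
    # Success only if both hits land in the same single trial.
    return any(random.random() < p and random.random() < p
               for _ in range(trials))

def sequential():
    # The first hit persists once it occurs; only the second is then needed.
    got_first = False
    for _ in range(trials):
        if not got_first:
            got_first = random.random() < p
        elif random.random() < p:
            return True
    return False

sim = sum(simultaneous() for _ in range(runs)) / runs
seq = sum(sequential() for _ in range(runs)) / runs
print(f"simultaneous: {sim:.3f}   sequential: {seq:.3f}")
```

    With these numbers the sequential route succeeds roughly an order of magnitude more often, which is the whole force of the coin analogy.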

    Evolution does not need predictive outcomes, but to prove it happened you need to model the situation realistically, and both the mechanisms and the statistics need to stack up.

    If evolution is indeed subject to purely deterministic analysis, there being neither an effective ‘ghost in the machine’ nor some sort of truly random non-deterministic influence, then its outcome is actually pre-determined. We just do not have the capability/capacity/knowledge to do the sums. Either way, deterministic or not, what I am attempting to show by my analogy with engineering is that it should be possible to use analytical predictors for parts of the processes involved in the ToE, and also that, realistically, this can only be done in small part. Because of the complexity of the ToE big picture, we have to define the scenario carefully, and this has been done in the case of the Behe/Summers/Miller episode. (I started a discussion myself with an expert in pediatric HIV on the possibility of a numerical model for populations of various mutations in the bloodstream in the presence of ARVs.) So why does someone not actually attempt to attach statistics to the Summers chart? Perhaps they have. Miller is pulling Behe apart but not providing a concise statistical analysis of his own. If we could generate populations of malaria with each individual mutation dominant, we could maybe do a representative experiment. There are a lot of practical issues to overcome, but this is the sort of level of proof we would be looking for in most other disciplines, IMO.

    I am aware that this has ceased to be primarily about consciousness and machines as Sean intended. This does not bother me but I have been thrown off a science forum before now for a lesser offense!

  22. Aleksandar Mikovic

    Reply to Richard:

    The fact that you can add a Godel statement G to the set of axioms of your theory T is an elementary and standard feature of Godel’s theorems (see, for example, the Wikipedia article). A Godel statement is constructed by translating the following self-referential sentence into numbers: “This statement cannot be proven within T”. If G is true, then one has a true statement which cannot be proven, hence it can be taken as an axiom. If G is not true, then one can prove an unprovable statement, which is a contradiction. Hence G must be true, and can be added to T, so that one obtains a new theory T’, which will have its own Godel statement G’, and so on, ad infinitum.
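    The chain of reasoning compresses neatly (a sketch, assuming T is consistent and sound):

\[
G \;\equiv\; \text{“$G$ cannot be proven within $T$”}
\]
\[
T \vdash G \;\Rightarrow\; T \text{ proves a statement asserting its own unprovability (contradiction)},
\]
\[
\text{so } T \nvdash G, \text{ hence } G \text{ is true;} \qquad
T' = T \cup \{G\} \text{ has its own } G', \quad T'' = T' \cup \{G'\}, \;\dots
\]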

    As far as the Goldbach conjecture is concerned, there is strong evidence that it is true (again, see the Wikipedia article), but we may have to wait a long time for a formal proof. If the Goldbach statement is undecidable, then it must be true, and can be added to the set of axioms. You cannot add the negation of the Goldbach statement to the axioms, because that would be a contradiction (you cannot have both G and not(G) be true).

  23. Aleksandar, you wrote as though an arbitrary statement that is undecidable could be added to your axioms as if it were true. If you meant your assertion to apply only to the particular statement G, then I apologize for misunderstanding. However, G is not a particularly productive axiom (in terms of producing interesting new theorems), hence you don’t find G-like statements used as axioms in axiomatic disciplines such as set theory. G is an example of an undecidable yet true statement used to prove the existence of such statements, but doesn’t hint at the actual richness of such statements in general. And in that rich set, it seems likely that the subset which can be recognized as undecidable (i.e., provably undecidable) is rather thin.

    I gave “negation of Goldbach” as a provisional example of my claim that it is unsafe to add an undecidable statement to a set of axioms. It may turn out to be decidable (whether true or false). But there may be an infinite number of statements that are undecidable and not provably so. And an infinite number that are decidable but for which we can never actually find a proof because of the staggering length a proof would require. Adding axioms simply because they appear undecidable can produce inconsistency, and an inconsistent system of axioms can prove anything.

    Therefore, Godel’s theorem does not give us permission to add axioms just because we suspect they are undecidable. And it says nothing about whether humans can prove a statement that an algorithm can not.

  24. “Any mathematical statement which is truly undecidable (as opposed to merely as yet unknown whether it is true or false) must be true” seems to me to be easily provable by the simple algorithm of considering all possible cases, one by one:

    1) A counterexample to the statement exists: then it is not undecidable.

    2) A counterexample doesn’t exist: then it is true.

    This is the way I think my neurons handle all logic problems: by churning through the alternatives and comparing them to past experience as to what works and doesn’t work. The process fails when I don’t think of all the alternatives or don’t have good experience to guide me on all of them. When a failure is pointed out to me, my range of alternatives and experience broadens. As the saying goes, “good judgment comes from experience; experience comes from bad judgment.”

    Note: 2) seems a tiny bit fishy to me but I can’t think of any counterexample to it, so I accept it for now.
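    The reason 2) works for a statement like Goldbach’s conjecture is that any counterexample would be a single even number, checkable by a finite mechanical search (statements of this form are called Π₁). A minimal sketch of that search:

```python
# If a Goldbach counterexample exists, this loop finds and verifies it
# in finite time, which would *decide* (refute) the conjecture. So a
# truly undecidable Pi-1 statement can have no counterexample, i.e. it
# must be true. The "fishiness" of case 2) appears only for statements
# whose truth is not settled by the absence of a finite counterexample.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample(limit):
    """First even n in (2, limit] that is not a sum of two primes, else None."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_counterexample(1000))   # prints None: no counterexample below 1000
```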

    Of course any true mathematical statement that aids survival and reproduction could simply be programmed into us randomly by evolution, such as the principles of logic. I expect the evolutionary algorithm (try stuff until something works) is sufficient though, given our brain’s processing power and memory space.
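    A toy version of “try stuff until something works”, with invented sizes and an arbitrary target, shows how little machinery that algorithm needs:

```python
import random

# Evolve a 20-bit genome toward an all-ones target by random single-bit
# mutation plus keep-if-no-worse selection.
target = [1] * 20
genome = [0] * 20

def fitness(g):
    return sum(a == b for a, b in zip(g, target))

steps = 0
while fitness(genome) < len(target):
    i = random.randrange(len(genome))
    candidate = genome.copy()
    candidate[i] ^= 1                      # flip one random bit
    if fitness(candidate) >= fitness(genome):
        genome = candidate                 # keep variants that are no worse
    steps += 1

print(f"target reached after {steps} random mutations")
```

    Even this crude loop reaches the target quickly; the persistence of what already works (the retained genome) does all the heavy lifting.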

    I’ve given my solution to “the problem of consciousness” elsewhere but I won’t bore people here with it.

    As a design engineer for 30-35 years (depending on how I count certain years), the idea that there is more rigor in engineering than in the ToE doesn’t match my experience. As the Manager of GE Large Steam Turbine Development used to say, “It’s one fire-fight after another.” (Note: unlike, say, toasters, every large steam turbine has to be designed to site-specific conditions, on a tight design and manufacturing schedule; rigor is largely the result of doing the same thing over and over again, or of spending a long time on prototypes before releasing a new design.)

    Other commenters have handled these issues well, but I couldn’t resist adding some unnecessary support – sorry.
