
Face it, your brain isn’t a computer


Though Gary Marcus tells us it is, in “Face It, Your Brain Is a Computer” at the New York Times:

… Finally, there is a popular argument that human brains are capable of generating emotions, whereas computers are not. But while computers as we know them clearly lack emotions, that fact itself doesn’t mean that emotions aren’t the product of computation. On the contrary, neural systems like the amygdala that modulate emotions appear to work in roughly the same way as the rest of the brain does, which is to say that they transmit signals and integrate information, and transform inputs into outputs. As any computer scientist will tell you, that’s pretty much what computers do.

Of course, whether the brain is a computer is partly a matter of definition. The brain is obviously not a Macintosh or a PC. And we humans may not have operating systems, either. But there are many different ways of building a computer.

The real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research. In an article last fall in the journal Science, two of my colleagues (Adam Marblestone of M.I.T. and Thomas Dean of Google) and I endeavored to do just that, suggesting that a particular kind of computer, known as the field programmable gate array, might offer a preliminary starting point for thinking about how the brain works.

That computers do not generate emotions is not a “popular argument”; it is a fact.

If neurons are akin to computer hardware, and behaviors are akin to the actions that a computer performs, computation is likely to be the glue that binds the two.

There is much that we don’t know about brains. But we do know that they aren’t magical. They are just exceptionally complex arrangements of matter. Airplanes may not fly like birds, but they are subject to the same forces of lift and drag. Likewise, there is no reason to think that brains are exempt from the laws of computation. If the heart is a biological pump, and the nose is a biological filter, the brain is a biological computer, a machine for processing information in lawful, systematic ways. More.

And Frankenstein is alive and well at the North Pole too.

Marcus needs to talk to David Gelernter:

Following on a Slate computer columnist’s assessment that artificial intelligence has sputtered, Yale computer science prof David Gelernter offers some thoughts on the closing of the scientific mind. Readers will appreciate his comments on the “punks, bullies, and hangers-on” who have been attacking philosopher Thomas Nagel for doubting Darwin:

The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy. More.

But he won’t.

See also: Why the human mind is hard to grasp (so to speak)

Follow UD News at Twitter!


55 Replies to “Face it, your brain isn’t a computer”

  1. 1
    Mung says:

    Oh sure the brain is a computer. We just need to find out just what kind of “computer” it is and then redefine what it means for something to be a computer. See how easy that is?

  2. 2
    Silver Asiatic says:

    There is much that we don’t know about brains. But we do know that they aren’t magical. They are just exceptionally complex arrangements of matter.

    There’s a little we don’t know about Hamlet. It’s a pretty good play. But there’s nothing magical about it. It’s just a complex arrangement of words. As any computer scientist will tell you, that’s pretty much what computers do.

  3. 3
    asauber says:

    Computers are absolutist/binary in regards to processing information.

    They were designed that way and that’s the reason why they work.

    Hmmmm…

    Andrew

  4. 4
    mjoels says:

    It isn’t a computer, but not for the reasons people usually list. The one major thing that keeps it from being a computer is its ability to identify new problems and its ability to solve those problems in new ways. The first rule of computation is that a computer does exactly what you tell it to do, nothing more. You cannot program a computer to do something if you don’t already know the outcome. The same rule would apply to our brain if it were a computer: the outcome of all of our thoughts would be 100% deterministic. This is not the case, as innovation and choice get in the way of this universal limitation of computation. All computation is deterministic and predictable; the human mind, according to knowledge and experience, is not. Therefore, at least at the current moment, such assertions are nothing more than wild and frankly stupid conjecture based on a materialistic worldview.

  5. 5
    mjoels says:

    I am a computer scientist btw. He is full of it and grabbing for secular kudos with some science fiction headliners.

  6. 6
    Mapou says:

    I disagree that a computer program cannot be made to express emotions. Sure, they would not be conscious emotions but pain and pleasure are not entirely spiritual/conscious concepts. There is a mechanical/physical/causal aspect to them as well, one which is tied to motivation and goal seeking behavior. The rule of motivation is simple: reinforce behaviors that lead to pleasure and away from pain and suppress behaviors that lead to pain and away from pleasure. A computer program can certainly be written to do that.
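
    Mapou’s rule – reinforce behaviors that lead to pleasure, suppress behaviors that lead to pain – is essentially the update loop of a trivial reward-learning program. A minimal Python sketch (the two actions and their “pleasure”/“pain” values are hypothetical, purely for illustration):

```python
import random

def reward(action):
    # Hypothetical environment: action "a" yields pleasure (+1), "b" yields pain (-1).
    return 1.0 if action == "a" else -1.0

def train(steps=1000, lr=0.1, seed=0):
    """Reinforce pleasurable behavior, suppress painful behavior."""
    rng = random.Random(seed)
    weights = {"a": 0.0, "b": 0.0}
    for _ in range(steps):
        # Mostly repeat the currently-preferred behavior; occasionally explore.
        if rng.random() < 0.1:
            action = rng.choice(["a", "b"])
        else:
            action = max(weights, key=weights.get)
        # Move the action's weight toward the pleasure/pain signal it produced.
        weights[action] += lr * (reward(action) - weights[action])
    return weights
```

    After training, the program strongly prefers the “pleasurable” action: goal-seeking behavior with no consciousness anywhere in it, which is exactly the point being made.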

  7. 7
    mjoels says:

    I have to agree with mapou here. I am not saying that it couldn’t be programmed in some way to have a feedback loop that simulated emotional responses, I am saying that there is no driver in a computer no matter how much code you put in it. Emotions are most likely initiated by feedback loops, much like senses. My comment is primarily concerned with the I part of everyone. Without that, there is nothing to feel the emotion or deal with it. It is simply a physiological response with a deterministic outcome otherwise.

  8. 8
    mjoels says:

    I have to agree with the above here. I am not saying that it couldn’t be programmed in some way to have a feedback loop that simulated emotional responses, I am saying that there is no driver in a computer no matter how much code you put in it. Emotions are most likely initiated by feedback loops, much like senses. My comment is primarily concerned with the I part of everyone. Without that, there is nothing to feel the emotion or deal with it. It is simply a physiological response with a deterministic outcome otherwise.

  9. 9
    Mapou says:

    mjoels, I agree. There is a great danger in all this, IMO. We, humans, mistakenly ascribe consciousness to emotional behavior. We do it with animals but I believe we are wrong. In spite of their outward display of emotions, animals are unconscious meat robots, IMO.

    The danger I see is that we will mistakenly ascribe consciousness to our future intelligent machines. Once that is done, it won’t take long before machines are given legal rights like humans or even the right to govern or police humans. That would be a disaster.

    I observe with growing horror how the powers that be are pushing us to accept robots as conscious beings. There is something very sinister about all this.

  10. 10
  11. 11
    Tom Robbins says:

    Consciousness is primary and matter is secondary – the greatest minds in physics who developed quantum theory believed it and proved it. Numerous experiments have shown the mind can exert influence on what is observed. God is the big consciousness, and we are in his image. If you take this step, that matter is secondary, many “issues” of science disappear. This idea is moving out of metaphysics and gaining some ground. We may be able to create something that mimics thinking, but OF COURSE it would never be conscious, and consciousness is immaterial. In fact, we invent things like outboard motors only to find that the flagellum is more sophisticated, powerful, and efficient than anything we can “emulate.” We create because we are made in his image, and that is what our creator does; we often try to emulate our fathers. I know, it’s a bit of a weak argument if you are a materialist scientist, but I don’t have those constraints – hey, if they can postulate a multiverse to avoid fine-tuning, I think this idea I just put forward looks pretty damn good in comparison; at least there is evidence for it.

  12. 12
    Querius says:

    What we perceive as reality now depends on our earlier decision what to measure, which is a very, very deep message about the nature of reality and our part in the whole universe. We are not just passive observers.
    – Anton Zeilinger, Quantum Physicist

    Simple experiment. Let a computer “observe” the double-slit experiment (irretrievably recording the outcome). If the wave function collapses, we’ll know that computers have consciousness.

    -Q

  13. 13
    harry says:

    Having dealt with computers at the level of the CPU’s instruction set, writing software in assembly language, and having written software to simulate a CPU’s instruction set to facilitate debugging telephony switching systems, let me assure everyone that computers have all the intellect and all the capacity for emotion that is found in a box of rocks, which is to say, none whatsoever. Computers will never become any smarter or emotional than that, either, regardless of how many feedback loops are set up.

    Computers are just machines. They are very intricate and well designed machines, but that is still all they are or will ever be. They will never feel pain or experience emotion because there is “nobody home” in them to do so.

    They operate in a completely deterministic manner. Unlike humans, they really and truly don’t have a free will, which is why man-made androids, regardless of how well they mimic human behavior, will never be more than just machines, no smarter than a box of rocks, no more human than is your electric can opener.

  14. 14
    Robert Byers says:

    Yes, the brain is a computer – or rather, the brain is just a memory machine. The difference for Christians is that we should mesh the soul to this memory machine.

    I don’t agree that there are emotions. These are just thoughts/conclusions that linger. What emotion is not, in its DNA, a thought?
    A computer could have emotions – or rather, conclusions.
    The computer simply would never notice. No one there.

    By the way, false “emotions” are created or twisted by drugs acting on the “brain.” So emotions don’t just come from free will.
    This is good for creationist ideas on humanity.

  15. 15
    bornagain77 says:

    Querius at 12, actually an experiment testing that assumption has been done, and consciousness is found to be integral.

    The following experiment clearly shows that a ‘material’ detector recording information is secondary to the experiment and that a conscious observer being able to consciously know the ‘which path’ information of a photon with local certainty, is of primary importance in the experiment.

    Quantum physics mimics spooky action into the past – April 23, 2012
    Excerpt: The authors experimentally realized a “Gedankenexperiment” called “delayed-choice entanglement swapping”, formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice’s and Bob’s photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice’s and Bob’s photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor’s choice and measurement with respect to the measurements which Alice and Bob perform on their photons. “We found that whether Alice’s and Bob’s photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured”, explains Xiao-song Ma, lead author of the study.
    According to the famous words of Albert Einstein, the effects of quantum entanglement appear as “spooky action at a distance”. The recent experiment has gone one remarkable step further. “Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events”, says Anton Zeilinger.
    http://phys.org/news/2012-04-q.....ction.html

    “If we attempt to attribute an objective meaning to the quantum state of a single system, curious paradoxes appear: quantum effects mimic not only instantaneous action-at-a-distance but also, as seen here, influence of future actions on past events, even after these events have been irrevocably recorded.”
    Asher Peres, Delayed choice for entanglement swapping. J. Mod. Opt. 47, 139-143 (2000).

    You can see a more complete explanation of the startling results of the experiment at the 9:11 minute mark of the following video

    Delayed Choice Quantum Eraser Experiment Explained – 2014 video
    http://www.youtube.com/watch?v=H6HLjpj4Nt4

    A few related notes:

    Does Quantum Physics Make it Easier to Believe in God? Stephen M. Barr – July 10, 2012
    Excerpt: Couldn’t an inanimate physical device (say, a Geiger counter) carry out a “measurement” (minus the ‘observer’ in quantum mechanics)? That would run into the very problem pointed out by von Neumann: If the “observer” were just a purely physical entity, such as a Geiger counter, one could in principle write down a bigger wavefunction that described not only the thing being measured but also the observer. And, when calculated with the Schrödinger equation, that bigger wave function would not jump! Again: as long as only purely physical entities are involved, they are governed by an equation that says that the probabilities don’t jump.
    That’s why, when Peierls was asked whether a machine could be an “observer,” he said no, explaining that “the quantum mechanical description is in terms of knowledge, and knowledge requires somebody who knows.” Not a purely physical thing, but a mind.
    https://www.bigquestionsonline.com/content/does-quantum-physics-make-it-easier-believe-god

    The Measurement Problem in quantum mechanics – (Inspiring Philosophy) – 2014 video
    https://www.youtube.com/watch?v=qB7d5V71vUE

  16. 16
    Mung says:

    harry, I kicked my computer and it beeped at me, and I felt bad. Obviously the computer had feelings.

  17. 17
    harry says:

    Mung @16,

    LOL! Somebody must have finally figured out how to set up those feedback loops in just the right way. I used to love my computer, because “Love means never having to say you’re sorry,” but now that I know I have to apologize to it, I suddenly hate it. ;o)

  18. 18
    Mung says:

    Hell, not only is my brain not a computer it doesn’t even qualify as memory!

  19. 19
    Mapou says:

    Boston Dynamics published some videos of their quadruped robot, BigDog, about a year ago. One of them showed BigDog being kicked by a human being. The robot appeared to be highly motivated to stay upright in a way that made it appear conscious. The video went viral when people started feeling sorry for BigDog. Some even said in all seriousness that there should be laws against kicking a robot.

    Again, I think there is a great danger in anthropomorphizing machines. However, mainstream entertainment/news media and the materialist lobby seem hellbent on using their propaganda network to convince the public that machines can become conscious by some voodoo magic called emergence.

  20. 20
    Mapou says:

    Querius:

    Simple experiment. Let a computer “observe” the double-slit experiment (irretrievably recording the outcome). If the wave function collapses, we’ll know that computers have consciousness.

    Are you serious? The experiment you are proposing assumes its own conclusion – that it takes a conscious observer to effect a quantum event. It’s a laughable claim, in the “not even wrong” category. I can’t believe grown, educated human beings can imagine and believe in such hogwash. The religion is strong in some people.

    Edit: Here’s how your experiment can be interpreted the opposite way it was intended. One can say that, since machines are not conscious, the observation of a quantum event by the computer would refute the claim that quantum events require a conscious observer. By the way, machines routinely observe quantum events in modern accelerators.

  21. 21
    anthropic says:

    My computer beat me at chess, but it proved to be no match in kickboxing.

  22. 22
    Box says:

    My mind is neither my brain nor a computer. My brain may be a computer.

    there is a popular argument that human brains are capable of generating emotions, whereas computers are not.

    Anything can “generate” emotions. Cars can generate emotions. My mind is experiencing emotions. Neither my brain nor a computer is experiencing emotions.

  23. 23
    Popperian says:

    Wow. What a stunningly bad argument.

    Human beings exhibit emotions, but computers we build do not exhibit emotions. Therefore, our brains are not computers?

    The problem with this is, there are plenty of things human beings do that computers do not do, because we’ve never programmed them to in the first place, not because they are not capable of doing them.

    Nor would we want an AI plane pilot to get nervous about its first flight, or the fact that the weather was deteriorating rapidly, etc. Furthermore, there are a number of things human beings can do, which we only got around to developing algorithms for in the last 10 years.

    So, it’s a fallacy to assume just because computers don’t do something that human beings currently do, they cannot.

    Why don’t you start out by explaining why human beings have emotions, then point out why computation doesn’t fit that explanation?

    Let me guess. We have emotions because “that’s just what some designer must have wanted”?

  24. 24
    harry says:

    Popperian @34

    So, it’s a fallacy to assume just because computers don’t do something that human beings currently do, they cannot.

    Let me know when you determine the rules by which matter and energy are to be configured such that consciousness, free will and rationality emerge. Then build that and point out to me which atoms make up consciousness, which ones make up rationality, and which free will. You can even skip free will if you want. I’ll settle for consciousness and rationality.

    Of course, if Max Planck was right, and matter is an epiphenomenon of Mind, not the reverse, then you will never, ever be able to do that. Good luck! Keep me posted on your progress.

  25. 25
    EugeneS says:

    A quote in OP: “Of course, whether the brain is a computer is partly a matter of definition.”

    Not in the least. Leaving aside the mind vs. brain problem, each particular non-controversial chain of reasoning can be modeled by an algorithm and is therefore computable. As a consequence, by the very nature of computability, it is at the same time Goedel-incomplete.

    On the other hand, taken in their entirety, all possible chains of reasoning the human mind is capable of generating are not computable. Human minds do not suffer from Goedel incompleteness and therefore are not formalizable as a whole.

    The human mind sees no obstacle in self-referential statements like the liar’s paradox. The computer falls short.
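
    The liar’s-paradox point can be made concrete: a purely mechanical truth-value assigner that tries to satisfy v == (not v) by iteration never finds a fixed point; it just oscillates until an external bound stops it. A toy Python illustration (not a formal result):

```python
def liar_fixed_point(max_iters=100):
    """Try to assign a stable truth value to 'This sentence is false',
    i.e. find v with v == (not v). The search oscillates forever,
    so we cap the number of iterations."""
    v = True
    for _ in range(max_iters):
        nv = not v          # the sentence asserts its own falsehood
        if nv == v:         # a fixed point would be a consistent assignment
            return v
        v = nv
    return None             # no stable assignment exists
```

    The machine never halts with an answer; a human simply recognizes the self-reference and steps outside the loop.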

  26. 26
    asauber says:

    “So, it’s a fallacy to assume just because computers don’t do something that human beings currently do, they cannot.”

    It’s also a fallacy to assume that computers can do things they are currently not capable of doing.

    Computers are limited to what humans design them to be as working electronics. That they will transcend design and/or the limitations of electronics is kinda in the realm of fantasyland.

    Andrew

  27. 27
    tarmaras says:

    Some thoughts on the difference between computers and humans from Ashish Dalela’s Godel’s Mistake: The Role of Meanings in Mathematics.

    “Turing’s proof of the Halting Problem means that there are no formal procedures to distinguish programs that halt from those that don’t.

    This illustrates the contrast between computer programs and humans. Even an average intelligence human is unlikely to loop through the above instructions more than once. Humans would quickly detect a loop and stop even though there is no instruction to that effect. Humans are goal oriented and can see that looping is not taking them closer to the goal of solving a problem. A computer is not goal-oriented and has no way of knowing if it is getting closer to its goal. It knows how to execute instructions but has no clue about the computational ‘distance’ between a problem and its solution. When faced with an intractable problem, a computer would continue indefinitely on a line of approach that has been fed into it through programming. Human beings will likely alter their approach, try to solve the problem from multiple angles, and take the ideas and intuitions developed in one approach into another. They might bring unrelated ideas to bear upon the solution of a problem, which a computer will not. In case the problem isn’t solved, humans would stop attempting after a while, but the computer will not.

    In short, computers can never stop even when the problem is unsolvable and Turing formalized this in the Halting Problem. A problem might take a hundred years to solve, so it is worthwhile to know that the problem indeed has a solution before we spend a hundred years trying to solve it. It would be futile to spend a hundred years and then abort the attempt because the solution wasn’t found so far. Humans have the ability to abort intractable problems and Turing proved that this was impossible for a computer. The Halting Problem is an example of the kinds of unsolvable problems that Gödel’s theorem alludes to, but did not explicitly identify. The machine that attempts to answer such a question for a program that never halts will also run forever since coming to a stop means determining that the program being analyzed also comes to a halt.”

    http://www.ashishdalela.com/books/godels-mistake/
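
    The quoted passage can be illustrated with a step-budgeted interpreter. In practice, all a program can report is “halted within N steps” or “don’t know”; Turing’s theorem says no general procedure can upgrade “don’t know” to “never halts” for every input. A toy sketch in Python (the machine model here is invented purely for illustration):

```python
def run_with_budget(step, state, budget):
    """Run a toy machine: step(state) returns the next state, or None to halt.
    Returns ('halted', steps_used) or ('unknown', budget) -- never 'never halts'."""
    for n in range(budget):
        state = step(state)
        if state is None:
            return ("halted", n + 1)
    return ("unknown", budget)

def countdown(s):
    return s - 1 if s > 0 else None  # halts after s + 1 steps

def looper(s):
    return s  # never halts; the interpreter can only ever say "unknown"
```

    For `looper`, no budget, however large, lets the interpreter conclude non-termination; it is the human who sees at a glance that the loop goes nowhere.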

  28. 28
    Mapou says:

    It is a fallacy that modern computers are Turing machines and are thus subject to the halting problem. This is the age of massively parallel computing and networks. Turing’s ideas on this are irrelevant.

  29. 29
    kairosfocus says:

    Mapou, I think the underlying logical case is deeper than whether we have a Turing machine. Computers are mechanisms that do not work through meaning and common sense. They execute mechanical operations, blindly, on data, and so fall under GIGO including getting into flailing loops or semantic blunders that go nowhere and just keep on until the power is externally switched off. As I have noted, computation is not contemplation. KF

  30. 30
    kairosfocus says:

    Popperian, computers are blindly mechanical, non rational signal processing devices. Responsible rational freedom and associated conscious intelligence put us in an entirely different category. And, we should note that. KF

    PS: Perhaps this can help us open up thinking on the mind-brain-body issue, courtesy Derek Smith: http://iose-gen.blogspot.com/2.....l#smth_mod

  31. 31
    Popperian says:

    Again, why doesn’t someone start out by explaining how human beings generate emotions, then point out how the universality of computation does not fit that explanation? Effectively stating “It’s magic and computers are not” doesn’t cut it. Pushing the problem into an inexplicable mind that exists in an inexplicable realm doesn’t improve the problem.

    Of course, no one wants to explain how human beings generate emotions. How could anyone since it’s been divinely revealed that God did it and he is inexplicable, right?

    Computers, in the context of the article, are Universal Turing machines, not calculators. No one designed the first UTM with the goal of creating universality. Rather, we wanted a way to perform more accurate calculations, quicker and more conveniently. Universality emerges from a specific repertoire of computations. It’s one of those concrete examples where explicability resolves at a higher level that is quasi-independent.

    As for why we’ve stalled, see this article.

  32. 32
    mjoels says:

    The answer is simple and infinitely complex at the same time. Our brain (if we are really just meat computers) can expand its problem space infinitely. We can identify and solve novel problems without having to be pre-programmed to do so. Even the smallest life can do it, albeit to a limited degree (antibiotic resistance, anyone?). No machine created can ever do that.

    Machines are deterministic. You can’t change that. No matter what, they have a finite set of outcomes based on their initial coding, spread across a specific spectrum of possible solution space. A computer can never defy its initial program. Neither can a TM. No model we currently have in mathematics can while bounded. That is impossible. The problem is that consciousness grows. It is not some static quantity you can fit inside a pre-defined box.

    AI is junk science mostly. We can define loops and clever tricks to make it seem like a computer makes decisions or performs some action, but there is never any intention behind it; it is always deterministic and will always be bounded by its physical limitations. People who believe in AI believe in it because it fits in with their materialistic beliefs, not because there is any strong proof that it is possible. All the proof right now says it is not. Even the article only speculates that physics hints at it. While I believe that we might one day have bio-based computing algorithms, there is no possibility that a hunk of metal will come to be known to be alive or have any equivalent sort of existence with a human.

  33. 33
    Querius says:

    Mapou,

    When a non-conscious machine of some kind such as a recording device is involved in a quantum experiment, unless the recording is observed/observable by a human, the wave function does not collapse, and the machine becomes entangled with the quantum experiment.

    Whether Schrödinger’s cat, by staring intently at the radioactive particle that would set off the geiger counter, etc., can remain alive by employing the Quantum Zeno effect is unresolved and controversial.

    -Q

  34. 34
    Mapou says:

    kairosfocus:

    Computers are mechanisms that do not work through meaning and common sense.

    I fully disagree with this. If you had said “Computers are unconscious mechanisms”, I would have agreed. But meaning, reasoning and common sense are all cause-effect phenomena, which means that they are actually impossible without a mechanism. So there is no reason that these things cannot be emulated in a machine.

    IDists should stop resisting artificial intelligence. It’s making ID look bad. Intelligence does not require consciousness or vice versa. Soon, we will have machines that are just as intelligent as we are or even more so.

    On a slightly different tangent: In the not too distant future, I plan to release Rebel Speech, the first unsupervised machine learning program that can learn to recognize speech (or any other sound) as accurately as a human being, just by listening. In addition, it is able to focus on a given voice in a conversation while ignoring all others, thereby solving the cocktail party problem. Wait for it.

  35. 35
    kairosfocus says:

    Mapou,

    reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU.

    Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms — I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of “explanation” in response to query, etc.

    Such things of course are intelligently designed.

    From what you are saying, you have been developing a system capable of detecting characteristic patterns and locking to a target once acquired, resisting a fair degree of background noise or interference. Such is an achievement, one that is again functionally specific, complex, organised, information-rich — i.e. FSCO/I — and it is obviously intelligently designed. (BTW, note the military implications.)

    I bring forward the FSCO/I point to underscore that AI systems as implemented fundamentally reveal their source in design.

    That is not crucial; what is, is the difference between inherently blind mechanism and insight-based rationality. Reduction to tokens used as symbols and stored in data structures, then processed by mechanical step-by-step algorithms to yield programmed results through essentially mechanical cause-effect chains, is not rational insight and inference. Nor is it responsible, rational freedom.

    I again draw attention to Reppert (And others beyond him):

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions

    KF

  36. 36
    kairosfocus says:

    Popperian, re:

    why doesn’t someone start out by explaining how human beings generate emotions, then point out how the universality of computation does not fit that explanation? Effectively stating “It’s magic and computers are not” doesn’t cut it. Pushing the problem into an inexplicable mind that exists in an inexplicable realm doesn’t improve the problem.

    Thanks for sharing your reflections (as opposed to the all-too-common deadlock of talking-point games and the typical fallacies linked to them; informal fallacies are instructive on this matter); this always helps discussion move forward.

    Second, pardon an observation: your response inadvertently shows how you have become overly caught up in the Newtonian, clockwork vision of the world.

    Again, that reasoning by analogy or paradigmatic example — even though misleading — is instructive.

    My fundamental point is that reasoning, as opposed to blindly mechanical computation, inherently relies on insight into meaning and a sense of structured patterns that suggest connexions. For instance, many informal fallacies pivot on how emotions are deeply cognitive judgements that shift expectations and trigger protective responses. So, if someone diverts attention from the focal topic, sets up a strawman, soaks it in ad hominems and ignites it, the resulting fears and anger will shift context and invite dismissal of the original matter without serious evaluation. Thus the protective heuristics have been manipulated.

    Similarly, by shifting focus from the significance of insights and meaningful connexions to the scientific paradigm of Newtonian clockwork, then blending in the success of computer systems there is a shift away from a crucial difference that then leads to a reductionist, mechanistic tendency.

    The case of expert systems as was just discussed with Mapou is instructive:

    reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU.

    Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms — I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of “explanation” in response to query, etc.

    Notice the motif of evaluation by comparison while noting key differences? Thus, the implication that analogies — pivotal to inductive reasoning BTW — are prone to being over-extended. We know per widespread experience that there are patterns in the world, and that such often can be extended from one case to another; so if we think there is a significant similarity, we will extend. But this raises the question of the implications of significant difference, and of adjusting, adapting or overturning the extension.

    Such thought is imaginative, active, inferential, defeasible but verifiable to the point of in some cases strong empirical reliability, and more, much more. It is inherently non-algorithmic, pivoting on meaning, judgement and insight.

    As I am aware of your problem with inductive reasoning (broad sense), I share Avi Sion’s point:

    We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms.

    Therefore, we must admit some uniformity to exist in the world.

    The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs.

    Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . .

    The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion.

    It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: “we can make mistakes in inductive generalisation . . . “] that have not been found worthy of particularization to date . . . .

    If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume’s Problems with Induction, Ch 2 The principle of induction.]

    We have a deep intuitive sense that there is order and organisation in our cosmos, which comes out in recognisable, stable and at least partly intelligible patterns that extend from one case to another.

    Mechanism, of course is one such, and explanation on mechanism is highly successful in certain limited spheres. But by the turn of C19, there were already signs of randomness at work and by C20 we had to reckon with the dynamics of randomness in physics. In quantum mechanics, this is now deeply embedded, many phenomena being inextricably stochastic.

    But reducing an irreducibly complex world to the pattern of mechanism, with some room for chance, is not enough.

    The first fact of our existence is our self-aware, self-moved intelligent consciousness and interface with an external world using our bodies.

    This too is a reasonable pattern, one that we see in action with others who are as we are.

    From this we abstract themes such as intelligence, responsible freedom, agency, purpose and more, which we routinely use in understanding how we behave and the consequences when we act.

    What has happened in our time is that due to the prestige of science, mechanism based explanations have too often been allowed to displace the proper place for agent based explanations, the place for art and artifice. This has even been embedded in a dominant philosophy that too often unduly controls science: evolutionary materialism.

    There is even a panic that if agency is allowed in the door, “demons” will be let loose and order and rationality will go poof. This then often triggers fear, turf protection and linked, locked-in, closed-minded ideological irrationality.

    The simple fact that modern science arose from in the main Judaeo-Christian thought that perceived a world as designed in ways meant to point to its Author, through involving at some level simple and intelligible organising principles or laws, should give pause. The phrase thinking God’s [creative, organising and sustaining] thoughts after him should ring some bells. (This is too often suppressed in the way we are taught about the rise of modern science.)

    And of course, by way of opening the door to self-referential incoherence through demanding domination of mindedness by mechanism, evolutionary materialism falsifies itself. Haldane puts it in a nutshell:

    “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [[“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.]

    So, the very terms you use: “how human beings generate emotions,” is a giveaway.

    We do not so much generate emotions and other consciously aware states of being, we experience them. And, to recognise and respect that fact without reference to demands for mechanistic reduction is a legitimate start-point for reflection.

    All explanation is going to be finite and limited, so there will always be start-points. Starting from the realities of our interior-life experience is a good first point, and reflection on such shows that rationality itself (a requisite of doing science etc.) crucially depends on insightful, purposeful, responsible and rational freedom.

    That which undermines such will then be self-defeating, and should be put aside.

    Thus, the significance of Reppert’s development of Haldane’s point via Lewis:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions

    Trying to reduce this to blindly mechanistic physical cause-effect chains with perhaps some noise, is self-defeating.

    In short, start-points and contexts for reasoning count for a lot.

    KF

    PS: Headlined: http://www.uncommondescent.com.....t-explana/

  37. 37
    Axel says:

    35# KF, not to put too fine a point on it: It is the Holy Spirit that coordinates the strands of our intelligence, producing intuition (infused knowledge), wisdom (infused understanding), etc.

    Christians know that it is the Holy Spirit who is the source of our prayers, when prayed aright, speaking, according to the mind of God. However, there is sometimes a strange similitude between its action in a person’s prayer, and the movements of a sportsman, when he is ‘in the zone’.

    A further similitude springs to mind between a person having obtrusive thoughts and a person who is told, on no account to think of the word that you are going to say to him. He will immediately apprehend the full meaning of the word, but will then be unable to prevent himself from reflecting on it, however briefly; the only difference being that bad, obtrusive thoughts are demonically inspired. It sounds ‘hair-raising’ (and usually is!), but we are subject to their promptings pretty much all the time, in a host of different ways, however appropriately or inappropriately we may respond.

    However, that initial, immediate apprehension, indeed, intuiting, of the full meaning of the word in both cases is paralleled by the play of both the sportsman when ‘in the zone’, and on occasions by a person while in his prayers.

    The coordination of heart and mind when praying is not always automatic and easy, particularly when tired, when the mind can wander off following the heart’s discursive meanderings, instead of its leading and controlling the path of the thoughts of the heart.

    It seems that the way to remedy this distractedness is to pray faster. I used to find it a little shocking that a priest could lead praying of the Rosary at what seemed to me to be in an unseemly haste, but later discovered that one can not only ‘get into the flow’ at a mundane level, actually focusing the mind by reciting the words, but on occasions, get into the sportsman’s ‘zone’ by doing that, so that, not only does one immediately, effortlessly and seamlessly apprehend the meanings of the words but can simultaneously reflect on the Mysteries relating to Christ’s life, death, Resurrection and Ascension concerned. Almost as though, at the same time, one were a spectator. It doesn’t always happen that way, but it’s nice when it does.

  38. 38
    Zachriel says:

    mjoels: Our brain (if we are really just meat computers) can expand its problem space infinitely… Even the smallest life can do it, albeit to a limited degree.

    How can something be infinitely expandable, but limited in degree?

    Querius: When a non-conscious machine of some kind such as a recording device is involved in a quantum experiment, unless the recording is observed/observable by a human, the wave function does not collapse, and the machine becomes entangled with the quantum experiment.

    Nowadays, wave function collapse is normally analyzed as a case of quantum decoherence, which occurs with any macroscopic interaction or system with many degrees of freedom.

  39. 39
    Mung says:

    How can something be infinitely expandable, but limited in degree?

    Darwinism in a nutshell.

  40. 40
    Mapou says:

    kairosfocus:

    Mapou,

    reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU.

    Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms — I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of “explanation” in response to query, etc.

    Such things of course are intelligently designed.

    Of course, they are intelligently designed and so is the brain. But this is irrelevant to whether or not a machine can be just as intelligent as you and I. Your use of the word ‘blind’ to refer to mechanisms is erroneous. There is nothing blind about concurrent and sequential pattern detectors. The opposite is true. They are not blind to the sensory patterns that they detect.

    From what you are saying, you have been developing a system capable of detecting characteristic patterns and locking to a target once acquired, resisting a fair degree of background noise or interference. Such is an achievement, one that is again functionally specific, complex, organised, information-rich — i.e. FSCO/I — and it is obviously intelligently designed. (BTW, note the military implications.)

    Believe me, the implications, military and otherwise, have not escaped me.

    I bring forward the FSCO/I point to underscore that AI systems as implemented fundamentally reveal their source in design.

    That is not crucial; what is crucial is the difference between inherently blind mechanism and insight-based rationality. Reduction to tokens used as symbols, stored in data structures, then processed by mechanical step-by-step algorithms to yield programmed results through essentially mechanical cause-effect chains is not rational insight and inference. Nor is it responsible, rational freedom.

    I disagree. Insight is a way of saying that some bits of knowledge are connected to some others. This is a normal characteristic of hierarchical knowledge systems. By the way, my learning algorithm is unsupervised, meaning that, unlike current deep learning programs, it does not require that a label or symbol be attached to the audio data. Rebel Speech is not an expert system. It is non-symbolic: no tokens, no symbols, no labels. Just sensory data.

  41. 41
    Popperian says:

    It’s also a fallacy to assume that computers can do things they are currently not capable of doing.

    How do you determine what a computer is or is not capable of doing?

    Again, I would suggest that there are a number of things computers can do, but are currently incapable of doing because we haven’t figured out how to program that capability yet.

    For example, part of what makes the internet so powerful is that systems can dynamically determine the shortest route through a number of nodes in a network. The algorithm that makes this possible is called Dijkstra’s algorithm. The earliest universal computers did not exhibit this capability, because Dijkstra’s algorithm had yet to be developed. Yet we knew it was possible, not because we had actually achieved it, but because of the explanatory theory about how computers do what they do.
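    Popperian’s route-finding example can be made concrete. Below is a minimal sketch of Dijkstra’s shortest-path algorithm in Python; the toy network, node names and link costs are invented for illustration (real internet routing protocols such as OSPF apply a link-state variant of this idea, so this is only a sketch of the core computation).

```python
import heapq

def dijkstra(graph, source):
    """Compute shortest-path distances from source.

    graph: dict mapping node -> list of (neighbor, weight) pairs,
    with non-negative edge weights (a requirement of Dijkstra's algorithm).
    Returns a dict mapping each reachable node to its minimum total cost.
    """
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]          # min-heap of (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:       # stale heap entry; already finalized
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if neighbor not in dist or nd < dist[neighbor]:
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A hypothetical four-router network; edge weights represent link costs.
network = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(network, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

    Note that the cheapest route from A to D here is A → B → C → D (cost 4), not the direct-looking A → B → D (cost 6): the algorithm discovers this by always finalizing the closest unvisited node first.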

    In the same sense, the laws of physics are such that a digital computer can simulate any other physical system, not just another computer, with arbitrary precision.

  42. 42
    anthropic says:

    Mung 39
    Q: How can something be infinitely expandable, but limited in degree?

    A: Darwinism in a nutshell.

    Post of the day, Mung!

  43. 43
    Virgil Cain says:

    Zachriel:

    How can something be infinitely expandable, but limited in degree?

    It was two different things, Zachriel. One, the human brain’s problem space, was infinitely expandable; the other, that of lower organisms, was limited in degree. So yes, one thing can be infinite while another can be limited, and the two can have similarities.

    Things that make you go hmmmmm….

  44. 44
    Box says:

    Mung #39,

    Perfect!

  45. 45
    Zachriel says:

    Mung: Darwinism in a nutshell.

    In what sense is Darwinism “infinitely expandable, but limited in degree”?

  46. 46
    Zachriel says:

    Virgil Cain: One- the human brain’s problem space- was infinitely expandable and the other-lower organisms- was limited in degree.

    How do you count the problem space for humans, as opposed to lower organisms?

  47. 47
    Silver Asiatic says:

    Z

    In what sense is Darwinism “infinitely expandable, but limited in degree”?

    In the literal sense.

  48. 48
    Zachriel says:

    Silver Asiatic: In the literal sense.

    That’s not even close to an answer. Take the definition of Darwinism, and then show in what manner it is “infinitely expandable, but limited in degree.”

    By Darwinism, are you referring to the modern theory of evolution, or to Darwin’s original theory, or something else? By infinitely expandable, do you mean the theory is infinitely expandable, or are you referring to the capabilities of evolution?

  49. 49
    Virgil Cain says:

    Zachriel:

    How do you count the problem space for humans, as opposed to lower organisms?

    Via observation and experimentation, i.e. science. That is what has you confused.

    By Darwinism, are you referring to the modern theory of evolution, or to Darwin’s original theory, or something else?

    There isn’t any “modern theory of evolution”. Darwin tried but he didn’t produce a scientific theory either.

  50. 50
    Zachriel says:

    Virgil Cain: Via observation and experimentation, ie science.

    That vague statement doesn’t entail an enumeration.

  51. 51
    Virgil Cain says:

    Zachriel:

    That vague statement doesn’t entail an enumeration.

    It is only vague to the unknowledgeable. And the premise was about limitations, not enumeration.

    To recap: Zachriel totally messed up what mjoels said, got caught and is now in full flail mode.

  52. 52
    Zachriel says:

    Virgil Cain: the premise was about limitations, not enumeration.

    We asked about a count, and you responded that it was via “science”. But the question remains, how do you measure the human “problem space”?

  53. 53

    Gérard DuBois is a French illustrator and produces images for the NYT including an image for the cited article.

    Gary Marcus, a professor of psychology and neural science at New York University, actually wrote the article that O’Leary (NEWS) cites as being written by DuBois.

    Please correct the OP.

  54. 54
  55. 55
    News says:

    Thanks for proofreading, Lincoln Phipps! Must have snatched the wrong name from the screen. Corrected.

Leave a Reply