Intelligent Design

Elizabeth Liddle Agrees: Saying “It’s Emergent!” is no Better than Saying “It’s Magic!”

For some years now I have argued that when it comes to explaining the existence of consciousness (subjective self-awareness), materialists have nothing interesting to say, that their so-called explanation amounts to nothing more than “poof! It happened.” See here, here and here. I was gratified to learn in a recent exchange that Elizabeth Liddle agrees with me at least at a certain level. In various places in that exchange she wrote:

Certainly an emergent property must be explained in terms of the system; and clearly an explanation must be “systematic” in the sense of specifying a cascade of mechanisms. . . .

[“Emergent” is] simply a word to denote the idea that when a whole has properties of a whole that are not possessed by the parts, those properties “emerge” from interactions between the parts (and of course between the whole and its environment). It is not itself an explanation – to be an explanation you would have to provide a putative mechanism by which those properties were generated. . . .

So the claim that consciousness is an emergent property of the materials of our bodies is not an explanation – it’s a conjecture. “[I]t’s emergent” would be [on an intellectual par with saying “It’s magic!”]. To support an emergent hypothesis you would have to provide a description of the putative processes by which the property emerges. So I agree with that.

In this respect Liddle apparently agrees with Thomas Nagel: “Merely to identify a cause [of consciousness] is not to provide a significant explanation, without some understanding of why the cause produces the effect.” (Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False)

For Nagel, to qualify as a genuine explanation, an emergent account would make the connection between mental events such as subjective self-awareness and the electro-chemical state of the nervous system “cease to seem like a gigantic set of inexplicable correlations and would instead make it begin to seem intelligible.” Nagel concedes, however, that at this point a systematic theory of consciousness is “a complete fantasy.”

I agree with Nagel. Science has not come remotely close to explaining how a physical event (the electro-chemical processes in the brain) can result in mental events (e.g., qualia; subjective self-awareness; intentionality; subject-object duality, etc.).

Liddle disagrees. She says that scientists have in fact identified how physical events result in mental events and she repeatedly directed us specifically to the work of Edelman and Tononi in A Universe Of Consciousness: How Matter Becomes Imagination. She gives a faint sketch of Edelman/Tononi’s argument:

But I think the essence of the answer lies in our capacity to simulate the outputs of our actions before we execute them and feedback those outputs as inputs into the action-selecting process. That allows us to both anticipate and remember in what Edelman calls a “remembered present”, in which past and possible futures are integrated.

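Stripped of the neuroscience, the architecture Liddle sketches here is a feedback loop: candidate actions are simulated before execution, and the simulated outcomes are fed back as inputs to the selection process. A minimal toy sketch of that loop (every name and number below is a hypothetical illustration, not anything drawn from Edelman and Tononi):

```python
# Toy sketch of a "re-entrant" action-selection loop: the simulated
# outcome of each candidate action is fed back as input to the
# selection process before anything is executed. All names and
# numbers are hypothetical illustrations.

def simulate(state, action):
    """Predict the outcome of an action without executing it."""
    return state + action  # stand-in for a forward model


def select_action(state, candidates):
    """Pick the candidate whose simulated outcome scores best."""
    best, best_value = None, float("-inf")
    for action in candidates:
        predicted = simulate(state, action)  # the unexecuted "decision"
        value = -abs(10 - predicted)         # score the predicted outcome
        if value > best_value:               # prediction re-entered as input
            best, best_value = action, value
    return best


state = 7
chosen = select_action(state, candidates=[1, 2, 3, 5])
print(chosen)  # -> 3, since 7 + 3 lands exactly on the target of 10
```

Note that a sketch like this merely computes; whether any amount of such computation could ever amount to subjective awareness is precisely the question at issue.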
At Liddle’s behest, I have read A Universe Of Consciousness. The authors summarize their key conclusion as follows:

Memory is a central component of the brain mechanisms that lead to consciousness. . . . the key conclusion is that whatever its form, memory itself is a system property. It cannot be equated exclusively with circuitry, with synaptic changes, with biochemistry, with value constraints, or with behavioral dynamics. Instead, it is the dynamic result of the interactions of all these factors acting together, serving to select an output that repeats a performance or an act.

As anyone with any experience in this area would have suspected, Edelman and Tononi identify consciousness as an emergent property. But, according to Liddle, they have gone a step further and identified at least some of the details of how consciousness arises from chemicals. Could this really be the case? Thomas Nagel has been among the most famous and influential philosophers of mind since the early 1970s. He says that a systematic theory of consciousness is “a complete fantasy.” Does Elizabeth Liddle know something that Nagel doesn’t?

You will probably not be surprised to learn that the answer to that question is “no.” But don’t take my word for it. In his review of A Universe Of Consciousness for Nature, Raymond J. Dolan wrote: “Explaining consciousness has become the Holy Grail of modern neuroscience. Any reckoning on who has found the true path is surely premature.”

In his review for The Guardian, Steven Poole wrote:

Few people these days seriously doubt that consciousness arises solely from physical activity inside our skulls. But the big question is how this happens. Why does matter arranged in this way, and not others, give rise to minds? This is a question that Gerald Edelman and Giulio Tononi signally fail to answer, despite the grand promise of their subtitle.

Where has Liddle gone wrong? I can give no better answer than UD commenter Box, who wrote in that same exchange:

The book doesn’t help you at all, it’s a classic example of the good old cum hoc ergo propter hoc – ‘correlation is causation fallacy’. Evidence is provided suggestive of consciousness being *associated* with interconnected regions of the brain. And from this, Edelman and Tononi conclude that consciousness *arises* from the brain. IOW no mechanism that describes how to get from chemicals to consciousness, but a questionable cause logical fallacy instead.

In other words, Edelman and Tononi have asserted as an explanation exactly what Nagel said does not count as a genuine explanation – a gigantic set of inexplicable correlations.

The issue here is really very, very simple. And for that reason I am always amazed when highly educated and articulate people like Liddle utterly fail to grasp it. I will try one more time to lay it out step by step.

1. Merely identifying a putative cause is not an explanation.

2. To count as an explanation, one must also give some understanding of why the putative cause produces the effect.

3. Asserting that physical brain state “A” exists (whatever “A” happens to be) and consciousness exists merely identifies a correlation.

4. For physical brain state A to count as an explanation of consciousness, one must also provide an understanding of why that physical event gave rise to that mental event.

5. This has never been done; no one has come close to doing it. There is good reason to believe it is not, in principle, possible to do it.

141 Replies to “Elizabeth Liddle Agrees: Saying “It’s Emergent!” is no Better than Saying “It’s Magic!””

  1. Andre says:

    Of course this problem won’t stop Dr Liddle or any of her cohorts in their belief in the power of emergence. What it does do, however, is make clear to everyone with more than two brain cells that materialists are the most superstitious bunch of people on this planet: they absolutely believe in magic, and they do so dogmatically!

  2. kairosfocus says:

    BA,

    Pardon a clip from another thread where sparc (in 144, which I replied to at 148) so unwisely thought to sweep away the descriptive abbreviation, FSCO/I:

    149 kairosfocus May 4, 2015 at 4:12 am

    F/N: FSCO/I is BTW a genuine, legitimately accounted for case of the emergent behaviour of systems comprising interacting parts. But, of course, while it readily gets you to mechanical GIGO limited computation, it will not allow you to indulge the fantasy of poof, we get North to rational self-aware contemplation by insistently heading West to blindly mechanical computation. KF

    There is a real emergence, but it won’t do what is wanted by the evolutionary materialists, to create the impression of explaining mind, morals and responsible freedom on blind chance and mechanical necessity.

    Where, FSCO/I has just one empirically grounded adequate causal explanation. Design — intelligently directed configuration.

    KF

    PS: And since you mention Box, let me cite the clipped remark by Reppert (building on C S Lewis and thence J B S Haldane etc) that so struck him when I recently quoted it:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

  3. Box says:

    In the OP, Barry quotes Edelman and Tononi saying that memory is a central component wrt consciousness.
    Here are the same writers saying that consciousness depends on the emergence of language:

    Our position has been that higher-order consciousness, which includes the ability to be conscious of being conscious, is dependent on the emergence of semantic capabilities and, ultimately of language.

    [source: Edelman and Tononi, “A Universe Of Consciousness: How Matter Becomes Imagination”, p.208.]

    Whatever. One thing is for sure: no biology-to-consciousness mechanism is offered.

    Steven Poole: Few people these days seriously doubt that consciousness arises solely from physical activity inside our skulls.

    The insanity of our days on display. It is utterly incoherent not to doubt a hypothesized mechanism that you don’t begin to understand.

  4. Elizabeth Liddle says:

    My response (and it would be my response to Ray Dolan too) is that Tononi and Edelman were indeed on the right track, that subsequent work (including work by Tononi, in collaboration with Karl Friston, a close colleague of Ray Dolan’s) has fleshed out a great number of details (note that Dolan’s review was in 2001) and that what follows the part of his review that you quoted is:

    Nevertheless, the account of consciousness provided by Edelman and Tononi is certainly highly plausible and can be recommended as one of the most ambitious accounts around.

    I’m not going to get into a long discussion about this, not because the answer is too detailed, but because the issue is whether the details that have emerged since Edelman and Tononi wrote their book are even relevant to the question.

    I believe that they are, because I think the reason that they appear to some (including Dolan in 2001) to be inadequate isn’t that the claim is “premature” or insufficiently detailed, but because the more fundamental issue is over the nature of the question.

    The reason I think Edelman and Tononi were on the right lines is that their model of a “dynamic core” (for which we now have much better evidence) of re-entrant loops by which simulated output from a decision-making process is re-entered as input is what allows us to model the world on a map in which we ourselves are represented, thus allowing us to represent ourselves simultaneously as both subject and object.

    It’s that re-entrant architecture, in my view, that is key to the question as to why we do not simply react to stimuli, but perceive ourselves as the reactors.

    Thus we are able not only to model the world as including dynamic objects and intentional agents, but to model it as one that includes ourselves as one of those intentional agents – not only able to make goal-oriented decisions, but to model ourselves as the agent of those decisions.

    So Ray was right in 2001 that in 2001 there was still a big project ahead, and there still is.

    But where Ray differs from, say, Chalmers, is that he, like Edelman and Tononi (and me), does not think the problem is intractable, or “Hard”, as Chalmers calls it (at least I think that is Ray’s view).

    Personally, I think the answer is “hiding in plain sight” – not in detail (as Dolan says, there is a lot of work to be done, even 14 years later), but in principle, and that the reason the problem seems “Hard” is because it is often couched as a search for an explanation for a “state” rather than for a “process”.

    I do not think that consciousness is best viewed as a state. Doing so leads to the hypothetical existence of “philosophical zombies” – quasi-humans who act identically to ourselves but are not in a “conscious state” and thus experience nothing, only act.

    I think the phrase “conscious state” is misleading because it suggests that it is possible to be in such a “state” and yet there to be nothing we are conscious OF. And I think that a state in which we are conscious of nothing (including being conscious of being conscious of nothing!) is incoherent. I would say that to be conscious, we must be conscious OF something, even if all we are conscious of is that we are “conscious of nothing” (for instance, people in sensory-deprivation tanks often report being “conscious of nothing” – but that implies that they were conscious of being conscious of nothing).

    And if we allow that to be conscious we must be conscious OF something, even if that something is no more than “absence”, then I think that Edelman and Tononi’s model works just fine: it provides the basis (by means of what they call “re-entry”) of the architecture by which our models of the world are themselves the subject of consciousness – we are not only aware of the world (as a philosophical zombie must be, if it is to navigate it successfully) but aware of our awareness of the world.

    Anyway, Barry, thank you for reading the book, and for reading Ray Dolan’s review. As I think I said, I was not convinced you would find it persuasive, and you didn’t – but I do, especially because subsequent work has tended to confirm their hypotheses about the neural architecture of re-entry, and the existence of a “dynamic core”.

    But the reason I find it persuasive is not the neuroscience per se (though the theory would fall down if their model was not supported by data), but because, philosophically, I am of the view that consciousness is the kind of thing that can be explained by re-entry, i.e. that it requires an object.

    If you think that consciousness can be a pure “state” with no object (nothing we are conscious “of”) then the book will not explain it – and indeed, as you rightly say – no book ever will.

    Either because it is irreducibly mysterious, or because it is not coherent – like a four-sided triangle. I think it is the latter, but I respect the views of people who think the former! I was one myself for half a century.

  5. Box says:

    Lizzie:
    But the reason I find it persuasive is (..) because, philosophically, I am of the view that consciousness (…) requires an object.

    Whatever. The question is: what has this to do with a step-by-step explanation of how to get from chemicals to consciousness?
    You don’t seem to get the point Barry is making: the book is not unpersuasive because it provides an unpersuasive step-by-step explanation of how to get from chemicals to consciousness, but because there is no attempt to present such an explanation at all.

    Last time I checked, one cannot be persuaded by a non-existent explanation.

  6. Elizabeth Liddle says:

    If by “step by step explanations of how to get from chemicals to consciousness” Barry wants a detailed account of neurodevelopment from conceptus to conscious person, then sure, that is beyond what science can currently, and probably ever, provide.

    But that is not the kind of explanation science usually provides – if you want to know how a heap of rocks came to be at the bottom of a cliff, we can explain it in terms of generalised principles such as weathering and gravity, without accounting for the precise trajectory of every rock.

    What the book does (and subsequent neuroscience papers have been supporting with evidence in the intervening 14 years) is to propose an overall mechanism by which the developed organism has conscious capacity, in terms of a neural architecture that allows for the re-entry of the output of unexecuted decisions as input into the decision-making process, by means of a “dynamic core” of brain networks. In terms of how those networks actually form out of the raw materials we (and our mothers) eat, you would need to know just how gene expression works during development, and we are only touching the surface of that.

    But if that is really what Barry is asking for, then indeed, no we don’t know exactly how the neural architecture comes about, in step-by-step embryological and molecular detail, but then we don’t know that detail for legs or ears, either.

    The book is about how such physical architecture gives rise to the capacity for consciousness.

  7. Silver Asiatic says:

    So the claim that consciousness is an emergent property of the materials of our bodies is not an explanation – it’s a conjecture. “[I]t’s emergent” would be [on an intellectual par with saying “It’s magic!”].

    That sounds like an honest response. It’s a conclusion that is often resisted by some who cite emergence as an explanation – but I think many can see that there’s really nothing to that at all. It’s not an explanation. Even as a conjecture, it’s hard to think that “it happens by magic” would be much of a conjecture.

    If we said that “magic requires a magician (thus intelligence)” or more simply, the appearance of emergence is evidence of design built into the properties of the thing … that’s more than a conjecture.

  8. Box says:

    Lizzie:

    If by “step by step explanations of how to get from chemicals to consciousness” Barry wants a detailed account of neurodevelopment from conceptus to conscious person, (…)

    Nope. You are allowed to start with a full-grown brain.

    Just explain step-by-step the process that gets us from brain-chemicals to consciousness.

    Lizzie: we don’t know exactly how the neural architecture comes about, in step-by-step embryological and molecular detail,(…)

    Embryological? Why do you suggest that this is what Barry has been asking? As far as I can tell, no one has asked you to explain this. Where did you get the idea that this discussion is about neurodevelopment from conceptus?

  9. Elizabeth Liddle says:

    OK, then I refer you to my response vis-a-vis rocks at the bottom of a cliff.

    We can explain them in terms of generalisable principles. The generalisable principle put forward (in some considerable detail) by Edelman and Tononi is, as I said, the principle of re-entry, by means of a “dynamic core” of brain networks.

    The reason I mentioned “embryological processes” is that we all start life as a single undifferentiated cell, so if you (or Barry) had wanted a step by step account of how the food that the mother eats (the “chemicals”) become the architecture of the child’s brain, you would need an embryological account.

    But as you have clarified that this is not what you are asking for (and I assumed that Barry was not) I refer you to my response regarding the proposal by Edelman and Tononi.

  10. Barry Arrington says:

    Box:

    the book is not unpersuasive because it provides an unpersuasive step-by-step explanation of how to get from chemicals to consciousness, but because there is no attempt to present such an explanation at all.

    Liddle erects a straw man in response:

    If by “step by step explanations of how to get from chemicals to consciousness” Barry wants a detailed account of neurodevelopment from conceptus to conscious person, then sure, that is beyond what science can currently, and probably ever, provide . . . no we don’t know exactly how the neural architecture comes about, in step-by-step embryological and molecular detail, but then we don’t know that detail for legs or ears, either.

    It should be obvious that Box and I are not asking for an account of the development of the brain (from conceptus or any other stage). We are asking for an account of how the chemicals in a currently-existing fully developed and mature brain result in consciousness. To use your terms, we are not asking for a description of the development of legs or ears. We are asking how legs result in walking and ears result in hearing.

    And science does in fact answer these questions. Here is Wikipedia’s thumbnail description of how the ear results in hearing:

    Sound that travels through the outer ear impacts on the tympanic membrane (ear drum), and causes it to vibrate. The three ossicles transmit this sound to a second window (the oval window) which protects the fluid-filled inner ear. In detail, the pinna of the outer ear helps to focus a sound, which impacts on the tympanic membrane. The malleus rests on the membrane, and receives the vibration. This vibration is transmitted along the incus and stapes to the oval window. Two small muscles, the tensor tympani and stapedius, also help modulate noise. The tensor tympani dampens noise, and the stapedius decreases the receptivity to high-frequency noise. Vibration of the oval window causes vibration of the endolymph within the ventricles and cochlea.

    The hollow channels of the inner ear are filled with liquid, and contain a sensory epithelium that is studded with hair cells. The microscopic “hairs” of these cells are structural protein filaments that project out into the fluid. The hair cells are mechanoreceptors that release a chemical neurotransmitter when stimulated. Sound waves moving through the fluid flow against the receptor cells of the Organ of Corti. The fluid pushes the filaments of individual cells; movement of the filaments causes receptor cells to become open to the potassium-rich endolymph. This causes the cell to depolarise, and creates an action potential that is transmitted along the spiral ganglion, which sends information through the auditory portion of the vestibulocochlear nerve to the temporal lobe of the brain.

    Box’s point (and mine) is that Tononi and Edelman have not even attempted to provide a similar account tracing the causal links that lead from the electro-chemical processes in the brain to, for example, subjective self-awareness. And no one else has either.

    You say much progress has been made since 2001. If you mean the proposed set of inexplicable correlations has grown ever more gigantic, who could argue with that? If you mean someone has taken even the first step in showing how physical events result in mental events, then no that has not happened. And you know it has not happened; else you would point us to that work.

    There is a vast ontological gulf between the physical things in the brain and mental things such as qualia; subjective self-awareness, intentionality, and subject-object duality. Not only is it true that no one has come close to bridging that gulf; it is true that no one has taken even the very first baby step in doing so. And the reason for this should be obvious. Trying to bridge the gulf between the physical and the mental is like trying to bridge the gap between onions and the number four. But like Sisyphus rolling his rock up the hill, materialists never cease in their futile efforts to bridge the unbridgeable. Sisyphus was compelled by the gods; materialist ideologues are compelled by their blinkered adherence to their incoherent metaphysics.

  11. Barry Arrington says:

    Liddle has fallen back on the following analogy:

    If you want to know how a heap of rocks came to be at the bottom of a cliff, we can explain it in terms of generalised principles such as weathering and gravity, without accounting for the precise trajectory of every rock.

    That is certainly true. And it also (apparently unintentionally) highlights the poverty of materialist “explanations” of consciousness.

    Here is a scientific account of Case 1: How rocks came to rest at the bottom of a hill.

    The rocks came to rest at the bottom of the hill because water acting in accordance with well understood principles of hydro-dynamics carried away the soil supporting the rocks on the side of the hill. When that soil was carried away, the rocks came loose from the side of the hill and rolled to the bottom in accordance with our models of gravity.

    Contrast that with Case 2: How chemicals cause consciousness.

    There is a neural architecture that allows for the re-entry of the output of unexecuted decisions as input into the decision-making process, by means of a “dynamic core” of brain networks. And there is also consciousness. The first thing causes the second thing by means of “emergence” (synonym: “magic”).

    It is a mystery why Liddle believes that a scientific explanation that appeals to well-known and well modeled regularities is analogous to a “scientific” explanation that appeals to a gigantic set of inexplicable correlations.

  12. Barry Arrington says:

    Off to work for me.

  13. Neil Rickert says:

    For Nagel, to qualify as a genuine explanation, an emergent account would make the connection between mental events such as subjective self-awareness and the electro-chemical state of the nervous system “cease to seem like a gigantic set of inexplicable correlations and would instead make it begin to seem intelligible.” Nagel concedes, however, that at this point a systematic theory of consciousness is “a complete fantasy.”

    That would be a reductive account, not an emergent account.

    I agree that just saying “It is emergent” isn’t saying much at all. However, we should allow the possibility (even likelihood) that an emergent account would toss out the conventional terminology (terms such as “mental event”) and use a fundamentally different analysis.

  14. phoodoo says:

    Well, Lizzie also struggles with deciding if her version of evolution is called directed or undirected.

    The twisted logic goes like this:

    We know what happens if we put salt into water.

    See, it’s not random, and it’s not undirected, so it must be directed, but not designed.

    No one believes in undirected evolution anymore, don’t be ridiculous. And there is no designer.

    Whether or not you accept evolutionary processes as a scientific explanation for the diversity of effective adaptations we observe in biological organisms, it has no bearing on whether atheism or theism are true (but it’s not random, but the fact that it is not random does not mean that if it’s not random, it’s planned).

    Humpty Dumpty sat on a wall.

    I have long given up believing that her arguments even resemble an intelligent attempt at a reply. She just knows two words to fend off all problems of explaining evolution: “modern synthesis” and “emergence.”

    She can collect a million dollars from Randi, because she has proven that magic does exist: just say these two words, and you can turn off your brain.

    Now ask Lizzie to name one paper which provides the best evidence for evolution. Suddenly she can’t speak.

  15. Box says:

    Lizzie: (…) conscious capacity, in terms of a neural architecture that allows for the re-entry of the output of unexecuted decisions as input into the decision-making process, by means of a “dynamic core” of brain networks.

    By what reason is it justified to term physical processes in the brain “unexecuted decisions” or a “decision-making process”? When studying the brain we don’t see “decisions”, we see chemical stuff.

  16. phoodoo says:

    neil,

    Why would calling something an emergent account require a different analysis from calling our thoughts a mental event?

    What is an emergent event, and how is it different?

    It seems your side is wanting to invoke magic into the word emergent, which as far as I can tell only means the result of a completed system, as opposed to just using some of the parts. Like a car doesn’t run with just an engine, if it has no drive train and wheels. The only emergence is that the completed car works, whilst the incomplete car goes nowhere.

    What’s the big deal?

  17. Elizabeth Liddle says:

    Box

    By what reason is it justified to term physical processes in the brain “unexecuted decisions” or a “decision-making process”? When studying the brain we don’t see “decisions”, we see chemical stuff.

    Well, no, we don’t see “chemical stuff”, usually, in living brains, and in any case, when we study the brain, we study what it does when someone is using it!

    And one big area of research is into decision-making, as that is key to the whole domain of cognitive neuroscience.

    So yes, we do see decisions – we see what the person decides, and we see what her brain is doing during the process of making that decision, and we can also see what the brain does when the decision is hard, or leads to an error, for instance.

    We can also see differences between what the brain does when a person subsequently reports that they were aware of having made an error, and when they weren’t.

    So neuroimaging can tell us a lot about the neural processes that underpin decision making.

    In particular, we do experiments that allow us to see what happens when a person starts to decide to take one action, but then ends up taking another, and at what point in the process an action becomes “irrevocable”.

    One field of study that has been particularly useful in this regard is eye movements. When we make an eye movement, we first shift our attention to the thing we are about to focus on. This can be thought of as an “unexecuted eye movement” (and can be cancelled) – it engages the same brain regions we use to actually execute the movement, but to a less marked extent.

  18.

    I think it is important to be specific regarding what we can and cannot explain even upon acknowledging that “the hard problem” of consciousness may indeed be “hard.”

    It is fair to ask: in what organisms is consciousness present? Using Nagel’s definition that consciousness is present when “it is like something to be a ____ ,” I’d say that it is certainly “like something” to be a cat, dog or any other mammal. By that definition, mammals are conscious. Alternatively, it probably isn’t like something to be a rock or a chair. It follows that field mice pose the “hard problem” no less than human consciousness. Why does organizing matter into a field mouse, with a mouse brain, result in subjective experience (not just behavioral complexity) – while organizing matter into a rock or a chair does not?

    That’s the hard problem. Note that the hard problem is posed by mouse-consciousness even absent human cognitive abilities such as speech or anything resembling verbal thought, representation of self versus others, theory of mind, declarative memory, the human capacity for abstraction and imagination, envisioning alternative futures and choosing behaviors on that basis, a sense of self that persists throughout one’s lifetime, and so on.

    The hard problem notwithstanding, neuroscience (and clinical experience with brain injury) explains a great deal about what it is that endows a human being, and not a field mouse, with those cognitive capacities and how they work – what “causes” them in human beings, and not in mice. It therefore explains the material basis of those capacities – “explains” in the sense of understanding why they are present in human beings but absent in field mice – even given the unsolved “hard problem.” Without resort to “immaterial minds,” neuroscience also has a great deal to say about sensory and cognitive endowments that are shared by field mice and human beings: place memory, various fundamental affective and motivational states (e.g., fear), classical conditioning and operant learning, the basics of visual and auditory processing, the basics of voluntary behavior, and so on. Bear in mind (something a mouse can’t do – and we know why) that there are many other levels of explanation – such as evolutionary, social, developmental etc. – for those capabilities of which we also have a very good grasp. Altogether there is a massively successful research program underway that may eventually shed light on even the hard problem, or at least how to think about it productively.

    At the same time, postulating dualism, and “an immaterial mind,” offers exactly zero explanation for the presence of these human endowments and their absence in mice. Moreover, it is as helpless before the “hard problem” as any materialistic explanation – and offers no hooks from which to bootstrap scientific investigation into that problem.

  19.
    Box says:

    Lizzie #17,

    too much stupidity and nonsense to merit a response

  20.
    Barry Arrington says:

    In response to the scientism on display at Liddle’s comment at 17, I commend to our readers clinical neuroscientist Raymond Tallis’ What Neuroscience Cannot Tell Us About Ourselves (see here).

    I will excerpt only a couple of snippets below, but I highly recommend the entire article for those seeking an antidote to the neuroscientism snake oil Liddle is peddling.

    [Neuroscience] reveals some of the most important conditions that are necessary for behavior and awareness. What neuroscience does not do, however, is provide a satisfactory account of the conditions that are sufficient for behavior and awareness.

    The pervasive yet mistaken idea that neuroscience does fully account for awareness and behavior is neuroscientism, an exercise in science-based faith.

    Liddle: “So yes, we do see decisions”

    This statement is absurd. We do not “see” decisions. We see material events; not mental ones.

    Tallis again:

    Like all material items, nerve impulses lack appearances absent an observer. And given that they are material events lacking appearances in themselves, there is no reason why they should bring about the appearances of things other than themselves. It is magical thinking to imagine that material events in a material object should be appearings of objects other than themselves. Material objects require consciousness in order to appear.

    There we are again. Liddle is pushing materialist magic.

  21.
    Barry Arrington says:

    RB @ 19:

    The hard problem notwithstanding, neuroscience (and clinical experience with brain injury) explains a great deal . . .

    No one is arguing that it does not. Of course it does. Liddle has not erred by suggesting what neuroscience does and can explain. She has erred by averring that neuroscience explains what it has not explained (and in principle probably can never explain). Her faith in scientism is very strong, and her error can be traced to her faith.

  22.

    Barry wrote:

    Here is a scientific account of Case 1; How rocks came to rest at the bottom of a hill.

    The rocks came to rest at the bottom of the hill because water acting in accordance with well understood principles of hydro-dynamics carried away the soil supporting the rocks on the side of the hill. When that soil was carried away, the rocks came loose from the side of the hill and rolled to the bottom in accordance with our models of gravity.

    Contrast that with Case 2; How chemicals cause consciousness

    There is a neural architecture that allows for the re-entry of the output of unexecuted decisions as input into the decision-making process, by means of a “dynamic core” of brain networks. And there is also consciousness. The first thing causes the second thing by means of “emergence” (synonym: “magic”).

    OK, well, let me try one more time (but I’m not going to start with “chemicals” as you have already stipulated that you don’t want the story of how the ingredients of food become brain-stuff – I’m going to start with brain architecture):

    Let’s take the example of being conscious that someone has entered the room.

    First of all, sensory signals arrive at the sense organs – the sounds of the door latch, footsteps, breathing, possibly a polite cough, possibly a slight change in the illumination on your newspaper, or whatever – possibly even a slight change to the infra-red flux on your skin. Those sensory signals arrive at the primary sensory regions of the brain and, through both hard-wired (i.e. configurations governed by genes) and learned circuitry, reproduce in the brain the same firing patterns as are regularly associated with the event in question – patterns associated with the proximity of a human being, and possibly with a specific human being, combined with patterns associated with the approach of a human being, at that particular door.

    This cascade of network firing is re-entrant – each time round the circuit, parts of the circulating pattern are reinforced by responding signals from some parts of the brain, and inhibited by others. So, in the early stages, fear responses may be initiated, but inhibited by responses from parts of the brain that represent associations of those particular footsteps with familiarity.

    And as this re-entrant process continues, rather like a complex pattern of vortices, each of which can reinforce, or inhibit others, patterns that will send activation to muscles that will execute a response (turning the head, shifting gaze, preparing a greeting) are also part of the swirl.

    And, in addition to this, just as brain regions are activated by association with the presence of the person who just entered, so brain regions associated with the presence of YOU, including your location, mental state, etc, are also brought into the vortex, adjusting the output to the muscles (smile muscles, standing up muscles, get-ready-to-hug muscles etc).

    And all the time, this complex cascade of neural firing is feeding back the effects of the previous output, in a non-linear process that integrates information not only as to who has entered the room, but also what relation they bear to you, what your own response is, and whether you are happy with that response, thus actually creating a system in which you, the owner of the brain, are represented, dynamically, in relation to the other person in the room, in a manner that allows the two of you to interact.

    And thus we have a system in which one entity – you – is conscious not only of the presence of another – the person who just entered – but of your own presence, and the relationship, and a banquet of possible actions that could be taken, and events that might transpire, as a result of the person’s arrival.

    And the reason that “re-entry” and the “dynamic core” are crucial (both of which we have good evidence for) is that the re-entry allows the system to be both non-linear and dynamic, giving rise to the capacity not merely to “be conscious” but to be conscious OF your own presence, your location, your future possible actions, the other person’s presence, their possible future actions, all constantly updated in the light of both new sensory input and returning input from the brain itself.

    I’m sure it won’t persuade you, but, as I say, it persuades me.

  23.

    Barry:

    Her faith in scientism is very strong, and her error can be traced to her faith.

    Then trace it.

  24.
    Barry Arrington says:

    RB @ 19 continued

    At the same time, postulating dualism, and “an immaterial mind,” offers exactly zero explanation for the presence of these human endowments and their absence in mice.

    Unless dualism is true and an immaterial mind exists, in which case it would be the explanation. And why should we rule out dualism a priori? “Because it is not consonant with my metaphysics,” you might say. To which I would respond, “And why, exactly, should I put on blinkers just because you find them amenable?”

    Moreover, it is as helpless before the “hard problem” as any materialistic explanation – and offers no hooks from which to bootstrap scientific investigation into that problem.

    Which probably means that the problem is not susceptible to a scientific answer. I am OK with that. Whoever said that only scientific answers are permitted?

  25.

    Barry

    Liddle: “So yes, we do see decisions”

    This statement is absurd. We do not “see” decisions. We see material events; not mental ones.

    I meant, of course, the decisions that people make. Some are indeed invisible (poker players are good at keeping them that way), but most are manifest in action.

  26.
    Popperian says:

    To quote from a comment on another thread…

    it seems that objections to emergence confuse a type of explanation with a specific emergent explanation itself. Merely saying “It’s emergent” isn’t an explanation, it’s a classification.

    For example, to determine how long it will take for water to boil, I do not need to know the initial conditions, an exact count of the water molecules, or the exact path each molecule will take, let alone that kind of detail for all of the external influences acting outside the tea pot. Those details would be intractable even for present-day computers running until the end of the universe. But, fortunately, in the majority of cases we don’t really care about those details. Their complexity resolves into a higher level of simplicity.

    If I want to make tea, all I need to know is the mass of the water, the power output of the heating element, etc., which are easy to measure. The relationships between containers, heating elements and boiling bubbles can be explained in terms of each other, without direct reference to the atomic level or anything lower. The behavior of this entire class of higher-level phenomena is quasi-autonomous, that is, nearly self-contained.
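
The boiling-time claim above is easy to make concrete: the answer falls out of a handful of bulk quantities, with no reference to individual molecules. A minimal sketch, with the mass, power, and heater efficiency below chosen purely for illustration:

```python
# Time to boil water from bulk quantities alone: mass, specific heat,
# temperature rise, and heater power. No molecular detail is needed.
# All numeric values here are illustrative assumptions.

def time_to_boil(mass_kg, start_c, power_w, efficiency=0.8):
    """Seconds to bring water from start_c up to 100 C."""
    SPECIFIC_HEAT = 4186.0  # J/(kg*K) for liquid water
    energy_j = mass_kg * SPECIFIC_HEAT * (100.0 - start_c)
    return energy_j / (power_w * efficiency)

t = time_to_boil(mass_kg=1.0, start_c=20.0, power_w=2000.0)
print(f"{t:.0f} s")  # about 209 s for a litre at 20 C on a 2 kW element
```

The higher-level description (mass, power) screens off the lower-level one (molecular paths), which is the quasi-autonomy being described.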

    IOW, emergence is the resolution of explainability at this higher, quasi-autonomous level. It’s a kind, or classification, of explanation.

    As such, it’s unclear why, even if we lack an emergent explanation for how consciousness emerges, this prevents us from saying that any such explanation would itself be at this higher, quasi-autonomous level.

    An example of this in the brain is our understanding of synaptic connections between neurons.

    To create a virtual neocortical column, the Blue Brain project had to distribute neurons in their simulation. This required them to create a higher level principle that was simple enough to allow distribution without mapping exact positions of neurons in a real neocortical column.

    From this article on the project:

    “This is a major breakthrough, because it would otherwise take decades, if not centuries, to map the location of each synapse in the brain, and it also makes it so much easier now to build accurate models,” says Henry Markram, head of the BBP.

    IOW, the connections between neurons resolve into a simpler, quasi-autonomous, higher level of explanation, which makes the simulation much easier to create and model. This is an example of an emergent explanation, which is a class of explanation, not a concrete explanation itself. Nothing about it is “magic”.

    From the article:

    Virtual Reconstruction

    To solve the mystery, a research team from the Blue Brain Project set about virtually reconstructing (simulated on a computer) a cortical microcircuit based on unparalleled data about the geometrical and electrical properties of neurons — data from over nearly 20 years of painstaking experimentation on slices of living brain tissue.

    Each neuron in the circuit was reconstructed into a 3D model on a powerful Blue Gene supercomputer. About 10,000 virtual neurons were packed into a 3D space in random positions according to the density and ratio of morphological types found in corresponding living tissue. The researchers then compared the model back to an equivalent brain circuit from a real mammalian brain.

    To their great surprise, they found that the locations on the model matched that of synapses found in the equivalent real-brain circuit with an accuracy ranging from 75 percent to 95 percent.

    Random connections

    This means that neurons grow as independently of each other as physically possible and mostly form synapses at the locations where they randomly bump into each other.

    A few exceptions were also discovered, pointing out special cases where signals are used by neurons to change the statistical connectivity. By taking these exceptions into account, the Blue Brain team can now make a near perfect prediction of the locations of all the synapses formed inside the circuit.

    The goal of the BBP is to integrate knowledge from all the specialized branches of neuroscience, to derive from it the fundamental principles that govern brain structure and function, and ultimately, to reconstruct the brains of different species — including the human brain — in silico. The current paper provides another proof-of-concept for the approach, by demonstrating for the first time that the distribution of synapses or neuronal connections in the mammalian cortex can, to a large extent, be predicted, EPFL scientists say.
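
The “random bump” result quoted above can be caricatured in a short simulation: scatter neurons at random and form a potential synapse wherever two fall within touching distance. The box size, touch radius, and neuron count below are invented for illustration; this is a toy sketch, not the Blue Brain model:

```python
# Toy sketch of statistical connectivity from random placement:
# neurons positioned at random form "synapses" wherever they come
# within touching distance of each other. All parameters are invented.
import math
import random

def synapse_count(n_neurons, box=100.0, touch=12.0, seed=0):
    rng = random.Random(seed)
    pts = [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
           for _ in range(n_neurons)]
    count = 0
    for i in range(n_neurons):
        for j in range(i + 1, n_neurons):
            if math.dist(pts[i], pts[j]) < touch:  # arbors overlap
                count += 1
    return count

# Individual contact locations differ between random placements, but
# the overall connectivity statistics are similar across seeds:
print([synapse_count(200, seed=s) for s in range(3)])
```

That is the sense in which the connectivity “resolves into a simpler, quasi-autonomous, higher level”: the statistics are predictable even when the particulars are not.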

  27.
    Barry Arrington says:

    For those interested in an extended account of why Liddle’s comment at 22 and Popperian’s comment at 26 do not come even remotely close to explaining consciousness, I again commend Tallis’s article, which I linked to in 20.

    Here is more:

    Physical science is thus about the marginalization, and ultimately the elimination, of phenomenal appearance. But consciousness is centrally about appearances. The basic contents of consciousness are these mere “secondary qualities.” They are what fill our every conscious moment. As science advances, it retreats from appearances towards quantifying items that do not in themselves have the kinds of manifestation that constitute our experiences. A biophysical account of consciousness, which sees consciousness in terms of nerve impulses that are the passage of ions through semi-permeable membranes, must be a contradiction in terms. For such an account must ultimately be a physical account, and physical science does not admit the existence of anything that would show why a physical object such as a brain should find, uncover, create, produce, result in, or cause the emergence of appearances and, in particular, secondary qualities in the world. Galileo’s famous assertions that the book of nature “is written in the language of mathematics” and that “tastes, odors, colors … reside only in consciousness,” and would be “wiped out and annihilated” in a world devoid of conscious creatures, underline the connection, going back to the very earliest days of modern physical science, between quantification and the disappearance of appearance.

  28.
    Popperian says:

    BA:

    Unless dualism is true and an immaterial mind exists, in which case it would be the explanation. And why should we rule out dualism a priori?

    We do not rule it out, Barry. It just doesn’t add to the explanation. As such we see no need to include it.

    What I want from explanations is their content, not their provenance. Apparently, you want more, but that’s your problem, not mine.

  29.
    Barry Arrington says:

    Popperian @ 28:

    We do not rule it out, Barry. It just doesn’t add to the explanation. As such we see no need to include it.

    Reminds me of the guy walking down the street at night. He encounters a man on his hands and knees under a street lamp and the following exchange ensues:

    Man 1: What are you looking for?

    Man 2: My keys.

    Man 1: Where did you lose them?

    Man 2: About a hundred yards from here.

    Man 1: Then why don’t you go look for them in the area where you lost them?

    Man 2: Silly, the light is much better here.

  30.

    Can I point out, Barry, that I am not “pushing” or “peddling” anything at all, let alone “snake oil”.

    You pressed the subject in another thread, on quite a different topic. I responded. You pressed harder. I responded some more.

    Now you start an entire thread. So I respond.

    I am selling nothing. I have frequently stated that I understand how, from a particular philosophical standpoint, the problem is “Hard”. Indeed I’d go further – I’d say that if Chalmers is right, it’s impossible, not Hard.

    And I used to agree with him.

    I then saw it differently, and no longer agree with him. I am asking no-one to agree with me, I am simply attempting, as requested, to explain why I disagree.

    So your allegations are baseless.

    And, moreover, I will point out that far from my “faith in scientism” leading me to “error”, I started from a quite different position, and came to a reasoned conclusion.

    I am not an advocate of “scientism” and it is not a “faith” that I hold. Indeed, until the day I perceived what I consider Chalmers’ error, I was persuaded that our sense of our own consciousness was the best argument in favour of theism.

    And I may have made a mistake about his error. I don’t think so, obviously, because if I thought I was wrong, I would change my mind. But I think he has made a mistake. I think the mistake is quite subtle, and it’s not easy to pinpoint. My best simple version is that it treats consciousness as a state, not as a capacity.

    But no matter. It’s my honest view, and I’m “selling” it to nobody. I’m merely explaining, at your request, why I hold it.

  31.
    Barry Arrington says:

    EL @ 23

    Then trace it.

    I am astonished that you do not realize that my entire OP and the comments afterwards have been an exercise in tracing it.

  32.
    Box says:

    Lizzie:

    Box: By what reason is it justified to term physical processes in the brain “unexecuted decisions” or a “decision-making process”? When studying the brain we don’t see “decisions”, we see chemical stuff.

    Lizzie: Well, no, we don’t see “chemical stuff”, usually, in living brains, (..)
    So yes, we do see decisions (…)

    When studying the brain directly—when a skull is opened and we look inside—what do we see? Decisions or “chemical stuff”?

    Lizzie: I meant, of course, the decisions that people make. Some are indeed invisible (poker players are good at keeping them that way), but most are manifest in action.

    Mind readers aside, no one can see the decisions other people make. We can see the material consequences of a decision made by someone else. But in principle you can only see your own consciousness, own thoughts, own feelings, own decisions and so forth.

  33.

    Barry:

    Unless dualism is true and an immaterial mind exists, in which case it would be the explanation. And why should we rule out dualism a priori?

    I certainly did not. Indeed, dualism was my prior, as I hope I have made clear.

    Note that monism is not incompatible with theism; indeed, it is at least one reason for believing in the Resurrection of the Body.

  34.
    Barry Arrington says:

    Liddle @ 30:

    It’s my honest view, and I’m “selling” it to nobody. I’m merely explaining, at your request, why I hold it.

    I have never suggested otherwise. I really do believe that you believe someone somewhere has provided an explanation for how physical things cause mental things. The point of this exercise is not to question your honesty. The point is to show not only that you are wrong, but that your error is obvious to anyone not wearing materialist blinkers.

    Just as a medieval churchman reached wrong conclusions about whether the earth orbited the sun based on his blinkered insistence on an erroneous interpretation of the Bible, your blinkered insistence on neuroscientism has led you to the equally erroneous conclusion that brain activity is not only necessary but also sufficient to explain consciousness.

  35.

    Box: When studying the brain directly—when a skull is opened and we look inside—what do we see? Decisions or “chemical stuff”?

    Post mortem studies aren’t on living brains. What we can do on living brains sometimes involves opening the skull, but only to implant electrodes, and those will only tell us anything if we are also looking at what the person is doing – deciding.

    Most neuroimaging is in vivo – we don’t look at the brain itself, but at what it’s doing, and we don’t only look at what the brain is doing, we look at what the person is doing. That’s what functional neuroimaging is all about.

    Lizzie: I meant, of course, the decisions that people make. Some are indeed invisible (poker players are good at keeping them that way), but most are manifest in action.

    Mind readers aside, no one can see the decisions other people make. We can see the material consequences of a decision made by someone else. But in principle you can only see your own consciousness, own thoughts, own feelings, own decisions and so forth.

    Sure, but the output of those decisions is, in many cases, rapidly detectable, and that’s how the science is done.

    Like most forms of scientific investigation, we don’t observe the phenomenon directly – we make proxy measures, and those include behavioural outputs (eye movements, button-presses, etc), physiological outputs (measure of arousal, for instance, like skin conductivity, heart rate, and pupil diameter), and self-reports.

  36.
    Box says:

    Raymond Tallis: [Neuroscience] reveals some of the most important conditions that are necessary for behavior and awareness.

    Necessary for behavior may make sense if by behavior is meant “bodily behavior”. As for the questionable hypothesis that the brain is necessary for awareness … well, that is the topic under debate. For one thing, it’s squarely contradicted by near-death experiences (NDEs).

  37.

    Barry:

    Liddle @ 30:

    It’s my honest view, and I’m “selling” it to nobody. I’m merely explaining, at your request, why I hold it.

    I have never suggested otherwise.

    Barry:

    …but I highly recommend the entire article for those seeking an antidote to the neuroscientism snake oil Liddle is peddling.

    Barry:

    There we are again. Liddle is pushing materialist magic

    Let me repeat: I am selling/peddling/pushing nothing, snake oil or otherwise.

    I am explaining to you, in response to your request, why I reached the conclusion I did.

    It was not because of an “a priori” monism – I was a dualist at the time I found myself persuaded by the opposite argument. In fact, it was rather a shock. My world rocked for a bit.

  38.

    Barry:

    Just as a medieval churchman reached wrong conclusions about whether the earth orbited the sun based on his blinkered insistence on an erroneous interpretation of the Bible, your blinkered insistence on neuroscientism has led you to the equally erroneous conclusion that brain activity is not only necessary but also sufficient to explain consciousness.

    Except that the temporal order was reversed. I had no “blinkered insistence on neuroscientism” when I drew the conclusion. I was in the middle of an argument in which I was making the opposite point, i.e. the point made by Tallis, Chalmers, and indeed yourself.

    So your assumption is completely incorrect.

    And I still have no “blinkered insistence on neuroscientism”. I am perfectly open to the possibility that one day someone will persuade me that I was right in the first place. But, obviously, right now, I don’t think so.

  39.
    Popperian says:

    Barry:

    For those interested in an extended account of why Liddle’s comment at 22 and Popperian’s comment at 26 do not come even remotely close to explaining consciousness, I again commend Tallis’s article, which I linked to in 20.

    Did you have a response to my actual comment, Barry? You know, the one I actually wrote?

    For your convenience…

    it seems that objections to emergence confuse a type of explanation with a specific emergent explanation itself. Merely saying “It’s emergent” isn’t an explanation, it’s a classification.

    […]

    As such, it’s unclear why, even if we lack an emergent explanation for how consciousness emerges, this prevents us from saying that any such explanation would itself be at this higher, quasi-autonomous level.

    As such, your argument that emergence is “like magic” isn’t even wrong. It reflects confusion between emergence as a level of explanation and any specific concrete explanation.

    We have a number of hypotheses about specific aspects of consciousness. However, qualia are one aspect that is particularly difficult, because we cannot predict, for example, what it will be like to see the color red. But this doesn’t mean that saying “God wanted it to be like that” adds to the explanation. That’s a form of justificationism, which is bad philosophy.

    Again, what I want from ideas is their content, not their provenance.

  40.
    groovamos says:

    Steven Poole: Few people these days seriously doubt that consciousness arises solely from physical activity inside our skulls.

    Box: The insanity of our days on display.

    Hee hee I love it when we expose these guys. I would just build upon the previous to say

    (groov): The insane arrogance of the self-appointed elite is plainly on display.

    And I write this based on what Poole is actually saying, interpreted with a little help from some bracketed insertions:

    Steven Poole: Few people [worth paying attention to] these days seriously doubt that consciousness arises solely from physical activity inside our skulls.

    – the obvious falsehood being otherwise no secret. Man, I have such fun watching you guys shred the opposition.

  41.
    drc466 says:

    To steal the analogy, here’s my summary of the convo thus far:
    Consciousness in the brain is like a pile of rocks…hanging in mid-air.
    To simply state that “hanging in mid-air is an emergent property of this particular pile of rocks” is equal to saying “it’s magic”.

    Liddle’s response appears to be: Look, we can measure how the rocks move within the floating pile as it makes decisions about where to go. It appears to use “re-entry” to determine its next position in the air, so the existing property of “floating re-entry” may be the key to how it began floating in the first place.

    Popperian’s response: “Emergence” is just a way of categorizing floating rocks as an unusual phenomenon. We can model in a computer where the rocks sit within the floating pile, so the fact that we can’t explain why it is floating is not important. Just accept it!

    Box/BA’s response: Umm, guys – it’s floating. FLOATING! Doesn’t that concern you at all?

    [Edit] (Sorry, didn’t mean to leave RB out. RB’s response: there are floating piles of rocks (humans), non-floating piles of rocks (mice), and just plain rocks. We can measure a lot of statistics on the individual rocks of floating and non-floating piles, and find that they are different. So those differences must explain the floating thing. Somehow. And saying that “the floating doesn’t have a materialistic explanation” doesn’t explain the floating, so emergence automatically wins.)

  42.
    Barry Arrington says:

    Popperian @ 39. You have not been paying attention. “It emerged” is not an explanation at any level of generality unless the proponent explains how, in principle, the system from which the property supposedly emerged can cause the property.
    If the proponent cannot provide such a justification (and in the case of neuroscientism, no proponent has), the explanation is exactly on the same level as saying “It’s magic.” Here is an extended discussion of this that was linked in the OP and which you apparently failed to read:

    http://www.uncommondescent.com.....t-poofery/

    We have a number of hypotheses about specific aspects of consciousness.

    None of which even remotely comes close to explaining how the physical produces the mental.

    And why do you insist on tilting at an explanation that is not under consideration? Your smug dismissal of an alternate explanation that no one is proposing only makes you look stupid. Stop it.

  43.
    Barry Arrington says:

    drc466 @ 41. Classic.

    But you forgot Popperian’s favorite rejoinder: “Saying that God likes floating rocks is no kind of explanation! Never mind that no one has proposed that as an explanation.”

  44.
    Box says:

    If rationality and consciousness are emergent properties of brain states, then brain states are causally primary—are in the driver’s seat.
    If we put blind material forces in the driver’s seat of e.g. reason—as emergentism does—then things don’t make much sense, as Reppert points out.

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    For another thing, if ‘blind’ particles – ‘bereft of overview’, ‘uninterested in truth and logic’ – produce our “rationality”, we don’t have any reason to trust our reason.

    IOW emergentism doesn’t make materialism any less irrational.

  45.
    NetResearchGuy says:

    As a software engineer, I like to think about the problem of consciousness in terms of what it would take to make a conscious software program. In principle, any purely physical system can be simulated on a sufficiently powerful computer, so if conscious minds are physical then they are also computable.

    The argument I have against conscious software is that presumably it resulted from taking a simpler, non-conscious software program and adding some type of code – some sort of machine instruction – that switched its status from non-conscious to conscious. For example, incrementing a variable or storing to a location in memory. The problem is that it’s nonsensical to conceive of a single-line code modification that would have that sort of power.

    More usually, materialists instead claim that consciousness isn’t binary, but a continuum (EL claimed this in another post). OK, that implies that every piece of software is a little bit conscious. Perhaps the program “hello world” must be a tiny bit conscious, and the Windows OS quite a bit more. Again, nonsensical — clearly both of those programs are exactly zero percent conscious, and every software program besides those, as well as the entire collection of software created by the human race as a unit is zero percent conscious. If you believe any of them have fractional consciousness, explain why, and what percentage you would assign.

    EL seems to be arguing that anything that can sense input, identify patterns in the input, and respond to it in a feedback loop is conscious. I could make a software program attached to a camera that would identify faces, and whether they were attractive or not (based on known rules of proportion and symmetry), and connect to a monitor that would produce a frown or smile. EL would claim that program is a little bit conscious, while I would disagree. According to EL, our brain is just that software program with a LOT more patterns and feedback loops, but ultimately just reducible to those patterns and feedback loops. My argument is that no matter how many similar patterns and feedback loops you add to the software program, it’s not conscious. The problem is there is no whole or self to gather all the feedback loops into a single conscious unit. The feedback loop theory is merely a regress and makes zero progress on identifying where the whole or self comes from.
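
The camera-and-monitor program described above can be caricatured in a few lines. The “symmetry” rule and the smile/frown outputs are invented placeholders; the point is only that the entire sense-classify-respond loop is an ordinary deterministic program:

```python
# Toy sense -> classify -> respond loop. The symmetry heuristic and
# the responses are invented placeholders for illustration only.

def classify(face):
    """Crude 'attractiveness' stand-in: score the symmetry of a feature list."""
    left, right = face[:len(face) // 2], face[len(face) // 2:][::-1]
    matches = sum(1 for a, b in zip(left, right) if a == b)
    return "smile" if matches >= len(left) / 2 else "frown"

def respond(faces):
    responses = []
    for face in faces:  # each new input updates the output state
        responses.append(classify(face))
    return responses

print(respond([[1, 2, 2, 1], [1, 2, 3, 4]]))  # ['smile', 'frown']
```

However many such loops are stacked up, each remains a mapping from inputs to outputs, which is the force of the “no whole or self” objection above.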

    As a manager, I’ve interviewed a ton of programmers, and when you ask someone a hard question, some of them make progress toward a solution, and others engage in “hand waving”. Hand waving I define as pretending to solve a problem by explaining how you would handle all the easy aspects of solving the problem, while being completely unaware of the hard aspects of the problem. That’s what I’m seeing in that book on consciousness, EL’s reasoning, and the field of evolution in general: easy aspects of the problem are tackled in ever increasing detail, while hard aspects are completely ignored.

    When I was younger, I was a publicly declared atheist, and thought people were meat computers, and strong AI was possible. So I’m coming from the perspective of someone whose mind was changed, and changed first due to reason, and only later by revelation.

  46. 46
    Popperian says:

    Barry

    You have not been paying attention. “It emerged” is not an explanation at any level of generality unless the proponent explains how, in principle, the system from which the property supposedly emerged can cause the property. See here for an extended discussion.

    I am paying attention, Barry. Nor do I care who is mistaken about emergence.

    Anyone who mistakenly suggested emergence is a concrete explanation, including you, isn’t even wrong. That’s my point. So, it’s not the same as saying “It’s magic”. They are mistaken at a fundamental level.

    Nor does this prevent others who do understand emergence as a classification of explanation from applying it to any potential explanation of consciousness.

    Yes, we do not yet know how to program consciousness. However, this does not mean that we cannot apply other theories to the problem and reach conclusions. For example, the universality of computation tells us that any material object can be simulated to an arbitrary degree of accuracy. This entails that genuine artificial intelligence is possible given the laws of physics. Computation, quantum or otherwise, is an emergent explanation. As such, should we make the philosophical breakthrough necessary to program it, consciousness would be an emergent explanation as well.

    None of that entails “Magic” or “immaterialism” any more than our explanation for how we can build universal computers out of cogs, transistors or qubits, which resolves at a higher, quasi-autonomous level.

  47. 47
    eigenstate says:

    @Arrington #20

    In response to the scientism on display at Liddle’s comment at 17, I commend to our readers clinical neuroscientist Raymond Tallis’ What Neuroscience Cannot Tell Us About Ourselves (see here).

    I will excerpt only a couple of snippets below, but I highly recommend the entire article for those seeking an antidote to the neuroscientism snake oil Liddle is peddling.

    [Neuroscience] reveals some of the most important conditions that are necessary for behavior and awareness. What neuroscience does not do, however, is provide a satisfactory account of the conditions that are sufficient for behavior and awareness.

    The pervasive yet mistaken idea that neuroscience does fully account for awareness and behavior is neuroscientism, an exercise in science-based faith.

    This is a lazy reading of Tallis’ words at best, and deceptive if it’s not simply lazy. Tallis does not find “satisfactory” accounts of the neurophysiological basis for consciousness. The putative implication here from you via Tallis is that the science is “unsatisfactory”. Notwithstanding that science is never fully settled or finished for any theory, and is thus “permanently unsatisfactory” in the sense that it bears continual refinement and improvement, what Tallis complains about has nothing to do with the scientific depth or content.

    Later in the article he says:

    The failure to distinguish consciousness from neural activity corrodes our self-understanding in two significant ways. If we are just our brains, and our brains are just evolved organs designed to optimize our odds of survival — or, more precisely, to maximize the replicative potential of the genetic material for which we are the vehicle — then we are merely beasts like any other, equally beholden as apes and centipedes to biological drives. Similarly, if we are just our brains, and our brains are just material objects, then we, and our lives, are merely way stations in the great causal net that is the universe, stretching from the Big Bang to the Big Crunch.

    His dissatisfaction is not with the merits of the science, as you imply here, wittingly or otherwise, but obtains in his dissatisfaction with the consequences of the scientific implications. That is to say, he is committing the argumentum ad consequentiam fallacy, here.

    Just to make sure we know he’s making this error, in the next sentence he says:

    Most of those who subscribe to such “neuroevolutionary” accounts of humanity don’t recognize these consequences.

    Whoops. So much for your critique from Tallis on the merits of the science. So what if you or Tallis have issues with the implications? Not liking the consequences doesn’t make the science false or less credible in any way. Anyway, readers who suppose there is a “scientifically unsatisfactory” problem from Tallis here would be mistaken. Read the whole article, and it’s your garden-variety crop of complaints that the science we do have and are working on just can’t succeed, because superstitions and intuitions about the dualist self mean it just can’t. The science is irrelevant in this view — it’s impossible in principle to Tallis. To cite just one of the many glaring appeals to superstition in the article:

    But nerve impulses do not have any appearance in themselves; they require a conscious subject observing them to appear — and it is irrelevant that the observation is highly mediated through instrumentation. Like all material items, nerve impulses lack appearances absent an observer.

    Liddle: “So yes, we do see decisions”

    This statement is absurd. We do not “see” decisions. We see material events; not mental ones.

    This begs the question, Barry. On the scientific view, mental events *are* material events. A decision is a completely physical process. If you think there’s more, why? I understand your intuitions suggest there’s more and you’ve got a reflex that responds to ideas that run counter to your intuitions as patently “absurd”, but scientifically, what’s the problem? Your (or Tallis’) dissatisfaction adds nothing to any model or explanation for the phenomena, nor does it challenge the existing models at all. “That’s absurd” is not a scientific objection. If science could only proceed with the countenance of your intuitions, we’d be back in the Dark Ages still.

    There we are again. Liddle is pushing materialist magic.

    You are equivocating, here, Barry. Remember your priority on plain, everyday, common definitions?
    Webster:

    1 a : the use of means (as charms or spells) believed to have supernatural power over natural forces
    b : magic rites or incantations
    2 a : an extraordinary power or influence seemingly from a supernatural source
    b : something that seems to cast a spell : enchantment
    3 : the art of producing illusions by sleight of hand

    You can deride a materialist explanation all you like, but right or wrong, one thing we can surely say it is not, is “magic”. It’s “anti-magic” — it is impersonal, no sorcerer, no conjuring or conjurer, nothing supernatural. Correct or no, these models are mundane, mechanical, the antithesis of magic. Words mean what we agree they should mean, but you’re using an epithet here that doesn’t apply to scientific models. Consider turning water into wine, though, say at a wedding. *That* would be “magic” per common usage.

  48. 48
    Popperian says:

    But you forgot Popperian’s favorite rejoinder: “Saying that God likes floating rocks is no kind of explanation! Never mind that no one has proposed that as an explanation.”

    If God doesn’t play a hard to vary, functional role, then how is that not an equivalent statement? You’re left with God wanting it that way, which doesn’t actually add to the explanation. Rather, God plays the role of a justifier and authoritative source of knowledge.

    But that’s a specific philosophical view about knowledge. Theism is a specific case of justificationism. As such, your argument is narrow in scope. It does not appeal to me because what I want from ideas is their content, not their provenance.

  49. 49

    I should say, in reference to NetResearchGuy’s post, that of course a corollary to the view that consciousness is an emergent property (or, as I would prefer, capacity) of material configurations is that, in theory at least, a human-made robot could be conscious.

    I’ll just make three points about that:

    Firstly, I say “robot” advisedly, because it is my view that the origin of consciousness lies in the capacity to interact with, and navigate, the world. I think it is no coincidence that the organisms we attribute consciousness to tend to be able to move, namely animals, while we don’t attribute consciousness so readily to plants. Except, interestingly, perhaps, Venus Fly Traps. They are seriously spooky. So if there is a candidate for AI, I think it will be in the field of robotics, specifically, because if the thing can move it can control its own input, as we do (e.g. when something catches our attention, we move our eyes, or reach out to touch it, thus bringing relevant sensory input online).

    Secondly, I think it is highly unlikely that we will ever actually do it unless, of course, we find a way of getting it to evolve, i.e. by using a GA. That is because I think designing a neural system capable of anything we’d want to call human-like consciousness (as opposed to some dim eternal-present awareness) would need to be, firstly, vastly complex, and secondly, “Darwinian” in its actual operation. There is a term, “Neural Darwinism”, to describe how brain networks are formed by a dynamic process that relies on the essentially Darwinian principle of Donald Hebb’s “what fires together wires together” (and what doesn’t tends to unwire), in which the connectivity is constantly changing. And I think the best way of designing a Hebbian system would be to get it to evolve.
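    The Hebbian principle Liddle invokes can be written down in one line: strengthen a connection when its two units are active together, and let it decay otherwise. This is a minimal sketch; the learning and decay rates are arbitrary illustrative values, not anything drawn from Edelman’s theory:

```python
# Minimal sketch of a Hebbian update rule: "what fires together wires
# together" (and what doesn't, slowly unwires). The rate constants are
# arbitrary, chosen only for illustration.

def hebbian_step(w, pre, post, lr=0.1, decay=0.02):
    """One update of connection weight `w` for binary pre/post activity."""
    if pre and post:
        return w + lr          # co-activation strengthens the connection
    return w * (1.0 - decay)   # otherwise the connection slowly decays

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_step(w, pre, post)
print(round(w, 4))  # grew on the two co-activations, then decayed twice
```

    The “Darwinian” flavor comes from the competition this sets up: connections that are not reinforced are gradually selected away.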

    Thirdly, on the other hand, we can already model a great deal of human cognition, in silico, and design software that will do just what NetResearchGuy says – recognise faces and respond to them, interpret verbal instructions, learn, decide, solve problems, navigate dynamic environments, seek relevant information, decipher facial expressions, etc. And a lot of this is already proving useful, practically, as well as informing us about the kinds of mechanisms that are likely to subserve these cognitions in human brains (the information exchange goes both ways there, of course – vision science has contributed to image transmission software, and the reverse). It doesn’t make any of these machines conscious, because, as Tononi and Edelman lay out, it’s the coordination and re-entrant looping at many different scales that they, and I, think is crucial. But we have some pretty clever cognitive modules – right there in our phones already!

  50. 50
    groovamos says:

    eigenstate: You are equivocating, here, Barry. Remember you priority on plain, everyday, common definitions?

    Thank you for providing the trapdoor for what Barry is saying

    Webster: 2b : something that seems to cast a spell : enchantment

    I worked for more than two years at the UT Austin electronics design shop, and of course almost everyone around that department was a materialist (except for one elderly faculty member near retirement). I have seen the following scenario more than once: a self-assured guy with multiple degrees, turned to stuttering mush by a beautiful, poised young student. I, as a 30-year-old graduate student in engineering, took a 20-year-old, athletic, buxom lady to a party, and one of these guys made an absolute fool of himself, staring at her bust and repeatedly commenting on it. My date and I were so pissed that I confronted the guy the following Monday in private.

    This was one of those people who had been conditioned in the echo chamber of materialism. These young women, with not nearly the experience, education or “authority” of these materialists, find themselves stunned at such lack of self-assuredness and, face it, self-control. These materialists, if they were really just material, become spellbound and lose it instead of their brains behaving as good material brains should. And they don’t know why, and neither does eigenstate. You material guys, so-designated, have no clue as to why these masters of materialism blow it so badly and become neophytes in certain situations. And you will never understand it from your worldview. Enchantment is just too hard for some people to experience when in the grip of a false understanding of self (materialism). Materialism is in most cases an unconscious choice brought on by personality issues, when the echo chamber opens its door.

  51. 51
    Carpathian says:

    groovamos:

    These materialists, if they were really just material, lose it instead of their brains behaving as good material brains should. And they don’t know why and neither does eigenstate. You material guys, so-designated, have no clue as to why these masters of materialism blow it so bad and become neophytes in certain situations. And you will never understand it from your worldview.

    Jimmy Swaggart and Jimmy Bakker were not materialists and yet they also had little control, despite their worldview.

  52. 52
    phoodoo says:

    Liddle @ 49

    Wait, you mean that the only organisms we know of that are conscious move, so movement must be a necessary ingredient of consciousness?

    I think all of the organisms that we know of to be conscious also have a round head. Would a round head be necessary for consciousness then? What about hair? Most things that are conscious seem to have hair.

    And what about people that are paralyzed, are they conscious?

    Are you aware that many of our thoughts come from our stomach and the bacteria inside it? So I guess if you want to make a computer with consciousness, you will have to get it to eat first.

    I think you were better off when you stuck with, we know nothing about how or why something is conscious, except for something about emergence and magic.

  53. 53

    No, phoodoo. I think that consciousness evolved in organisms that move, because they need to be aware of their surroundings and of where they are in relation to other things, including predators and prey.

    I don’t think you have to be able to move to be conscious – that is clearly not the case. I do think that the capacity to be conscious evolved in tandem with the capacity to move around.

  54. 54
    Popperian says:

    Elizabeth,

    You might find this TED Talk by self-proclaimed “movement chauvinist” Daniel Wolpert interesting.

    Neuroscientist Daniel Wolpert starts from a surprising premise: the brain evolved, not to think or feel, but to control movement. In this entertaining, data-rich talk he gives us a glimpse into how the brain creates the grace and agility of human motion.

  55. 55

    Popperian: Well, I have to confess to being a Wolpert fan!

    But I hadn’t realised he’d come to the same conclusion. That’s cool.

  56. 56
    Joe says:

    Elizabeth:

    I think that consciousness evolved..

    Evolved how- by design or via accumulations of genetic accidents, errors and mistakes?

    in organisms that move, because they need to be aware of their surroundings and of where they are in relation to other things, including predators and prey.

    The evidence says that organisms that move in any respect are the result of intelligent design.

    The existence of neurons alone is evidence for ID. Do you really think neurons evolved via differing accumulations of genetic accidents, errors and mistakes? Model it- find some way to operationalize the concept. And then get back to us.

  57. 57
    Barry Arrington says:

    eigenstate @ 47:

    Barry quotes Tallis

    eigenstate responds: “This is lazy reading of Tallis’ words here at best.”

    E, let me clue you in. A “reading” of someone’s work is an interpretation of their work. Quoting their work is not a “reading” of their work. It is a quotation of their work.

    Now, what you did after making this idiotic assertion would be a “reading” of Tallis’ work, a reading, BTW, that misses him at every level.

    You really should think things through before you post silly crap on the internet.

  58. 58
    Barry Arrington says:

    AS @ 60.

    My thesis is that neuroscience has not shown how physical things result in mental things. EL has argued that it has. She has lost that argument, badly. The hard problem has not been solved. So how, exactly, was what I said “silly”?

  59. 59
    Carpathian says:

    Barry Arrington:

    My thesis is that neuroscience has not shown how physical things result in mental things.

    A patient had stated to his doctor that his parents had been replaced with exact copies.

    He had been in a car accident in which one part of his brain that coupled emotional memories with visual ones had been damaged.

    Thus he recognized his parents but was missing the emotional component and determined they must be replicas.

    This is clearly a case of a physical change in the brain modifying a mental “thing”.

    If our sense of consciousness was not material, why the change in feeling to someone we recognized and loved?

  60. 60
    Barry Arrington says:

    Carpathian @ 62. Your question is answered in Tallis’ article that I linked to in 20. Go read the article and if you don’t understand the explanation we can discuss it then.

  61. 61
    Barry Arrington says:

    AS @ 55:

    So there’s no point in me suggesting Michael Graziano’s Consciousness and the Social Brain, I guess.

    Yes, there is no point. When an idiot says he is not subjectively self-aware, it makes no difference how many letters are behind his name. He is still an idiot.

  62. 62
    eigenstate says:

    @Barry,

    Now, what you did after making this idiotic assertion would be a “reading,” of Tallis work, a reading, BTW, that misses him at every level.

    You’re aware that the entire text of his article is available just a click away, for anyone to read and judge for themselves, right?

    Do you maintain, then, that Tallis’ objections are scientific objections? If so, I can’t find them, save for the “unsatisfactoriness” that inheres in all science — our models are never complete or exhaustive or perfect. Or, do you understand Tallis’ objections to be located around the problematic consequences he identifies if the scientific understanding *is* correct, and the “absurdity” he associates with that? Those are definitely solid sources of dissatisfaction with the science, but they are not *scientific* dissatisfactions.

    Which seemed to be the cargo you were hoping that citation would carry, namely that “clinical neuroscientist Tallis” finds the project “unsatisfactory” *as* a scientist, on scientific grounds. Tallis’ article does not bear this out, and instead invokes, for lack of a better term, “Barryisms”: complaints that lament the *implications* of said science, rather than faulting the science itself.

    I believe I can provide a rich set of quotes from the article which support my reading, and you are unable to supply *any* that support the “unsatisfactory” nature of neuroscience qua science, rather than just the continuing source of consternation and cognitive dissonance for your worldview.

  63. 63
    Barry Arrington says:

    AS @ 64:

    For one, I’ve not seen Elizabeth Liddle making the claim “physical things result in mental things” here or anywhere else.

    Then you have not been paying attention. That is what the entire OP and following thread has been about. Try to do better.

    For another, “the hard problem” is a human construct . . .

    Translation from the materialist into English:

    The fact that I have subjective self-awareness cannot be accounted for in my metaphysics. Therefore, rather than abandon my incoherent metaphysics, I will pretend the fact does not exist.

    AS, when it boils down to you saying “don’t bother me with the facts,” there really is not much point in continuing the discussion with you. Bye bye.

  64. 64
    Barry Arrington says:

    E @ 66:

    You’re aware that the entire text of his article is available just a click away, for anyone to read and judge for themselves, right?

    Yes, that is why I provided a link to the article. I am not going to go on arguing with you about what the article says. I will let the readers click that link and decide for themselves.

    Besides, you did not address the thrust of my comment, which was that you were stupid to suggest that I was interpreting Tallis when I was merely quoting him. BTW, the appropriate response to that would be: “Yeah, you’re right; my bad.”

  65. 65

    Barry:

    Unless dualism is true and an immaterial mind exists, in which case it would be the explanation.

    Not in any sense that meets the requirement for explanation as articulated by Nagel, and advocated in your OP: “Merely to identify a cause [of consciousness] is not to provide a significant explanation, without some understanding of why the cause produces the effect.”

    How do immaterial minds create consciousness? You’ve no idea. How do immaterial minds interact with material objects (like brains) and impact their functioning? You’ve not the slightest. How do material brains interact with immaterial minds? No clue. What determines whether an object or organism has an immaterial mind? You’ll pass on that. Why can’t rocks have immaterial minds? Comments on this thread are closed.

  66. 66
    Box says:

    Aurelio Smith: Elizabeth Liddle explains to Barry (using seemingly endless patience 🙂 )

    Without any reason whatsoever, if I may say so. Whatever Liddle has “explained” thus far does not address the problem presented in the OP:
    emergentism is a mere appeal to magic and no explanation at all.

    Barry:

    To count as an explanation, one must also give some understanding of why the putative cause produces the effect.

  67. 67
    Barry Arrington says:

    RB @ 69:

    I realize you are desperate to change the subject away from the poverty of materialist explanations for consciousness. If I were on your side I would be too.

  68. 68
    drc466 says:

    Floating Rock Analogy Update:

    Box: Rocks can’t convince other rocks to float
    NRG: We’re really good at making piles of rocks. None of them float. Or almost float. Or begin to float
    Popperian: I like my definition of emergence better
    es: I don’t understand quotes. And as long as I don’t use the word “magic”, you’re not allowed to call my materialist explanation “magic”
    Popperian: Non-materialistic explanations aren’t materialistic, therefore they are not explanations.
    EL: Contra NRG, I have Faith that we can make rocks float. Or begin to float. Someday. Just-so Story #1
    groovamos: Materialist explanations for rational rock movement can’t explain why the floating rocks sometimes do loop-de-loops
    Carpathian: (whooshing sound as rocks, and g’s point, go over his head)
    phoodoo points out lots of stuff moves, and the floating rocks do more than move
    EL responds with Just-So Story #2
    AS brings Bucket-o-Condescension snack for everyone, and reference to Floating-Rock Cult.
    es character-assassinates floating rock skeptic
    BA: They’re rocks. That float. Explain the float. Buh-bye.

  69. 69
    Popperian says:

    Barry:

    E, let me clue you in. A “reading” of someone’s work is an interpretation of their work. Quoting their work is not a “reading” of their work. It is a quotation of their work.

    After you quote someone’s work, it still needs to be interpreted. Words are ultimately undefined. Nor can we extrapolate their meaning in a mechanical sense from any quote. The takeaway is that, as Popper put it…

    “Always remember that it is impossible to speak in such a way that you cannot be misunderstood: there will always be some who misunderstand[s] you.”

    Barry:

    Now, what you did after making this idiotic assertion would be a “reading,” of Tallis work, a reading, BTW, that misses him at every level.

    Merely defining yourself as being after the quotation but before the interpretation doesn’t mean your take away isn’t actually an interpretation.

  70. 70
    Barry Arrington says:

    Box @ 70.

    Did you notice this in Liddle’s comment at 4:

    But the reason I find it persuasive is not the neuroscience per se (though the theory would fall down if their model was not supported by data), but because, philosophically, I am of the view that consciousness is the kind of thing that can be explained by re-entry, i.e. that it requires an object.

    She continues to act as if there were some explanation in the book about how physical things result in mental things. And she continues to act as if the reason I reject that “explanation” is because I am personally credulous.

    She mulishly ignores the fact that there is NO EXPLANATION.

    And here’s the kicker. I am fairly sure she actually believes what she says. She is simply unable to grasp even what her fellow atheist Poole writes in The Guardian:

    Why does matter arranged in this way, and not others, give rise to minds? This is a question that Gerard Edelman and Giulio Tononi signally fail to answer, despite the grand promise of their subtitle.

    Notice how none of her comments mention Poole.

  71. 71

    Aurelio Smith:

    So to talk of solving “the hard problem” is like trying to discover phlogiston.

    Yes, nice parallel. Except phlogiston turned out not to exist, and consciousness does.

    But the point is that what was really going on with phlogiston (which was turning up with all kinds of weird properties like sometimes having negative mass) is that something very real was happening, but “phlogiston” was a poor formulation of it.

    For me, the answer is to move from “Consciousness-as-state” to “Consciousness-as-capacity”.

    Turn it from a noun to a verb, as I said: think of it as something we do (like oxidise or reduce) not something we are or have (like phlogiston).

    If I’ve got my Lavoisier right, which I may not!

  72. 72
    Carpathian says:

    Barry Arrington:

    From the article:
    The inward causal path does not deliver your awareness of the glass as an item explicitly separate from you — as over there with respect to yourself, who is over here.
    ………………………………………..

    For in either case, while appearances are “nothing but” neural activity, we still must be able to explain why some neural activity leads to the sensation (or illusion) of appearance while other neural activity does not; and we must be able to distinguish between the two by looking only at the material neurons.
    ………………………………………..

    Within perception, each of the senses of vision, hearing, smell, and so forth has different pathways and destinations. And within, say, visual perception, different parts of the brain are supposed to be responsible for receiving the color, shape, distance, classification, purpose, and emotional significance of seen objects. When, however, I see my red hat on the table, over there, and see that it is squashed, and feel cross about it, while I hear you laughing, and I recognize the laughter as yours, and I am upset, and I note that the taxi I have ordered has arrived so that I can catch the train that I am aware I must not miss — when all of these things occur in my consciousness at once, many things that are kept apart must somehow be brought together. There is no model of such synthesis in the brain. This is the so-called “binding” problem.

    This is exactly what was demonstrated by the patient.

    A purely physical, localized damage to the brain caused the patient to determine his parents were no longer the people he loved. His recognition of his parents was there, and all the good memories he had of them, and yet he no longer had an emotional attachment.

    If this “mental thing” did not come from the brain, from where did it come?

    From outside of the brain in a place that was not physically affected by the accident?

  73. 73
    Barry Arrington says:

    EL @ 75.

    Yes, nice parallel. Except phlogiston turned out not to exist, and consciousness does.

    Yes, AS. Great point except that the point you are trying to make is totally wrong.

    EL: Use as many verbs as you like. Hopefully when you string enough verbs together you will have a coherent theory of how chemicals can have a rich inner life. I won’t be holding my breath.

  74. 74
    Carpathian says:

    Elizabeth Liddle:

    Turn it from a noun to a verb, as I said: think of it as something we do (like oxidise or reduce) not something we are or have (like phlogiston).

    Yes. “Consciousness” can be seen as something we “do” as opposed to something we have.

  75. 75
    Mung says:

    Perhaps the program “hello world” must be a tiny bit conscious…

    Indeed. How else would it know there’s a world out there to say hello to?

  76. 76

    Barry

    EL: Use as many verbs as you like. Hopefully when you string enough verbs together you will have a coherent theory of how chemicals can have a rich inner life. I won’t be holding my breath.

    I don’t need to string many verbs together, Barry. I’ve strung all the verbs I need, which was only one, the verb to be.

    And I don’t even have to mangle English usage.

    By considering consciousness as being a matter of what we are conscious OF, we can stop regarding it as a property that some configurations of matter have, and some (possibly identical) configurations somehow don’t have (as in philosophical zombies), and consider it as something that some configurations of matter can do by virtue of their configuration, namely be conscious of things.

    And viewed in that way, Tononi and Edelman provide a good (and well supported) approach to precisely what a configuration of matter would have to be like in order for it to be conscious of things, including the capacity to receive signals from those things, and the capacity to respond to those signals in the light of input about its present state/location/needs.

    That is a perfectly tractable problem. It is no longer Hard.

  77. 77

    Exactly, Aurelio. There is much less confusion about the word “aware” and even “attention”, and we know lots about how those work.

    The question then becomes is “consciousness” something different from either? I suggest not.

  78. 78
    Barry Arrington says:

    EL @ 82: Now you have gone into full “blah blah blah blah” mode. I have learned that when you do that and steadfastly refuse to even address, far less answer, the question under discussion, there is no point in continuing the conversation. Peace.

  79. 79

    Barry, with respect, if you assume that whenever you, Barry, either disagree with something, or find it confusing, the other person is “posting sewage” or has “gone into full blah blah blah blah mode”, then you have locked yourself into a position from which it is impossible to emerge, even if you turn out to be wrong.

    I am addressing the question under discussion. I am pointing out that, as happens quite often in such discussion, the problem may lie in the nature of the question. The classic example is “have you stopped beating your wife?” which presupposes that the person being questioned has a wife, and beats her.

    There is no coherent answer to that question that does not itself bring into discussion the questioner’s premise.

    I am disputing the premises of the question, namely, that consciousness is something you “are” or “have”, and that the task of anyone who claims that it can arise from material bodies must account for how that matter either (in the first case) transforms into consciousness, or (in the second case) acquires the stuff.

    This is, as you say, an impossible task. So either we can conclude, as you do, that consciousness does not arise from matter, or, we can conclude that there is something amiss with the question.

    And I propose that what is amiss with the question is that it presupposes that consciousness is a thing or a state.

    I propose that it is neither: rather that it is a capacity – something that certain configurations of matter can do, and that when they do it, there is always something they are conscious of.

    But every time I try to explain this, instead of rebutting it by saying: oh, no, consciousness isn’t that, it is this, and providing an argument to support your premise, you simply tell me that I am posting blah.

    Well, I am, from that PoV, just as I would post “blah” if you asked me whether I had stopped beating my wife. “I haven’t got a wife” I would say; “and I don’t even beat my husband”. “You are denying the existence of your wife” you would retort; “you can string all the verbs together you want, but you still can’t tell me whether or not you are still beating your wife”.

    Stop assuming my words don’t make sense – try reading them, for meaning, and working out where you disagree.

    You may persuade me that my premise is wrong; on the other hand, you open yourself to the possibility that my premise is valid.

  80. 80
    Barry Arrington says:

    AS @ 86. Congratulations. Your comment is pristine in its lack of substance. It is some trick to type that many words without actually saying anything.

  81. 81
    Mung says:

    *Aurelio reminds himself he is a pest on this blog and bites tongue (hard)*

  82. 82
    eigenstate says:

    @Barry,

    Besides, you did not address the thrust of my comment, which was that you were stupid to suggest that I was interpreting Tallis when I was merely quoting him. BTW, the appropriate response to that would be: “Yeah, you’re right; my bad.”

    It would be stupid to suppose you were “merely quoting” Tallis, without some basis or reason for doing so. If you read the article — and it’s less clear as you go on here that you actually did — you were necessarily interpreting what he said. And something in your interpretation of his words was consonant with your thinking on the subject, which is why you went to the trouble of providing the quote in the first place.

    Do I have that right, or do you insist that your “merely quoting” was not related to any rhetorical or illustrative point for your post?

  83. 83
    Barry Arrington says:

    EL @ 85:

    I am addressing the question under discussion.

    Nonsense. The question under discussion is how chemicals become conscious. You have steadfastly avoided that question. Do you really think that we have not noticed that?

    I am disputing the premises of the question, namely, that consciousness is something you “are” or “have”, and that the task of anyone who claims that it can arise from material bodies must account for how that matter either (in the first case) transforms into consciousness, or (in the second case) acquires the stuff.

    Yes, that is exactly what a materialist must demonstrate.

    This is, as you say, an impossible task.

    Well, at least we agree on something.

    I propose that it is neither: rather that it is a capacity – something that certain configurations of matter can do, and that when they do it, there is always something they are conscious of.

    You seem to find comfort in linguistic dodges to problems that are intractable on materialist premises. That is why, I suppose, you are so prone to resort to blah blah blah.

    You seem to believe that saying that certain configurations of matter can “do” something somehow advances the ball. It does not. You have merely pushed the question back one step. And that next step is: “What is it about this state of matter that gives it the capacity to do?” And you are right back to where we began.

    I know you will not agree with me. I take comfort in the fact that I am obviously correct when I say that “chemicals do not have a rich inner life.”

  84. 84
    Barry Arrington says:

    EL @ 85 cont’d

    Stop assuming my words don’t make sense – try reading them, for meaning, and working out where you disagree.

    Translation: You’re not smart enough to understand me. Try harder.

    Which pretty much every sophist has said since about 450 BC.

  85. 85

    I am addressing the question under discussion.

    Nonsense. The question under discussion is how chemicals become conscious. You have steadfastly avoided that question. Do you really think that we have not noticed that?

    Because I don’t think that “chemicals” “become” something we call “conscious”. I don’t think that “conscious” is a state that chemicals can take. I don’t think that consciousness is a thing that chemicals can transform themselves into.

    I think there is a problem with the question, just as there is a problem with the question “have you stopped beating your wife?” when addressed to someone who has no wife or doesn’t beat one. Other examples are “how many angels can dance on the head of a pin?” or, as asked by my son: “how does a tornado see where to suck?”

    They raise good questions, but the questions they raise turn out not to be the actual question asked.

    And that is why I am trying to explain what I think the more useful (and answerable) question to ask is, namely: “how is it that certain configurations of matter are able to be conscious of things?”

    And that one I have attempted to answer, many times, and it is the question I suggest is addressed by Tononi and Edelman.

    If you think it is an invalid, or mistaken, version of your question, please explain why. But please stop accusing me of not addressing the one you asked directly, because to do so would be tantamount to accepting its underlying premises, which I think are wrong.

    I am disputing the premises of the question, namely, that consciousness is something you “are” or “have”, and that the task of anyone who claims that it can arise from material bodies must account for how that matter either (in the first case) transforms into consciousness, or (in the second case) acquires the stuff.

    Yes, that is exactly what a materialist must demonstrate.

    This is, as you say, an impossible task.

    Well, at least we agree on something.

    Right. Or, this “materialist” may consider that the problem lies in the conceptualisation that underlies the form of the question. As I have tried to explain.

    I propose that it is neither: rather that it is a capacity – something that certain configurations of matter can do, and that when they do it, there is always something they are conscious of.

    You seem to find comfort in linguistic dodges to problems that are intractable on materialist premises. That is why you are so prone to resort to blah blah blah.

    It is not a “linguistic dodge” to try to express a question in a way that may deliver answers. When Lavoisier’s predecessors asked “what is the nature of this element, phlogiston, that is released when substances burn?” they could not find a sensible answer. It was resistant to enquiry. Sometimes it had positive mass, sometimes negative. The answer turned out to be to ask a question that was based on a different premise: “what is it that changes when a substance burns?” That was not a “linguistic dodge” and the answer was not “blah blah blah”, even though when Lavoisier started babbling on about hydrogen and oxygen and water not being an element, many might have thought that’s exactly what he was doing. Maybe that’s why they sent him to the guillotine.

    You seem to believe that saying that certain configurations of matter can “do” something somehow advances the ball. It does not. You have merely pushed the question back one step. And that next step is: “What is it about this state of matter that gives it the capacity to do?” And you are right back to where we began.

    Yes, it most certainly does advance the ball. Or rather, it gets us to the starting blocks (to change the sporting metaphor). It gets us on to the “right track”, and that is where Tononi and Edelman begin. And it also reveals that much of the work is already done, because, now that we are on “the right track”, we can see that the entire neuroscience literature (stretching back a century or so) on attention (starting with James) and awareness is all absolutely pertinent. It tells us that we need to look at perception and action, and the physical components that underlie these – the nerves that serve our muscles, the sensory organs that bring in data from the outside world.

    It also brings in both “feedback” and “feedforward” models of motor control (see Danny Wolpert, referenced above) and the literature (including lots of experimental literature) on how we model ourselves in space, and predict our future location in space and the position of our limbs, and the resistance they are about to encounter.

    And once you have the notion of the “forward model” – a dynamic system for modelling yourself in the future, and comparing it to the present, then you have precisely what you need for a system that models itself as an agent able to respond and decide, i.e. aware of her environment, and able to make decisions about how she will navigate it, able to remember it, able to change it, able to milk it for information that will inform her future goals. In other words, be conscious of herself as a being in the world.
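
    The “forward model” idea Liddle invokes above can be illustrated with a toy predict-and-correct loop. This is a minimal sketch under my own assumptions, not Wolpert’s actual model; all function names and parameter values here are hypothetical.

```python
# Toy sketch of a "forward model" control loop (hypothetical names and
# values, not Wolpert's actual equations): the system predicts its own
# next state from its current estimate and the motor command it is
# about to issue, then corrects that estimate with sensed feedback.

def predict_next(state, command, dt=0.1):
    """Forward model: predicted next (position, velocity) given a command."""
    position, velocity = state
    velocity = velocity + command * dt      # treat the command as an acceleration
    position = position + velocity * dt
    return (position, velocity)

def update_estimate(predicted, sensed, gain=0.5):
    """Blend the prediction with (possibly noisy, delayed) sensory feedback."""
    pos_pred, vel = predicted
    pos_est = pos_pred + gain * (sensed - pos_pred)  # simple correction step
    return (pos_est, vel)

# One control cycle: predict where the limb will be, then reconcile
# the prediction with what is actually sensed.
state = (0.0, 0.0)
predicted = predict_next(state, command=1.0)
state = update_estimate(predicted, sensed=0.02)
```

    A real forward model involves learned dynamics and sensory delays; the point of the toy is only the structure: predict your own next state, then reconcile the prediction with feedback.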

    I know you will not agree with me. I take comfort in the fact that I am obviously correct when I say that “chemicals do not have a rich inner life.”

    No, they do not. But people do.

  86. 86

    Translation: You’re not smart enough to understand me. Try harder.

    Which pretty much every sophist has said since about 450 BC.

    And sometimes they have been right.

    Except that it isn’t that I think you aren’t smart enough, Barry. I think you aren’t willing.

    So yes, try harder. Or at least leave open the possibility that the problem is not that I am talking “blah” but that somehow, for reasons that may be the fault of neither of us, we have failed to communicate.

  87. 87
    Mung says:

    Because I don’t think that “chemicals” “become” something we call “conscious”. I don’t think that “conscious” is a state that chemicals can take. I don’t think that consciousness is a thing that chemicals can transform themselves into.

    Are materialists being consistent? Do they talk this way about everything, or just consciousness?

  88. 88
    Joe says:

    Elizabeth:

    And that is why I am trying to explain what I think the more useful (and answerable) question to ask is, namely: “how is it that certain configurations of matter are able to be conscious of things?”

    Intelligent Design. 😎

  89. 89
    Barry Arrington says:

    Because I don’t think that “chemicals” “become” something we call “conscious”. I don’t think that “conscious” is a state that chemicals can take. I don’t think that consciousness is a thing that chemicals can transform themselves into.

    I think there is a problem with the question, just as there is a problem with the question “have you stopped beating your wife?” when addressed to someone who has no wife or doesn’t beat one.

    No, Liddle. You don’t get to avoid the problem by linguistic fiat. The materialist says that STEM (space, time, energy, matter) is all there is. It follows that everything must be reducible to STEM (because it exhausts all possibilities).

    You and I know beyond the slightest doubt that consciousness exists. We also agree that chemicals cannot become conscious, that the phrase “group of subjectively self-aware molecules” is meaningless.

    Now let us consider two questions:

    1. How do the electro-chemical processes of the brain give rise to subjective self-awareness?
    2. Have you stopped beating your wife? (addressed to someone who has no wife or does not beat their wife)

    You say that both questions are equally meaningless. Nonsense. The first question is not meaningless, because consciousness does exist and all the materialist has available to account for it is STEM (to carry the analogy further: there is a wife and someone is beating her).

    John Searle writes: “where consciousness is concerned, the existence of the appearance is the reality.” You try to get around that by denying the reality. Denying reality is no way to live your life, Elizabeth.

    You keep coming back to this question as if it makes a difference: “how is it that certain configurations of matter are able to be conscious of things?” The obvious answer to that question is “Because they have the capacity for subjective self-awareness, intentionality, subject-object duality and qualia.” Which leads to this question: How can chemicals have the capacity for subjective self-awareness, intentionality, subject-object duality and qualia? It amazes me that you think pushing the question back one degree of generality accomplishes anything whatsoever.

    It is not a “linguistic dodge” to try to express a question in a way that may deliver answers

    Just so. But it is a linguistic dodge to try to express a question in a way that allows one to avoid addressing the fundamental issue, which is what you’ve done.

    And once you have the notion of the “forward model” – a dynamic system for modelling yourself in the future, and comparing it to the present, then you have precisely what you need for a system that models itself as an agent able to respond and decide . . .

    You see what you’ve done there? You’ve smuggled. Tallis again:

    The belief among neurophilosophers that the brain, a material object, can generate tensed time is one among many manifestations of the insincerity of their materialism. As we have seen, under cover of hard-line materialism, they borrow consciousness from elsewhere, smuggling it into, or presupposing it in, their descriptions of brain activity. This ploy is facilitated by a mode of speaking which I call “thinking by transferred epithet,” in which mental properties are ascribed to the brain or to parts of the brain (frequently very tiny parts, even individual neurons), which are credited with “signaling,” and often very complex acts such as “rewarding,” “informing,” and so forth. The use of transferred epithets is the linguistic symptom of what Oxford philosopher P.M.S. Hacker and University of Sydney neuroscientist M.R. Bennett described, in their 2003 book Philosophical Foundations of Neuroscience, as the “mereological fallacy”: ascribing to parts properties which truly belong to wholes. This fallacy bids fair to be described as the Original Sin of much neurotalk, and it certainly allows the mind-brain barrier to be trespassed with ease.

    Barry:

    I know you will not agree with me. I take comfort in the fact that I am obviously correct when I say that “chemicals do not have a rich inner life.”

    Liddle:

    No, they do not. But people do.

    But on materialist premises, people are “nothing but” chemicals. So your statement is incoherent. For you to be right, Crick would have to be wrong when he said:

    “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased: “You’re nothing but a pack of neurons.”

    Whether he is right or wrong, Crick at least takes materialism where it must go as a matter of simple logic. You try to avoid that logic and, as is almost always the case with you, you wind up tying yourself up in linguistic knots (i.e., blah blah blah).

  90. 90
    Box says:

    Lizzie: (..) we don’t observe the phenomenon directly – we make proxy measures, and those include behavioural outputs (eye movements, button-presses, etc), physiological outputs (measure of arousal, for instance, like skin conductivity, heart rate, and pupil diameter), and self-reports.

    I would like to argue that “self-report” is not merely an item amongst equals on a list, but stands out as absolutely crucial. That is, all the other outputs are perfectly meaningless without it. Without self-report researchers cannot reliably link any other output to mental phenomena—the best they can do is base their findings on self-reports by others.
    The point being that output data—even data gathered by measuring brain-activity—on its own doesn’t tell us anything about the mind. The link with the mind can only be provided by means of self-reporting.
    And the person who self-reports may even be untruthful—there is no way to know for sure.
    IOW “from the outside” we cannot look into the mind—we do not see decisions, thoughts, consciousness or any other “part” of the mind.

  91. 91
    Barry Arrington says:

    EL @ 96

    And sometimes they have been right.

    No, they have never been right. A sophist gives that retort only when he has been caught in his sophistry. And when he is caught in his sophistry he is, by definition, wrong.

    Or at least leave open the possibility that the problem is not that I am talking “blah” but that somehow, for reasons that may be the fault of neither of us, we have failed to communicate.

    I have left open that possibility Liddle. The problem is that what you’ve said is blithering nonsense. It is not that I am unable to understand you. I do understand you. And you are wrong.

  92. 92

    Well, we will have to agree to disagree, Barry.

    Perhaps I am talking nonsense. Perhaps I am not. I’ll let you know if I conclude that I am.

  93. 93
    Barry Arrington says:

    EL @ 102:

    Fair enough. Adieu.

  94. 94

    Ah, didn’t scroll up enough. OK, a relatively quick response as I’ve got to go soon:

    No, Liddle. You don’t get to avoid the problem by linguistic fiat. The materialist says that STEM (space, time, energy, matter) is all there is. It follows that everything must be reducible to STEM (because it exhausts all possibilities).

    To quote a US president: it depends on what the meaning of “is” is.

    No, I don’t think it is all there is, if by “is” you mean things that exist. I don’t think that because a thing is “reducible to” those elements, i.e. made of them, or extended along them, it is no more than those things.

    That is what emergence is all about – that an object can exist that has properties – including capacities – that its constituent parts do not, and vice versa. This is straightforwardly true: sodium chloride has properties that neither atomic sodium nor atomic chlorine have, and vice versa, and a solution of salt contains ions that have properties not possessed by either the salt crystal or the atomic elements.

    And I think that people have properties that their “chemicals” do not, and that it is not ridiculous to suppose that those properties arise from the configuration of their “chemicals”.

    You and I know beyond the slightest doubt that consciousness exists. We also agree that chemicals cannot become conscious, that the phrase “group of subjectively self-aware molecules” is meaningless.

    Just as a group of sodium chloride molecules cannot ignite on contact with water, or bleach your hair.

    But added to water, they can conduct electricity. Just because the constituent parts, unconfigured, or in a different configuration, cannot do something, doesn’t mean that they can’t if configured in some specific manner.

    Again this is straightforward. Large scale entities do not “reduce” to their parts without losing properties inherent in the system of which they formed a part. Thus “chemicals” are not conscious, but configured as a human being, may be. Not because they have changed their properties qua chemicals, but because the properties of the human being are inherent to the configuration, not the parts.

    Now let us consider two questions:

    1. How do the electro-chemical processes of the brain give rise to subjective self-awareness?
    2. Have you stopped beating your wife? (addressed to someone who has no wife or does not beat their wife)

    You say that both questions are equally meaningless. Nonsense. The first question is not meaningless, because consciousness does exist and all the materialist has available to account for it is STEM (to carry the analogy further: there is a wife and someone is beating her).

    No. The “materialist” also has the configuration of the “STEM” items.

    But more to the point, the configuration may have capacities that the parts do not. Configuring the molecules does not make the molecules conscious. What it does is to create something new, whose properties include things like signal conductivity, energy storage, energy usage, buoyancy, none of which are possessed by its constituent parts unconfigured, or in some other configuration (as road-kill, for instance).

    John Searle writes: “where consciousness is concerned, the existence of the appearance is the reality.” You try to get around that by denying the reality. Denying reality is no way to live your life Elizabeth.

    No, I am not “denying the reality”. I am absolutely clear that consciousness is real, just as salt water is real, and ionic currents through salt water are real. They just aren’t properties of the parts. They are properties of the whole.

    You keep coming back to this question as if it makes a difference: “how is it that certain configurations of matter are able to be conscious of things?” The obvious answer to that question is “Because they have the capacity for subjective self-awareness, intentionality, subject-object duality and qualia.” Which leads to this question: How can chemicals have the capacity for subjective self-awareness, intentionality, subject-object duality and qualia? It amazes me that you think pushing the question back one degree of generality accomplishes anything whatsoever.

    Firstly, they can have them by virtue of their configuration. But if we want to explain how a configuration can produce conscious experience, I think we have to consider consciousness as something that the configuration – the system – the organism – does, not as something the individual “chemicals” do. Each “chemical” has a role, but what that role is, is not to “be conscious of something”. The role of the sodium ions is to modulate membrane potentials for instance. The role of the neurons is to summate incoming signals, and, when a threshold is reached, to send an electrical potential to an array of downstream neurons. etc. And the role of networks of neurons is to sum, selectively, signals from widely distributed brain regions etc. But I’ve said all this.
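
    Liddle’s description of a neuron’s role here, summing incoming signals and firing when a threshold is reached, is essentially the textbook “integrate-and-fire” abstraction. A minimal illustrative sketch (the parameter values are arbitrary, chosen only for the example):

```python
# Minimal leaky integrate-and-fire neuron sketch (arbitrary toy
# parameters): incoming signals are summed into a membrane potential;
# when the potential crosses the threshold, the neuron "fires" and resets.

def run_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    spikes = []
    for t, signal in enumerate(inputs):
        potential = potential * leak + signal   # leaky summation of inputs
        if potential >= threshold:
            spikes.append(t)                    # fire...
            potential = 0.0                     # ...and reset
    return spikes

# A steady drip of sub-threshold inputs eventually sums to a spike.
print(run_neuron([0.3] * 10))   # → [3, 7]
```

    On Liddle’s argument, the point is that nothing in this loop is “conscious of” anything: the neuron’s role is just summation and thresholding, while any awareness-like capacity would belong to the larger configuration.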

    It is not a “linguistic dodge” to try to express a question in a way that may deliver answers

    Just so. But it is a linguistic dodge to try to express a question in a way that allows one to avoid addressing the fundamental issue, which is what you’ve done.

    I have not.

    And once you have the notion of the “forward model” – a dynamic system for modelling yourself in the future, and comparing it to the present, then you have precisely what you need for a system that models itself as an agent able to respond and decide . . .

    You see what you’ve done there? You’ve smuggled.

    No, I have not. You are thinking statically again. I’m trying to describe a dynamic mapping process, in which what is mapped is also the mapper.

    Tallis again:

    The belief among neurophilosophers that the brain, a material object, can generate tensed time is one among many manifestations of the insincerity of their materialism. As we have seen, under cover of hard-line materialism, they borrow consciousness from elsewhere, smuggling it into, or presupposing it in, their descriptions of brain activity. This ploy is facilitated by a mode of speaking which I call “thinking by transferred epithet,” in which mental properties are ascribed to the brain or to parts of the brain (frequently very tiny parts, even individual neurons), which are credited with “signaling,” and often very complex acts such as “rewarding,” “informing,” and so forth. The use of transferred epithets is the linguistic symptom of what Oxford philosopher P.M.S. Hacker and University of Sydney neuroscientist M.R. Bennett described, in their 2003 book Philosophical Foundations of Neuroscience, as the “mereological fallacy”: ascribing to parts properties which truly belong to wholes. This fallacy bids fair to be described as the Original Sin of much neurotalk, and it certainly allows the mind-brain barrier to be trespassed with ease.

    I disagree with Tallis.

    Barry:

    I know you will not agree with me. I take comfort in the fact that I am obviously correct when I say that “chemicals do not have a rich inner life.”

    Liddle:

    No, they do not. But people do.

    But on materialist premises, people are “nothing but” chemicals. So your statement is incoherent. For you to be right, Crick would have to be wrong when he said:

    “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased: “You’re nothing but a pack of neurons.”

    The significant word here being “pack”.

    Whether he is right or wrong, Crick at least takes materialism where it must go as a matter of simple logic. You try to avoid that logic and, as is almost always the case with you, you wind up tying yourself up in linguistic knots (i.e., blah blah blah).

    Well, I plead guilty to having failed to communicate my argument, Barry.

  95. 95

    Adieu to you too, Barry! And thanks for the conversation, even if we didn’t get very far 🙂

  96. 96
    Popperian says:

    Barry:

    The question under discussion is how chemicals become conscious. You have steadfastly avoided that question. Do you really think that we have not noticed that?

    So, the title of the OP, “Elizabeth Liddle Agrees: Saying ‘It’s Emergent!’ is no Better than Saying ‘It’s Magic!’”, has nothing to do with the question? That’s totally unexpected.

    Barry:

    You seem to find comfort in linguistic dodges to problems that are intractable on materialist premises. That is why, I suppose, you are so prone to resort to blah blah blah.

    It’s unclear how this intractability is unique to “materialism.” Nor is the intractability of the initial conditions (the exact count of water molecules, or the exact path each one will take, let alone that kind of detail for all of the external influences that act outside a teapot) actually a problem for making tea. Fortunately, we are not only uninterested in explaining those properties, but uninterested in predicting most of them as well, despite the fact that they are the overwhelming majority of what occurs when making tea.

    IOW, you’re not looking at consciousness as a problem to solve, but something to justify. Of course, no such ultimate justification is possible, in practice. Nor is this unique to consciousness or materialism.

  97. 97
    wallstreeter43 says:

    Elizabeth, then I won’t dare to mention NDEs, as you might give the same brain-based explanation that Dr. Patricia Churchland tried in her interview on Skeptiko; when her interviewer showed her that no NDE researcher or study agrees with her, she hung up the phone on him. At least we know that Elizabeth would never do that.

  98. 98
    Box says:

    On a general note I would like to state that emergentism regards consciousness as a (near-)useless add-on. So, e.g., rationality is seen as something that can be produced without consciousness.
    To me that is a deeply incoherent conception of mind and rationality.

    Kairosfocus quotes Reppert in post #2, pointing out the divide between chemical coherence and rational/propositional coherence, but the point I’m trying to convey here is even more fundamental: without consciousness, no rationality whatsoever.
    Consciousness—the primordial datum—is the source of overview and context. It is foundational to rationality. It is what is assumed when we speak about rationality. Understanding is not possible without a context. Understanding something is placing it in a context.

    NetResearchGuy #45 makes a similar argument: you need consciousness to begin with.

    According to EL, our brain is just that software program with a LOT more patterns and feedback loops, but ultimately just reducible to those patterns and feedback loops. My argument is that no matter how many similar patterns and feedback loops you add to the software program, it’s not conscious. The problem is there is no whole or self to gather all the feedback loops into a single conscious unit. The feedback loop theory is merely a regress and makes zero progress on identifying where the whole or self comes from.

  99. 99
    Robert Byers says:

    On another thread on UD about dreams I looked at the Wiki article for dreams. I read the papers from Jie Zhang. The best, I thought.
    The truth is that it is just this soul of ours that reads/uses our memory/mind,
    as others said here about memory.
    A threat to this emerging conclusion of seeing the mind as a giant memory machine is that evolutionists etc. will say, as Zhang says, that our consciousness is simply one memory system in play with other memory systems (dreaming being a result of this), and so an illusion of a separate consciousness dealing with our brain is made.
    This is not true, but it would be the ditch they would retreat to as the importance of memory is realized as being behind most human thought.
    It is behind it, but in front is our soul/heart.
    Keep an eye on them. Knocking one crowd down brings in another.

  100. 100

    Barry:

    John Searle writes: “where consciousness is concerned, the existence of the appearance is the reality.” You try to get around that by denying the reality. Denying reality is no way to live your life Elizabeth.

    More John Searle on that reality:

    “The famous mind-body problem, the source of so much controversy over the past two millennia, has a simple solution. This solution has been available to any educated person since serious work began on the brain nearly a century ago, and in a sense, we all know it to be true. Here it is: Mental phenomena are caused by neurophysiological processes in the brain and are themselves features of the brain…Mental events and processes are as much part of our biological natural history as digestion, mitosis, meiosis or enzyme secretion.” The Rediscovery of Mind

  101. 101
    Cross says:

    NetResearchGuy @ 45

    Great post; for an IT guy like me, you have nailed it.

    What some materialists have missed follows:

    Popperian @ 27 quoted the following:

    “Virtual Reconstruction

    To solve the mystery, a research team from the Blue Brain Project set about virtually reconstructing (simulated on a computer) a cortical microcircuit based on unparalleled data about the geometrical and electrical properties of neurons — data from over nearly 20 years of painstaking experimentation on slices of living brain tissue.

    Each neuron in the circuit was reconstructed into a 3D model on a powerful Blue Gene supercomputer. About 10,000 virtual neurons were packed into a 3D space in random positions according to the density and ratio of morphological types found in corresponding living tissue. The researchers then compared the model back to an equivalent brain circuit from a real mammalian brain.”

    Some facts about Blue Gene/Q:

    “Design
    The Blue Gene/Q Compute chip is an 18 core chip. The 64-bit PowerPC A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD Quad-vector double precision floating point unit (IBM QPX). 16 Processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is used as a redundant spare, used to increase manufacturing yield. The spared-out core is shut down in functional operation. The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[35] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM’s copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[36]

    A Q32[37] compute drawer will have 32 compute cards, each water cooled.[38]

    A “midplane” (crate) of 16 compute drawers will have a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB RAM.[38]

    Performance
    At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]”
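
    As an aside, the quoted chip figures are internally consistent, which a few lines of arithmetic can check. This sketch uses only numbers from the excerpt above; the 2-flops-per-cycle factor is an assumption that peak counts a fused multiply-add as two floating-point operations per SIMD lane, the usual convention for such figures.

    ```python
    # Sanity check of the quoted Blue Gene/Q figures (numbers from the excerpt).

    cores = 16            # user processor cores per chip
    simd_width = 4        # quad-vector double-precision FPU (QPX)
    flops_per_cycle = 2   # assumed FMA convention: one multiply-add = 2 ops
    clock_ghz = 1.6

    peak_gflops_per_chip = cores * simd_width * flops_per_cycle * clock_ghz
    print(peak_gflops_per_chip)          # 204.8, matching the quoted peak

    # The 4-rack system: 4096 nodes, one chip per node.
    peak_tflops_4rack = 4096 * peak_gflops_per_chip / 1000
    linpack_efficiency = 677.1 / peak_tflops_4rack
    print(round(peak_tflops_4rack, 1))   # 838.9 TFLOPS peak
    print(round(linpack_efficiency, 2))  # ~0.81 of peak achieved on Linpack
    ```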

    That’s 1.4 megawatts of power consumption, 16,384 user cores and 16 TB RAM. This has “simulated 10,000 virtual neurons”.

    The human brain is estimated to contain 15–33 billion neurons,[1] each connected by synapses to several thousand other neurons. It uses about 20 watts.
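
    The comparison can be put into rough numbers. This is a back-of-the-envelope sketch using only the figures quoted in this thread; the neuron counts and wattages are estimates, and a biophysically detailed simulated neuron is not a like-for-like workload with a biological one, so this illustrates scale rather than a rigorous benchmark.

    ```python
    # Neurons-per-watt, using figures quoted in the thread (illustrative only).

    simulated_neurons = 10_000        # Blue Brain cortical microcircuit
    supercomputer_watts = 1.4e6       # ~1.4 MW, as quoted above

    brain_neurons = 15e9              # low end of the 15-33 billion estimate
    brain_watts = 20

    sim_efficiency = simulated_neurons / supercomputer_watts   # ~0.007 neurons/W
    brain_efficiency = brain_neurons / brain_watts             # ~7.5e8 neurons/W

    ratio = brain_efficiency / sim_efficiency
    print(f"{ratio:.1e}")             # on the order of 1e11: ~11 orders of magnitude
    ```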

    It would be good for the materialists here to step back and consider the above. Engineers’ and scientists’ best effort in the creation and programming of Blue Gene is the size of a building, consumes megawatts of power, and can only simulate 10,000 neurons.

    The brain weighs a couple of kilos, uses 20 watts of power and has billions of neurons.

    By what stretch of the imagination can you believe that this brain evolved by accident? Your faith amazes me!

    Cheers

  102. 102
    Upright BiPed says:

    Cross, you don’t understand evolution.

  103. 103
    Cross says:

    Upright BiPed @ 112

    Obviously not! It’s magic but there’s no magician, right? 😉

    Cheers

  104. 104
    Barry Arrington says:

    RB @ 110.

    You make a good point. Searle has contributed enormously to these issues. His Chinese Room experiment is utterly brilliant. Yet, like most in his field, he has drunk deeply from the materialist Koolaid, and for that reason he fails to see where his own conclusions must inevitably lead.

  105. 105
    NetResearchGuy says:

    I wanted to make an analogous argument to explain why I think the book EL cites does not “solve” the problem of consciousness, or even begin to.

    Consider the technology of a consumer-available self-driving car, meaning one that can drive on any public street without incident (i.e. in an unconstrained environment). How close are we to “solving” that problem? It depends on how you weigh the importance of various aspects of the solution. If you primarily consider mechanical aspects (availability of sensors and actuators sufficient to the task, and mechanical means of connecting them to a vehicle), the problem is solved.

    However, in terms of software, we are only beginning to solve it. Someone who has barely thought about the problem of machine vision and navigation might assume that final piece of the puzzle should be trivial, but it’s anything but. Several open research problems have to be solved, and it may be decades before they are. There are also moral and legal problems to solve, like what happens if a self driven car kills someone, but let’s ignore those for this discussion.

    Self driving cars involve sensors, pattern recognition, feedback loops, real time state tracking, future state prediction, and behavioral decisions based on all that. Heck, the car likely has a survival instinct programmed in (don’t crash!). By EL’s definition, a self driving car is conscious, at least a tiny bit. EL: if you believe a self driving car is NOT conscious, can you explain how it differs qualitatively from something that is conscious? Be specific if you can.
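
    The ingredient list in the paragraph above (sensing, state tracking, feedback, decisions) can be sketched in a handful of lines, which is arguably why it feels so far from consciousness. A toy, hypothetical example: a 1-D “car” holding its lane with a proportional feedback controller.

    ```python
    # Toy sense-decide-act loop: a 1-D "car" steering back to lane center.
    # Purely illustrative; real autonomous-driving stacks are vastly larger.

    def control_step(position, target=0.0, gain=0.5):
        error = target - position      # sense: how far off-center are we?
        steering = gain * error        # decide: proportional correction
        return position + steering     # act: apply the correction

    pos = 3.0                          # start 3 units off-center
    for _ in range(10):                # the feedback loop
        pos = control_step(pos)

    print(round(pos, 4))               # 0.0029: converged toward lane center
    ```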

    Back to the main point: how close are we to solving self driving cars? In mechanical terms, 99% solved, in software terms, likely well under 1% (I can explain why in more detail, but it’s not important). Let’s say someone does R&D to improve the mechanical side of things, for example developing a more reliable servo or a camera with higher pixel resolution. While these are nice, they don’t significantly affect how near one is to the solution! You are optimizing the 1% easy part of the problem, not the 99% hard part. A neuroscientist identifying ever more correlations in the brain, or proposing ever more detailed stimulus / behavior feedback loops is not addressing the problem of how those loops or correlations become consciousness, as opposed to just producing philosophical zombies or robots.

  106. 106
    phoodoo says:

    Lizzie says:

    “And I think that people have properties that their “chemicals” do not.”

    Wait, wait , wait. That is NOT what you claim to think.

    Lizzie only believes that we are chemicals, so of course the particular chemicals mixed together have the properties they do, and the chemicals mixed together to form gold have the properties they do. Different mixes of chemicals have different properties.

    When did the people stop being a version of the chemicals??? Lizzie is truly a theist!

  107. 107
    Andre says:

    NetResearchGuy & Cross

    How can I say this? Amen, brothers. I’m not sure the average biologist actually understands that the biological technology used in living systems is orders of magnitude more efficient than what we can engineer.

    Example:

    http://www.telegraph.co.uk/tec.....ivity.html

  108. 108
    Zachriel says:

    NetResearchGuy: how close are we to solving self driving cars?

    While the problem is difficult, most researchers in artificial intelligence believe the problem is not intractable. Many think the solution will involve neural nets, programs that simulate the learning process of the brain.
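
    For concreteness, the simplest possible example of the kind of “neural net” mentioned here is a single perceptron, which learns by adjusting weights in response to its errors. A toy sketch, learning the logical OR function; no claim is made that this resembles an actual brain.

    ```python
    # A single perceptron learning logical OR from labeled examples.
    # Weights are nudged in proportion to the error: crude "learning".

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    lr = 0.1

    for _ in range(20):                              # training epochs
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                       # feedback signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
    print(preds)                                     # [0, 1, 1, 1]: OR learned
    ```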

  109. 109
    AnimatedDust says:

    Andre, the article you referenced somewhat concludes that brain functional parity with human designed computers is only a few years away. My sense is that you disagree, and I do as well. Did you get that take as well?

  110. 110
    Andre says:

    AnimatedDust

    Yes, they are extremely optimistic about that, but we’ve been there before with AI’s possibilities, Moore’s law…… You can’t fault materialists for their optimism, except when it’s about God…….

    This just in about Moore’s law……. interesting!

    http://www.scientificamerican......oores-law/

  111. 111
    Silver Asiatic says:

    Cross

    It would be good for the materialists here to step back and consider the above. Engineers’ and scientists’ best effort in the creation and programming of Blue Gene is the size of a building, consumes megawatts of power, and can only simulate 10,000 neurons.

    The brain weighs a couple of kilos, uses 20 watts of power and has billions of neurons.

    By what stretch of the imagination can you believe that this brain evolved by accident? Your faith amazes me!

    No, no, no … it’s all very simple. Not just by accident. But add selection for reproductive advantage and now it all makes sense. 🙂

    Billions of integrated neurons had to evolve so humans could find something to eat. After all, they were in Africa where no other organisms could have survived without a massive increase in intelligence, outperforming the best computers teams of scientists can produce today. The environment designed the brain. That’s what a rainy season will do for ya.

    Emergence happens – just like magic. You don’t need intelligence to produce algorithms. They just emerge. 🙂

  112. 112
    Popperian says:

    @cross

    First, You’re assuming we need even faster and more powerful computers before we could achieve artificial intelligence. But this isn’t necessarily the case. Blue Brain is designed to simulate a physical brain. That is, it’s goal isnt just artificial intelligence, but to also understand how our brains works, including research on mental illnesses, etc. Emulating a physical system has significant overhead, especially one that is so different from silicon computers.

    If our goal was merely artificial intelligence, it’s not clear that modeling those physical features would be necessary. In fact, I’d suggest what’s holding us back from developing hard AI is a philosophical breakthrough, not computer power. A slower computer would be just as intelligent. You could dial down its clock speed and it would still create explanations- just slower.

    Second, you conveniently ignored the actual substance of my comment. Specifically, the simulation is possible today because we have developed a high-level explanation of how synapses form. This allows the model to be significantly easier to set up, as compared to requiring a one-to-one mapping of neurons in a real neocortical column. IOW, placement can be random, with a few exceptions, and yet still get the same results.

  113. 113
    Andre says:

    From the article I posted how friggen cool is this biomimicry…..

    Cognitive Computers

    To build chips “at least as ‘smart’ [as a] housefly,” researchers in IBM’s cognitive computing group are exploring processors that ditch the calculator like Von Neumann architecture. Instead, as Pavlus explains, they “mimic cortical columns in the mammalian brain, which process, transmit and store information in the same structure, with no bus bottlenecking the connection.” The result is IBM’s TrueNorth chip, in which five billion transistors model a million neurons linked by 256 million synaptic connections. “What that arrangement buys,” Pavlus writes, “is real-time pattern-matching performance on the energy budget of a laser pointer.”

    http://www.scientificamerican......oores-law/

    Reverse engineering means only one thing…….. It was engineered to start off with 🙂

  114. 114
    Popperian says:

    Box:

    Kairosfocus quotes Reppert in post#2 pointing out the divide between chemical coherence and rational/propositional coherence, but the point I’m trying to convey here is even more fundamental: without consciousness no rationality whatsoever.

    Again, the law of computation tells us that it’s possible to emulate any physical object at an arbitrary level of detail. Cogs are unlike transistors which are unlike quantum bits. Yet, they can each be used to perform the same computations. That’s what it means to say a computer is universal.

    NetResearchGuy #45 makes a similar argument: you need consciousness to begin with

    NetResearchGuy wrote:

    The argument I have against conscious software is that presumably it resulted from taking a simpler non-conscious software program, and adding some type of code, some sort of machine instruction that switched its status from non conscious to conscious. For example incrementing a variable or storing to a location in memory. The problem is that it’s nonsensical to conceive of a single line code modification that would have that sort of power.

    Yet, that’s exactly what happens in the case of building a universal turning machine. Either a computer is universal or it is not. Either it has the requisite computations for Turing completeness or it does not. Adding a single additional specific computation to a system that is not Turing complete can expand its reach to universality, which is a powerful leap. Is that nonsensical too?

    More usually, materialists instead claim that consciousness isn’t binary, but a continuum (EL claimed this in another post).

    I would disagree. For example, regular expressions are a rare example of a language that is very powerful, yet stops short of making the leap to universality. Regular expressions can solve some of the problems that UTMs can, but their reach is not universal. So, while there is overlap in abilities, this doesn’t mean that all languages are universal. The same can be said for consciousness. The problem is, there is no clear definition in regard to consciousness, as there is with Turing completeness.
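
    The formal-language point here is standard and easy to illustrate: regular expressions recognize exactly the regular languages, and a language such as aⁿbⁿ (equal counts of a’s then b’s) is provably not regular, yet a program with a single counter handles it. A minimal sketch, with Python’s `re` module standing in for true regular expressions:

    ```python
    import re

    # A regular language: any number of a's followed by any number of b's.
    assert re.fullmatch(r"a*b*", "aaabb") is not None

    # A non-regular language: equal counts of a's then b's. No regular
    # expression proper can enforce the equality; one counter suffices.
    def is_anbn(s):
        n = s.count("a")
        return s == "a" * n + "b" * n

    print(is_anbn("aaabbb"), is_anbn("aaabb"))   # True False
    ```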

    According to EL, our brain is just that software program with a LOT more patterns and feedback loops, but ultimately just reducible to those patterns and feedback loops. My argument is that no matter how many similar patterns and feedback loops you add to the software program, it’s not conscious. The problem is there is no whole or self to gather all the feedback loops into a single conscious unit. The feedback loop theory is merely a regress and makes zero progress on identifying where the whole or self comes from..

    First, we’re not networked together. My feedback loops are isolated from your feedback loop. When we figure out how to bridge the gap, we can make progress on the issue.

    Second, while I cannot speak for Elizabeth, that is a gross simplification of my position. Feedback loops would be necessary for being able to detect changes, but not enough for consciousness to be present. But, again, not everyone agrees on what consciousness is, unlike Turing completeness. Any such consensus will be due, in part, to a philosophical breakthrough about what it means to be a person and that actually solves problems.

  115. 115
    kairosfocus says:

    Popperian, emulate and presumably behaviourally. Responsible freedom and linked rationality are not equal to blind chance or mechanical necessity. Categorically distinct. And without such freedom and responsibility, rationality collapses. The view that attempts such therefore is self referentially incoherent and self falsifying. KF

  116. 116
    Popperian says:

    KF:

    Responsible freedom and linked rationality are not equal to blind chance or mechanical necessity. Categorically distinct.

    Yes, KF. We know that is your position. Nor am I suggesting that intuition is not an important part of the process. But intuitions are just the starting point. Popper’s view, as is mine, is that we start with conjectures, intuitions, etc. But we then criticize those creative ideas.

    As I’ve pointed out elsewhere, adding a God to the mix doesn’t improve the problem. All it does is attempt to justify rationally and our perception of making choices in a justificationist sense. But that’s bad philosophy. And that’s why I’m not a theist.

    Is God free to make choices? Is he rational? If so, how are they justified, etc.? All this does is push the problem up a level without improving it.

    Also, in the case of intuition, I’m not a solipsist just because it conflicts with my intuition or common sense. I’m not a solipsist because solipsism is also a convoluted elaboration of reality.

    Solipsists accept all of the observations we do as a realist. This includes objects obeying the laws of physics, other people disagreeing with us about Solipsism, etc. As such all of these observations are “compatible” with Solipsism and one could just as well claim that those observations “prove” it is true, just as much as Realism. The key difference is that Solipsists add one more thing to Realism that does nothing at all but negate Realism itself, which is our best, current accepted explanation for all of those observations: they are just facets of our internal selves.

    Solipsism doesn’t explain why object like facets of my internal self would follow laws of physics-like facets of my internal self, or why other conscious being-like facets of my internal self would disagree with me on Solipsism. It one fell swoop, it negates all of these explanations while simultaneously explaining nothing itself.

    IOW, Solipsism is a convoluted elaboration of Realism. As such, I reject it.

    ID suffers from the very same problem, as it accepts all of the observations that Darwinism does. But it adds one thing more that does nothing but negate our current, best explanation: Darwinism only appears to be true, but an abstract designer with no limitations did it. ID doesn’t explain opportunity and means, which includes the knowledge of how, motive, etc. In one fell swoop, it negates the underlying explanation of Darwinism while simultaneously explaining nothing itself.

  117. 117
    Mung says:

    Elizabeth on Emergence:

    That is what emergence is all about – that an object can exist that has properties – including capacities – that its constituent parts do not, and vice versa. This is straightforwardly true: sodium chloride has properties that neither atomic sodium nor atomic chlorine have, and vice versa, and a solution of salt contains ions that have properties not possessed by either the salt crystal or the atomic elements.

    We have no physical/material explanation for this. So we call it emergence. Magic. Poofery!

    It just makes one wonder at the design of the physical elements, such that when they are combined, magical new properties appear.

  118. 118
    Box says:

    Popperian #124: That’s what it means to say a computer is universal.

    Computers are incapable of understanding (semantics).

    Searle (1984) presents a three-premise argument that, because syntax is not sufficient for semantics, programs cannot produce minds.

    (1) Programs are purely formal (syntactic).
    (2) Human minds have mental contents (semantics).
    (3) Syntax by itself is neither constitutive of, nor sufficient for, semantic content.

    Therefore, programs by themselves are not constitutive of nor sufficient for minds.

    The Chinese Room thought experiment itself is the support for the third premise.

    The claim that syntactic manipulation is not sufficient for meaning or thought is a significant issue, with wider implications than AI, or attributions of understanding. Prominent theories of mind hold that human cognition generally is computational. In one form, it is held that thought involves operations on symbols in virtue of their physical properties. On an alternative connectionist account, the computations are on “subsymbolic” states. If Searle is right, not only Strong AI but also these main approaches to understanding human cognition are misguided.

    [my emphasis]
    – – –
    [source: Stanford.edu]
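
    Searle’s third premise can be made concrete in a few lines: a pure lookup table can “answer” questions by matching symbol shapes, with no semantics anywhere in the system. A toy sketch; the phrases and the rulebook are invented purely for illustration.

    ```python
    # A Chinese-Room-style rulebook: purely syntactic symbol matching.
    # The program produces apt replies without understanding anything.

    rulebook = {
        "你好吗?": "我很好, 谢谢.",     # "How are you?" -> "I'm well, thanks."
        "你懂中文吗?": "当然懂!",       # "Do you understand Chinese?" -> "Of course!"
    }

    def chinese_room(symbols):
        # Premise (1): formal shape-matching only; no meanings consulted.
        return rulebook.get(symbols, "请再说一遍.")  # default: "Please say that again."

    print(chinese_room("你懂中文吗?"))  # 当然懂! (yet nothing here understands Chinese)
    ```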

  119. 119
    Zachriel says:

    Popperian: the law of computation tells us that it’s possible to emulate any physical object at an arbitrary level of detail

    Law of Computation?

  120. 120
    Andre says:

    I’m not a programmer but my best coding work was back in the days when we had to optimize DOS by changing the config.sys and autoexec.bat files. I wonder how long it will take for this command to write itself

    LH C:\MOUSE\MOUSE.EXE

    Worse still, I recently reinstalled DOS, and try as I might I could not get the mouse to work; no tweaking worked because, you see, DOS just does not know USB. So when biological hardware changes, why do Darwinists assume that both the hardware and software will still work?

  121. 121
    Box says:

    Barry Arrington: Searle has contributed enormously to these issues. His Chinese Room experiment is utterly brilliant.

    I have been reading up on Searle, specifically on syntax and semantics being “observer-relative”. It’s an important argument against “prominent theories of mind that hold that human cognition generally is computational” (stanford.edu). I cannot help noticing that these are testing times …. times in which we are forced to argue the blatantly obvious—that rationality is based on understanding (observer-relative) and cannot exist without consciousness.

  122. 122
    Barry Arrington says:

    Box,

    Feser has it about right regarding Searle:

    Searle is also an effective critic of other materialist theories of the mind. But though he rejects all extant forms of materialism, Searle also famously denies being any kind of dualist. Still, his critics regularly insist that his views nevertheless entail dualism whether he realizes it or not, and that this suffices to show that they are mistaken. In short, Searle says: “My arguments are correct, and they do not entail dualism,” while his critics say: “Searle’s arguments do entail dualism, and therefore they are incorrect.” In my view both sides are partly right and partly wrong: Searle’s arguments are correct, and they do entail dualism.

  123. 123
    Box says:

    Barry,

    Bill Vallicella says just about the exact same thing:

    As I said earlier, John R. Searle is a great philosophical critic. Armed with muscular prose, common sense, and a surly (Searle-ly?) attitude, he shreds the sophistry of Dennett and Co.

    (…)

    (…) The last quotation explains why Searle is not a materialist: he is not trying to reductively identify something essentially first-personal with something essentially third-personal. So far so good. But then why does he fight shy of being called a dualist? Even if he is not a substance dualist like Descartes, why does he not own up to being a property dualist?

    The answer, I am afraid, is that he is in the grip of the ideology of scientific naturalism. In contemporary philosophy of mind, nothing is worse than to get yourself called a dualist. For then you are an unscientific superstitious fellow who believes in spook stuff, ghosts in machines, and worse. Next stop: the Twilight Zone.

  124. 124
    Cross says:

    Andre @ 117, 123

    Thanks for the links; I’m sure Popperian will not be swayed, though.

    Andre @ 130
    “Worse still, I recently reinstalled DOS, and try as I might I could not get the mouse to work; no tweaking worked because, you see, DOS just does not know USB. So when biological hardware changes, why do Darwinists assume that both the hardware and software will still work?”

    Good point, it’s why I know they will never come up with a plausible OOL explanation, the hardware and software cannot appear at the same time, the odds are staggering.

    The DNA code and the hardware to process it, along with the immediate ability to reproduce, just did not happen without design. Any computer chip is designed in hardware first; then the software is written. It can’t be any other way.

    Cheers

  125. 125
    Cross says:

    Silver Asiatic @ 121

    Thanks for the correction, it all sounds so simple now! 😉

    Cheers

  126. 126
    Cross says:

    Popperian @ 122

    Actually, it is you who have missed the point. No matter how much better we engineer the computers, reducing their size, power consumption, etc., they will still be DESIGNED!

    Cheers

  127. 127
    Reciprocating Bill says:

    In Representation and Reality, Hilary Putnam parenthetically remarked, “As Wittgenstein often pointed out, a philosophical problem is typically generated in this way: certain assumptions are made which are taken for granted by all sides in the subsequent discussion.”

    I’ve often genuinely wondered why anyone believes that invoking dualism, and in particular an ontology that includes something like nonphysical mental states, solves the problems of consciousness, intentionality and so forth. It’s a fair question to ask how physical systems (like brains and their states) can be “about” other states, can be conscious, etc. But to respond to this difficulty by invoking a dualist ontology, and then assigning intentionality (and or consciousness, or selfhood, or agency) to the nonphysical side of one’s dualistic coin is to my ear an absolutely empty response. That is because no one has the slightest notion of how a nonphysical mentality might instantiate intentional states (or consciousness, or selfhood, or agency), or how one might go about investigating those questions. How is a nonphysical mentality “about” something else? At least brain states offer many intriguing hooks vis the complex nature of sensory consciousness and representation that may or may not yield insights into this question as cognitive neuroscience progresses. There is no science of non-physical mentality, nor do i see how there could be one.

    Ultimately, I suspect that the sequestering of phenomena such as intentionality, consciousness and agency within nonphysical mentality works for many simply because such qualities are smuggled in, as the immaterial mind (or soul, or intelligence, or agency, or consciousness, or whatever) is defined as that which nonphysically bears intentionality, consciousness, agency, etc., independent of material states. To then “explain” those phenomena in nonphysical terms becomes essentially an exercise in tautology. But how or why that might be the case, or how to make that notion do any work, no one has a clue.

  129. 129
    Joe says:

    RB:

    There is no science of non-physical mentality, nor do i see how there could be one.

    There is no science of a purely physical (chemical) mentality, nor do I see how there could be one. The physio-chemical explanation for life is non-existent, and if it can’t account for life then it can’t account for consciousness.

  130. 130
    Mapou says:

    RB:

    There is no science of non-physical mentality, nor do i see how there could be one.

    The evidence is there though, if you’re willing to open your eyes to see it. The problem is that materialists are blind to it and willingly so. Just yesterday a friend sent me this link:

    Spontaneous Events Drive Brain Functional Connectivity?

    Here, the researchers use the word “spontaneous” to indicate that they have clear evidence for an effect but they have no idea what is causing it. They have no explanation and they never will. What they are observing, IMO, is the subject’s consciousness (spirit) moving its attention from one thing to another.

    The mainstream scientific culture is a dictatorship of blind fools, mostly old fools with calcified brains. Wearing blinders is not a good way to conduct science. It leads to wrong conclusions and, in the end, stupidity. But that’s OK with me. I’ll just keep moving right along.

  131. 131
    tgpeeler says:

    “But that’s a specific philosophical view about knowledge. Theism is a specific case of justificationism. As such, your argument is narrow in scope. it does not appeal to me because what I want from ideas are their content, not their providence.”

    Provenance?

  132. 132
    Box says:

    RB: That is because no one has the slightest notion of how a nonphysical mentality might instantiate intentional states (or consciousness, or selfhood, or agency), (…)

    If we conclude that consciousness is irreducible to matter and therefore is something else entirely, then it does not follow that the same questions wrt matter apply to consciousness. Consciousness—from a dualist perspective—is often regarded as a primordial datum; an irreducible whole, not constituted/produced by parts—unlike e.g. the brain.

    RB: How is a nonphysical mentality “about” something else?

    It is simply a property of consciousness. “Aboutness” is troublesome for materialistic accounts of the mind—neurons are just a clump of matter; they are not intrinsically about your mom—but it is incoherent to assume that the same problem applies to ‘unextended in time and space’ consciousness under dualism.
    Similarly, we do not ask how much consciousness weighs or how it copes with the effects of the 2nd law and gravity.

  133. 133
    Reciprocating Bill says:

    Box:

    It is simply a property of consciousness

    Exactly the response I expected, and predicted:

    “Ultimately, I suspect that the sequestering of phenomena such as intentionality, consciousness and agency within nonphysical mentality works for many simply because such qualities are smuggled in as the immaterial mind …is defined as that which nonphysically bears intentionality, consciousness, agency, etc., independent of material states. To then ‘explain’ those phenomena in nonphysical terms becomes essentially an exercise in tautology. But how or why that might be the case, or how to make that notion do any work, no one has a clue.”

  134. 134
    Box says:

    Reciprocating Bill: To then “explain” those phenomena in nonphysical terms becomes essentially a exercise in tautology.

    If by “explained” is meant ‘reducible to parts’, then what has been identified by a theory as ‘primordial datum’ cannot be explained—reduced to parts—by definition.
    This was also the idea behind the ancient theory of “atomism”. Here the primordial datum is simple, minute, indivisible, and indestructible particles, which are posited as the basic components of the entire universe. How are these atoms and their properties explained (reduced)? They are not, since they are irreducible—‘atom’ means ‘uncuttable’.

    According to Rosenberg:

    The basic things everything is made up of are fermions and bosons. That’s it. Perhaps you thought the basic stuff was electrons, protons, neutrons, and maybe quarks. Besides those particles, there are also leptons, neutrinos, muons, tauons, gluons, photons, and probably a lot more elementary particles that make up stuff. But all these elementary particles come in only one of two kinds. Some of them are fermions; the rest are bosons. There is no third kind of subatomic particle. And everything is made up of these two kinds of things. Roughly speaking, fermions are what matter is composed of, while bosons are what fields of force are made of.

    [Rosenberg, Chapter 2, THE ATHEIST’S GUIDE TO REALITY]

    BTW, any material primordial datum is incompatible with the universe having a beginning. Logically, the ancient Atomists assumed an eternal universe.

    Summing up: a primordial datum cannot be explained in principle.

    – – –

    Surely most dualists ground consciousness (and matter) in the “ultimate” primordial datum: God.

  135. 135
    Reciprocating Bill says:

    Box:

    Summing up: a primordial datum cannot be explained in principle.

    Let’s try out your response on the questions I posed to Barry upthread.

    RB: How do immaterial minds create consciousness?

    Box: It cannot be explained in principle.

    RB: How do immaterial minds interact with material objects (like brains) and impact their functioning?

    Box: It cannot be explained in principle.

    RB: How do material brains interact with immaterial minds?

    Box: It cannot be explained in principle.

    RB: What determines whether an object or organism has an immaterial mind?

    Box: It cannot be explained in principle.

    RB: Why can’t rocks have immaterial minds?

    Box: It cannot be explained in principle.

    RB: Postulating dualism, and “an immaterial mind,” offers exactly zero explanation for the presence of these human endowments and their absence in mice. Moreover, it is as helpless before the “hard problem” as any materialistic explanation – and offers no hooks from which to bootstrap scientific investigation into that problem.

    Barry: Unless dualism is true and an immaterial mind exists, in which case it would be the explanation.

  136. 136
    Box says:

    Reciprocating Bill,

    RB: How do immaterial minds create consciousness?

    We are running around in circles. This will be my last attempt to explain this fairly simple principle:
    *if consciousness is posited as a primordial datum, it is not created by anything else*.

    IOW if consciousness is a primordial datum, then it is not produced, created, constituted by something else—just like the atoms of the ancient Atomists, and Rosenberg’s fermions and bosons (see #144). As a primordial datum consciousness is irreducible to matter, ‘immaterial mind’ or whatever.

    RB: How do immaterial minds interact with material objects (like brains) and impact their functioning?

    Now this is a valid question, which may pose a problem for dualists. However, it is noteworthy that our view of what “matter” is has changed radically since the time the interaction problem came up. So, today, it is no longer clear what your question entails.
    – – –
    BTW I don’t regard myself a dualist. I’m a monist who holds that everything is ultimately spiritual.

  137. 137

    RB: How do immaterial minds interact with material objects (like brains) and impact their functioning?

    There are theories about this, ranging from spiritual (or informational) monism that only appears as dualistic representations, to quantum wave-collapse. IOW, your question might be a big non-question upon a proper understanding of what “matter” is (entangled, quantum or informational fields) and what mind may be (a conscious, observational locus that collapses such fields into perceived states appearing as arrangements of matter).

    Under such perspectives, the difference between immaterial and material is only a matter of aspect, not kind.

  138.
    Box says:

    Follow-up #146,

    Again, I have stated that any “primordial datum” of any theory cannot be explained in terms of parts—because it is supposed to have none. IOW it’s irreducible to anything else—that’s what “primordial datum” means.
    This goes for the ‘atoms’ of the ancient atomists, and it goes for the ‘fermions and bosons’ of Rosenberg. It also goes for the ‘consciousness’ of some versions of dualism.
    Now, Reciprocating Bill takes what I said to mean that all sorts of things and interactions cannot be explained in principle.

    RB: How do material brains interact with immaterial minds?

    Now RB thinks that it follows from what I said that my answer would be “It cannot be explained in principle”.
    I cannot understand why RB thinks this is so. Unless RB believes that I hold that the interaction between material brains and the immaterial mind is a primordial datum—which I have not indicated in any shape or form—this does not make sense at all.

  139.

    Box:

    Now RB thinks that it follows from what I said that my answer would be “It cannot be explained in principle”.

    I’m afraid it’s your non sequitur, Box. I’ve been remarking on the explanatory emptiness of dualism – its inability to explain anything – from my first comment.

    “At the same time, postulating dualism, and “an immaterial mind,” offers exactly zero explanation for the presence of these human endowments and their absence in mice. Moreover, it is as helpless before the “hard problem” as any materialistic explanation – and offers no hooks from which to bootstrap scientific investigation into that problem.”

    “How do immaterial minds create consciousness? You’ve no idea. How do immaterial minds interact with material objects (like brains) and impact their functioning? You’ve not the slightest. How do material brains interact with immaterial minds? No clue. What determines whether an object or organism has an immaterial mind? You’ll pass on that. Why can’t rocks have immaterial minds?”

    “To then “explain” those phenomena in nonphysical terms becomes essentially an exercise in tautology. But how or why that might be the case, or how to make that notion do any work, no one has a clue.”

    Your remark, directed to me in response to one of the above:

    If by “explained” is meant ‘reducible to parts’ then what has been identified by a theory as ‘primordial datum’ cannot be explained—reduced to parts—by definition.

    …is, of course, a complete non sequitur. And it’s yours.

  140.
    Box says:

    Reciprocating Bill: How do immaterial minds create consciousness? You’ve no idea.

    There you go again. Obviously, you don’t read my posts.

  141.

    There you go again. Obviously, you don’t read my posts.

    C’mon Box, keep up. That’s a quote of my remark at 69, posted three days ago, illustrating that my concern all along has been that dualism does no explanatory work – a concern with what an “immaterial mind” explains, not with what explains an immaterial mind.
