Intelligent Design

Searle on Consciousness “Emerging” from a Computer: “Miracles are always possible.”


Thanks to johnnyb for alerting us to John Searle’s talk at Google in his latest post.  Johnny said he had listened to only about 45 minutes when he wrote his post.  Too bad, because the best part of the entire video is the following colloquy between a questioner and Searle, which begins at 58:25:

The questioner posits the following:

You seem to take it as an article of faith that we are conscious, that your dog is conscious, and that that consciousness comes from biological material, the likes of which we can’t really understand.  But – forgive me for saying this – that makes you sound like an intelligent design theorist, who says that because evolution and everything in this creative universe that exists is so complex, that it couldn’t have evolved from inert material.  So somewhere between an amoeba and your dog, there must not be consciousness, and I am not sure where you would draw that line.  And so if consciousness in human beings is emergent or even in your dog, at some point in the evolutionary scale, why couldn’t it emerge from a computation system that is sufficiently distributed, networked, and has the ability to perform many calculations and maybe is even hooked into biologic systems?

Searle responds:

Well, about ‘could it emerge,’ miracles are always possible.  You know.  How do you know that you don’t have chemical processes that will turn this [holding up comb] into a conscious comb?  OK.  How do I know that?  Well, it’s not a serious possibility.  I mean the mechanisms by which consciousness is created in the brain are quite specific, and remember – this is the key point – any system that creates consciousness has to duplicate those causal powers.  It’s like saying, ‘you don’t have to have feathers in order to have a flying machine, but you have to duplicate and not merely simulate the causal power of the bird to overcome the force of gravity in the earth’s atmosphere.’  That’s what airplanes do.  They duplicate causal powers.  They use the same principle, Bernoulli’s principle, to overcome the force of gravity.  But the idea that somehow or other you might do it just by doing a simulation of certain formal structures of input-output mechanisms, of input-output functions; well, miracles are always possible, but it doesn’t seem likely.  That’s not the way evolution worked.

The questioner responds:

But machines can improve themselves, and you are making the case for why an amoeba could never develop into your dog over a sufficiently long period of time and have consciousness.

Searle:

No, I didn’t.  No.

Questioner:

You’re refuting that consciousness could emerge from a sufficiently complex computation system.

Searle:

Complexity is always observer-relative. If you talk about complexity you have to talk about the metric.  What is the metric by which you calculate complexity?  I think complexity is probably irrelevant.  It might turn out that the mechanism is simple.  There is nothing in my account that says a computer could never become conscious.  Of course.  We’re all conscious computers as I said.  And the point about the amoeba is not that amoebas can’t evolve into much more complex organisms.  Maybe that’s what happened.  But the amoeba as it stands, a single-celled organism, doesn’t have enough machinery to duplicate the causal powers of the brain.  I am not doing a science fiction project to say ‘well, there can never be an artificially created consciousness by people busy designing computer programs.’  Of course, I am not saying that is logically impossible.  I’m just saying it is not an intelligent project.  If you think your life depends on building a machine that creates consciousness, you don’t sit down at your console and start programming things in some programming language.  It’s the wrong way to go about it.

30 Replies to “Searle on Consciousness “Emerging” from a Computer: “Miracles are always possible.””

  1. 1
    Dionisio says:

    If you think your life depends on building a machine that creates consciousness, you don’t sit down at your console and start programming things in some programming language. It’s the wrong way to go about it.

    Ok, so what could be the right way to go about it?

  2. 2

    Nothing new here. Just more evidence of faith-based atheism.

  3. 3
    gpuccio says:

    Searle is a serious disappointment.

    I have read part of his latest book, and it is pitiful to see how he tries to invent new “ways” to evade the conflict that is so evident in his thinking.

    It’s really sad that the same person who was able to express so clearly a fundamental error in the common dogmas about AI (in the Chinese Room example) has to resort to such blatantly false philosophies to avoid the consequences of his own brilliant intuition.

    Something like that can be said of Penrose, too.

    The simple truth is: the hard problem of consciousness, as clearly elucidated by Chalmers, is a problem that cannot be solved: consciousness must be treated empirically as an observable phenomenon that cannot, in any way, be explained by any configuration of objects, either electronic or biological.

  4. 4
    News says:

    gpuccio writes at 3: “I have read part of his last book, and it is pitiful to see how he tries to invent new “ways” to evade the conflict that is so evident in his thinking.” That is the fate of science under naturalism.
    From brilliance to blather in one easy step.

  5. 5
    Origenes says:

    You can’t get from syntax (input symbols / protocols) to semantics (understanding), that’s the core of the Chinese Room argument — at least in my understanding, please correct me if I am wrong.
    Therefore, it is not only ridiculous but also beside the point to argue that ‘the room understands’ (the so-called “system reply”). Even if the room (system) is somehow conscious, there is (also) no way for the room to get from syntax to semantics.
    Again, for me, the Chinese Room argument is about understanding and rationality, not so much about consciousness.
    But Searle, despite his own CR argument, holds that consciousness can bridge the divide between syntax and semantics:

    I can’t get from the syntax to the semantics, but the room can’t either. How does the room get from the syntax of the computer program, of the input symbols, to the semantics of the understanding of the symbols? There is no way that the room can get there because that would require some consciousness in the room in addition to my consciousness, and there is no such consciousness.
    [Searle at 16.00 min. on video]

    Notice that last sentence. It doesn’t make a whole lot of sense. There is a little (but essential) lie in there. The room doesn’t get from syntax to semantics whether it is conscious or not, because there is no way for anyone or anything to get from syntax to semantics. But Searle, seemingly unimpressed by his own CR argument, posits, implicitly, that consciousness can somehow close the gap. Somehow a conscious computer can get from syntax to semantics. IOWs, according to Searle, a conscious computer can understand things and can be rational.

    Searle: There is nothing in my account that says a computer could never become conscious. Of course. We’re all conscious computers as I said.

    So, our thinking can be completely determined bottom-up by (physical) laws (syntax), but because of the miracle of consciousness we are capable of reaching understanding (semantics) and being rational. The miracle of consciousness, which by itself miraculously closes the gap between syntax and semantics, saves Searle from having to denounce materialism.

    But consciousness cannot solve the problem. And here is why:

    … let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    [Reppert]

    Consciousness cannot control blind chemistry. Consciousness cannot be responsible for blind chemistry. Consciousness without top-down causal powers over thoughts is unthinking — irrational. And blind chemistry is not rational, never was and has no intention whatsoever to become rational.

  6. 6
    Dionisio says:

    gpuccio @3:

    The simple truth is: the hard problem of consciousness, as clearly elucidated by Chalmers, is a problem that cannot be solved: consciousness must be treated empirically as an observable phenomenon that cannot, in any way, be explained by any configuration of objects, either electronic or biological.

    Is that about David Chalmers?

    Chalmers characterizes his view as “naturalistic dualism”: naturalistic because he believes mental states are caused by physical systems (such as brains); dualist because he believes mental states are ontologically distinct from and not reducible to physical systems.
    https://en.wikipedia.org/wiki/David_Chalmers

  7. 7
    bornagain77 says:

    Of related interest, a certain percentage of the heat generated by computers is due to something known as Landauer’s principle.

    Landauer’s principle
    Of note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase … Specifically, each bit of lost information will lead to the release of a specific amount (at least kT ln 2) of heat.”
    http://en.wikipedia.org/wiki/L....._principle

    Moreover, Landauer’s principle implies that when a certain number of computer operations per second have been exceeded, the computer will produce so much heat that the heat will be impossible to dissipate efficiently.

    Quantum knowledge cools computers – Published: 01.06.11
    Excerpt: The fact that computers produce heat when they process data is a logistical challenge for computer manufacturers and supercomputer operators. In addition, this heat production also imposes a fundamental limit on their maximum possible performance. According to the so-called Landauer Principle formulated by the physicist Rolf Landauer in 1961, energy is always released as heat when data is deleted. Renner says, “According to Landauer’s Principle, if a certain number of computing operations per second is exceeded, the heat generated can no longer be dissipated.” Assuming that supercomputers develop at the same rate as in the past, this critical limit will probably be reached in the next 10 to 20 years.
    http://www.ethlife.ethz.ch/arc.....u/index_EN
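The kT ln 2 figure quoted above is easy to check numerically. A minimal sketch (the room temperature and the bit-erasure rate are illustrative assumptions, not figures from the articles cited):

```python
import math

# Landauer limit: minimum heat released per bit of information erased,
# E = k * T * ln(2), where k is the Boltzmann constant and T the temperature.
k = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                 # assumed room temperature, K

energy_per_bit = k * T * math.log(2)   # joules per erased bit
print(f"{energy_per_bit:.3e} J per erased bit")   # ≈ 2.871e-21 J

# Hypothetical machine erasing 1e21 bits per second: even at the
# theoretical minimum it must dissipate this much power as heat.
power = energy_per_bit * 1e21          # watts
print(f"{power:.2f} W at the Landauer limit")
```

The per-bit energy is tiny, which is why the limit only bites at extremely high operation rates, as the Renner quote above suggests.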

    And yet the brain, though it has as many switches as, if not more than, all the computers, routers, and Internet connections on earth…

    Human brain has more switches than all computers on Earth – November 2010
    Excerpt: They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: …One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.
    http://news.cnet.com/8301-2708.....2-247.html

    Smart neurons: Single neuronal dendrites can perform computations – October 27, 2013
    Excerpt: The results challenge the widely held view that this kind of computation is achieved only by large numbers of neurons working together, and demonstrate how the basic components of the brain are exceptionally powerful computing devices in their own right.
    Senior author Professor Michael Hausser commented: “This work shows that dendrites, long thought to simply ‘funnel’ incoming signals towards the soma, instead play a key role in sorting and interpreting the enormous barrage of inputs received by the neuron. Dendrites thus act as miniature computing devices for detecting and amplifying specific types of input.
    https://www.sciencedaily.com/releases/2013/10/131027140632.htm

    And yet the brain, though it has as many switches as, if not more than, all the computers, routers, and Internet connections on earth, does not have the heat problem that Landauer’s principle would predict:

    Does Thinking Really Hard Burn More Calories? – By Ferris Jabr – July 2012
    Excerpt: So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60 watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient.
    http://www.scientificamerican......d-calories

  8. 8
    bornagain77 says:

    Thus, the brain is either operating on reversible computation principles that no computer can presently come close to imitating (Charles Bennett; IBM), or, as is much more likely, the brain is not erasing information from its memory, as the material computer is required to do during computer operations, because our memories are stored on the ‘spiritual’ level of consciousness rather than on the material level of particles. This argument has been developed more formally here:

    Sentient robots? Not possible if you do the maths – 13 May 2014
    Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. ,,,
    But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “1” if A and B are different and “0” if A and B are the same. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
    Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously hemorrhaging information.”,,,
    Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.
    http://www.newscientist.com/ar.....3LD5ChuqCe
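The information loss Maguire describes can be seen directly from the XOR truth table (using the standard convention that XOR outputs 1 when its inputs differ). A minimal sketch:

```python
from itertools import product

# Enumerate the XOR truth table and group the input pairs by output value.
preimage = {0: [], 1: []}
for a, b in product([0, 1], repeat=2):
    preimage[a ^ b].append((a, b))

# Two distinct input pairs collapse onto each output value, so the gate
# is logically irreversible: the output alone cannot tell you which
# inputs produced it. One bit is lost per operation, and by Landauer's
# principle that lost bit must eventually be paid for as heat.
for out, inputs in preimage.items():
    print(out, inputs)
```

Running this shows each output value has a preimage of size two, which is exactly the “two bits in, one bit out” integration the article refers to.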

    Mathematical Model Of Consciousness Proves Human Experience Cannot Be Modeled On A Computer – May 2014
    Excerpt: The central part of their new work is to describe the mathematical properties of a system that can store integrated information in this way but without it leaking away. And this leads them to their central proof. “The implications of this proof are that we have to abandon either the idea that people enjoy genuinely [integrated] consciousness or that brain processes can be modeled computationally,” say Maguire and co.
    Since Tononi’s main assumption is that consciousness is the experience of integrated information, it is the second idea that must be abandoned: brain processes cannot be modeled computationally.
    https://medium.com/the-physics-arxiv-blog/mathematical-model-of-consciousness-proves-human-experience-cannot-be-modelled-on-a-computer-898b104158d

    In the following paper, a scientist in quantum computation argues that even future quantum computers will not be able to achieve conscious awareness:

    Consciousness Does Not Compute (and Never Will), Says Korean Scientist – May 05, 2015 (based on 2007 paper)
    Excerpt: “Non-computability of Consciousness” documents Song’s quantum computer research into TS (technological singularity, or strong artificial intelligence). Song was able to show that in certain situations, a conscious state can be precisely and fully represented in mathematical terms, in much the same manner as an atom or electron can be fully described mathematically. That’s important, because the neurobiological and computational approaches to brain research have only ever been able to provide approximations at best. In representing consciousness mathematically, Song shows that consciousness is not compatible with a machine.
    Song’s work also shows consciousness is not like other physical systems like neurons, atoms or galaxies. “If consciousness cannot be represented in the same way all other physical systems are represented, it may not be something that arises out of a physical system like the brain,” said Song. “The brain and consciousness are linked together, but the brain does not produce consciousness. Consciousness is something altogether different and separate. The math doesn’t lie.”
    Of note: Daegene Song obtained his Ph.D. in physics from the University of Oxford
    http://www.prnewswire.com/news.....77306.html

    Reply to Mathematical Error in “Incompatibility Between Quantum Theory and Consciousness” – Daegene Song – 2008
    http://www.neuroquantology.com.....ad/176/176

    Of related note: Dr. Egnor, who is a brain surgeon, argues that to confuse a representation of a memory encoded on a material substrate with an actual memory of consciousness is to make a fundamental categorical mistake in basic logic:

    Understanding Memories: Lovely Metaphors Belong in Songs, Not Science – Michael Egnor – December 16, 2014
    “Memories are stored in the brain” is simply unintelligible. Memories aren’t storable. It is akin to the assertion “the square root of 10 is red.” It is not logically or empirically wrong. It doesn’t rise to the level of testability. It is simply incoherent.,,,
    http://www.evolutionnews.org/2.....92071.html

    Recalling Nana’s Face: Does Your Brain Store Memories? – Michael Egnor – December 8, 2014
    To assert that memories are stored in the brain is gibberish. And don’t fall for the materialist invocation of promissory materialism — “It’s just a limitation of our current scientific knowledge, and we promise that science will solve the problem in due time.” The assertion that the brain stores memories is logical nonsense that doesn’t even rise to the level of empirical testability.
    http://www.evolutionnews.org/2.....91821.html

    along that line of thought:

    A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel
    Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,So we need a functioning brain to receive our consciousness into our waking consciousness. And as soon as the function of brain has been lost, like in clinical death or in brain death, with iso-electricity on the EEG, memories and consciousness do still exist, but the reception ability is lost. People can experience their consciousness outside their body, with the possibility of perception out and above their body, with identity, and with heightened awareness, attention, well-structured thought processes, memories and emotions. And they also can experience their consciousness in a dimension where past, present and future exist at the same moment, without time and space, and can be experienced as soon as attention has been directed to it (life review and preview), and even sometimes they come in contact with the “fields of consciousness” of deceased relatives. And later they can experience their conscious return into their body.
    http://www.nderf.org/vonlommel.....sponse.htm

    Quote of note:

    “Either mathematics is too big for the human mind or the human mind is more than a machine.”
    Kurt Gödel

  9. 9
    gpuccio says:

    Dionisio:

    Yes, Chalmers is the “father” of the concept of “hard problem of consciousness”.

    Like Searle with the Chinese Room and Penrose with the Gödel argument, he is one of those brilliant thinkers who have given brilliant reasons why consciousness is beyond objective configurations of matter, and yet cannot really accept the natural consequences of their intuitions.

    Chalmers characterizes his view as “naturalistic dualism”. Well, I like neither the term “naturalistic” nor the term “dualism”.

    My approach is purely empirical: consciousness exists, it is an observable, and it has different properties from objects, which are observables too.

    But objects are observed in consciousness, so in a very important sense consciousness is the mother of all observables.

    The important point is that subjective experiences cannot be derived from configurations of objects: there is simply no reason to believe that, and there is absolutely no objective observation which supports such a silly idea.

    So, we, as scientific reasoners, are left with one option only: to observe both conscious phenomena and objective phenomena, and describe each according to what we observe and to their properties. And to understand the relationships between the two.

    Is that dualism? Maybe. Let’s call it “methodological dualism”.

    The simple fact is: there is no evidence at all that “mental states” (IOWs, subjective experiences) are “caused by physical systems”. Searle believes that, Penrose believes that, Chalmers believes that. But why? Only by faith, unsupported faith in a philosophy, indeed a very bad philosophy. There is absolutely no science, and no good reason, in that faith.

    All those people try to support their faith with the observation that physical systems, like the brain, are related to mental states. But we have always known that, for millennia. Since the first human being felt pain because of a physical wound, we know that body and consciousness are related. There is no doubt about that.

    But, for millennia, all the best human thinkers have considered that as evidence of an interface between consciousness and body. Now, suddenly, the new brilliant philosophers of our age are so sure that, instead, it is evidence that the body generates consciousness.

    But has anything been added to the evidence? Absolutely not. We have known for millennia that a wound in the body generates pain. We know exactly the same thing now, only with some more details.

    All that we know is absolutely compatible with the model of an interface between body and consciousness. Nothing, absolutely nothing in the facts supports specifically the idea that the body generates consciousness.

    Searle, Penrose and Chalmers have given us, and themselves, very good reasons why that is simply impossible. They may not believe their own reasons, but I do. I love the Chinese Room, the Gödel argument, and the concept of the “hard problem of consciousness”. They are strong examples of true reasoning. They are absolute evidence that the idea that configurations of objects can generate subjective experiences is pure, and bad, imagination.

  10. 10
    Dionisio says:

    gpuccio,

    Thank you for the insightful detailed explanation, as usual.

  11. 11
    harry says:

    Consciousness is not a material reality. There isn’t any configuration of matter alone that will bring it about. It will remain a mystery to atheistic, materialistic science.

  12. 12
    Dionisio says:

    gpuccio @9:

    Is this related to the current topic?

    http://www.uncommondescent.com.....ent-620665

  13. 13
    mike1962 says:

    Here’s why claims like his are horse sh*t…

    Nobody can prove they themselves are conscious to another person…

    So, how ridiculous to make a claim that a computer can be conscious.

    Think about it. If you can’t prove you are conscious (whatever that means), how can you prove a computer is conscious?

    Yawwwwn. Going back to bed now.

  14. 14
    gpuccio says:

    mike1962:

    It is absolutely true that:

    “Nobody can prove they themselves are conscious to another person…”

    But…

    1) Each of us has certainty of his own consciousness, because each of us can observe it: for each of us, it is a fact.

    2) We easily infer consciousness in other human beings because of the extreme similarities in appearance and behaviour. It is an inference by analogy, probably the strongest in our map of reality, and I doubt that anyone can really doubt it (including, probably, solipsists).

    3) There are some obvious markers of consciousness that make the inference immediately obvious. The best of them is functional complexity. Functional complexity, as in language, software and machines, is observed only as the product of conscious intelligent beings.

    4) IOWs, the inference of design, and in particular of design in biology, which is the core of ID theory, is extremely similar to, and practically as strong as, the basic inference of consciousness in others. They are both inferences by analogy, and they are both based on universally observed facts.

    5) And yet, while the inference of consciousness in other human beings is accepted practically by all with the highest level of certainty, the inference of design, and in particular of design in biology, is rejected by most (with the happy exception of us IDists 🙂 ).

    That should tell us many interesting things about human nature…

  15. 15
    gpuccio says:

    mike1962:

    On the other hand, I think that the inference that our computers have no subjective experiences is as strong as the inference that other human beings do have them.

    Why? It’s not only because computers are different (and they are). It’s essentially because their behaviour is completely different.

    And the main difference, again, is that computers cannot generate any new original functional complexity.

  16. 16
    gpuccio says:

    Dionisio:

    Yes, of course it is.

    I am not sure I really understand their approach. However, I still have the strong impression that even that approach is not able, even in principle, to differentiate between an interface model and a cause and effect model.

    The simple idea is: as long as we can look only at phenomena in the body which are related to conscious events, it is obvious that we can find some form of mapping: we are looking, by definition, at phenomena generated at the consciousness-body interface.

    Conscious experiences like NDEs, which presumably happen, at least in part, beyond the interface, are a more interesting field of study to prove the interface model as superior to the cause-effect model.

  17. 17
    Dionisio says:

    gpuccio @15:

    […] computers cannot generate any new original functional complexity.

    Can we say that anything computers may do is deeply rooted in original rules, criteria, algorithms and functional complexity initially generated by the intelligent beings that designed the computers?

  18. 18
    Dionisio says:

    gpuccio @16:

    Interesting concepts. Thank you.

    Are the below links somehow related to the current topic?
    http://www.uncommondescent.com.....eel-again/

    http://www.uncommondescent.com.....al-one-is/

    http://www.uncommondescent.com.....-her-mind/

  19. 19
    gpuccio says:

    Dionisio:

    “Can we say that anything computers may do is deeply rooted in original rules, criteria, algorithms and functional complexity initially generated by the intelligent beings that designed the computers?”

    Yes. They can compute new complexity for the original specifications, but they cannot invent new specifications. And even if the complexity can increase, the Kolmogorov complexity remains the same, because the new complexity is computed by the computer itself. So, in a sense, no new specification and no new original complexity is generated.

    Sometimes the algorithms may include new information and complexity from outside. That, too, is part of the original plans, of the original rules.

    Let’s say we have a piece of software designed to compute the decimal digits of pi. As the software runs, new complexity is added to the output, as new digits of pi are computed. But the specification (computing digits of pi) remains the same, and so does the Kolmogorov complexity: once the complexity of the output exceeds the complexity of the software, the Kolmogorov complexity of the output equals the complexity of the software, and it stays there.

    No new specification can be generated: the software is designed to compute digits of pi, and that’s what it will always do. It cannot produce some other, new, unpredicted output.
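The pi example can be made concrete: a short, fixed program emits an unbounded stream of digits, so the Kolmogorov complexity of the output never exceeds (roughly) the size of the program plus a counter. A sketch using the well-known unbounded spigot algorithm for pi (Gibbons, 2006) — not gpuccio's own code, just an illustration:

```python
from itertools import islice

def pi_digits():
    """Unbounded spigot: yields the decimal digits of pi one at a time.

    The state (q, r, t, k, n, l) encodes a linear fractional transform
    that is refined forever; the program text never grows, yet the
    output stream is unbounded.
    """
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is certain; emit it and rescale
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            # not enough precision yet; consume another series term
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 15)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]
```

However many digits are drawn from the generator, the specification ("compute digits of pi") and the program that embodies it never change, which is the point of the comment above.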

    Even if the software is designed to change, it will change according to the original plan, and maybe to new inputs from the environment, or to random variables, which can be part of the algorithm.

    That’s not what happens in conscious design. In design, the subjective experiences of the designer, both of meaning and of purpose, are fundamental in generating the specifications and the complexity. That’s why conscious intelligent designers can easily generate tons of new, original, complex functional information, and non-conscious systems, even if designed, cannot.

  20. 20
    Dionisio says:

    gpuccio:

    Excellent!
    Many thanks!

  21. 21
    Dionisio says:

    gpuccio:

    This seems related to the current discussion, doesn’t it?

    One of the most mysterious phenomena in science is the nature of conscious experience.

    Due to its subjective nature, a reductionist approach has a hard time addressing some fundamental questions about consciousness.

    The material basis of subjective conscious phenomena remains one of the most difficult scientific questions.

    Using category theory to assess the relationship between consciousness and integrated information theory.
    Tsuchiya N, Taguchi S, Saigo H
    Neurosci Res. 107:1-7.
    doi: 10.1016/j.neures.2015.12.007.
    http://www.sciencedirect.com/s.....0215002989

  22. 22
    Dionisio says:

    gpuccio @19:

    […] conscious intelligent designers can easily generate tons of new, original, complex functional information, and non-conscious systems, even if designed, cannot.

    Are the politely-dissenting interlocutors taking note of all the comments you’ve written in this discussion thread?
    Do they understand? Let’s hope so.

    The politely-dissenting interlocutors usually complain about the alleged lack of scientific discussion in this site, but when interesting scientific topics are posted the politely-dissenting interlocutors are not too visible around. However, they seem to like the less scientific discussions more. Why? Really puzzling, isn’t it?

  23. 23
    gpuccio says:

    Dionisio:

    “Really puzzling, isn’t it?”

    Indeed.

  24. 24

    “Miracles are always possible.” True indeed.

  25. 25
    Dionisio says:

    gpuccio:

    Here’s another related paper:

    Phenomenal consciousness is currently the target of a booming neuroscientific research program, which intends to find the neural correlates of consciousness (NCCs).

    […] it is not obvious what makes something fall under the term “NCC.”

    The neuroscience of consciousness searching for NCC2.0s can in principle progress like any other science: by competing in the game of predictive fit.

    A Deeper Look at the “Neural Correlate of Consciousness”
    Sascha Benjamin Fink
    Front Psychol. 7: 1044.
    doi: 10.3389/fpsyg.2016.01044

  26.
    gpuccio says:

    Dionisio:

    Well, at least that paper says, at a certain point:

    “We must take the following two options seriously. First, consciousness may cross-cut any reasonable way to type the natural world such that there is no ontological one-to-one-type-type-mapping between neural and phenomenal types. This would be the case if, for example, strong 4E-accounts (according to which consciousness is extended, embedded, enactive, or embodied) are right. Then, consciousness has no fixed neural basis, and therefore there is no one neural type corresponding to any given phenomenal type. Second, there might be a bijective type-type-mapping, but the neural types could be so weirdly constructed and complicated that we would likely never find the right way of typing neural events, making the type-type-mapping epistemically impenetrable. Neither option can and should be excluded a priori.”

    Emphasis mine.

    OK, that is an acceptable scientific approach, in principle.

    However, there is a basic problem with this kind of reasoning. Any inquiry about neural correlates of consciousness (NCC) seems to be an inquiry about mapping: how some specific experiences of consciousness map to specific neural states.

    That is interesting, but it still seems to be only a deeper approach to the easy problem of consciousness.

    Some basic objections apply:

    1) Even if we find a specific mapping of certain experiences to certain neural correlates, that does not demonstrate any connection to the basic problem of consciousness: why do those configurations become subjective experiences? Subjectivity is the basic property of all conscious experiences, not only of some. That is the essence of the hard problem of consciousness. Finding neural correlates for some specific experiences seems to be more about the content of those experiences than about their subjectivity.

    2) Even states of consciousness are indeed specific contents. One of the basic errors that can be made in investigating consciousness is to consider some states as more “conscious” than others. In general, most tend to believe that “awareness” in the waking state is a synonym for consciousness. That is not true.

    When we dream, for example, our dreams are as subjective as our daily experiences. Consciousness is as present in dreams as in daily experiences. And the simple assumption that, when we sleep but are not dreaming, consciousness is interrupted, is completely arbitrary. As far as we know, we could simply be conscious of other things, which are not retained in daily awareness. Our consciousness, while remaining subjective, could simply be aware of less formal contents.

    And special kinds of experiences, near-death experiences (NDEs) for example, are very different from daily awareness, and more difficult to define and describe, being less “common” than daily experiences or dreams. Yet they are certainly real, and certainly subjective.

    The only property really common to all conscious experiences is one and only one: their subjective nature, the presence in all of them of an “I” which refers all the different contents of the experience to itself. IOWs, in all conscious experiences, and not only in daily awareness, there is always a perceiver and something that is perceived.

    Very simply, that is the hard problem: why is there a perceiver? Why is the content of the experience, whatever its form, experienced? Why and when does an objective configuration, whatever it is, become a subjective representation?

  27.
    Dionisio says:

    gpuccio:

    Very simply, that is the hard problem: why is there a perceiver? Why is the content of the experience, whatever its form, experienced? Why and when does an objective configuration, whatever it is, become a subjective representation?

    Those seem very difficult questions to answer on the basis of what science has revealed to us.

    Perhaps some easy-to-understand illustrations could help?

    Could the example linked below somehow relate, at least partially and in a less rigorous way, to the essence of the text quoted above?

    http://www.uncommondescent.com.....ent-619696

  28.
    Dionisio says:

    Just noticed that, with almost 30 comments posted in this thread, not one contains a counterargument from the politely-dissenting interlocutors.
    That’s quite a strong sign that they’re running out of steam. Poor things. Maybe that’s why they seem to prefer other venues to vent their frustration? Don’t blame them. Nobody wants to be in a den full of lions. 🙂

  29.
    Dionisio says:

    gpuccio @19:

    Since this thread is practically inactive by now, I’ll post this off-topic anecdotal comment regarding your timely mention of Kolmogorov complexity.

    I recall visiting the university where Professor Kolmogorov worked a couple of times, though I never met him personally. I had heard about “Kolmogorov complexity” in the information theory lectures at my own school, in another part of the city.
    Who would have thought back then that, 40 years later, an Italian medical doctor would remind me of Kolmogorov again, this time in relation to biology!
    Thank you for your comments @9, 16, 19 and 26.
    I hope many anonymous visitors read this thread, especially your insightful comments.
    Many thanks, dear doctor!

  30.
    gpuccio says:

    Dionisio:

    Many thanks to you! 🙂

    As a medical doctor with strange interests, I feel really flattered. 🙂
