
Deep Blue Never Is (Blue, That Is)


In the comment thread to my last post there was a lot of discussion about computers and their relation to intelligence.  This is my understanding of computers.  They are just very powerful calculators, but they do not “think” in any meaningful sense.  By this I mean that computer hardware is nothing but an electro-mechanical device for operating computer software.  Computer software in turn is nothing but a series of “if then” propositions.  These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level.  This is a basic Turing Machine analysis.

This does not necessarily mean that the output of computer software is predictable.  For example, the “then” in response to a particular “if” might be “access a random number generator and insert the number obtained in place of the variable in formula Y.”  “Unpredictable” is not a synonym for “contingent.”  Even if an element of randomness is introduced into the system, however, the way in which the computer will employ that random element is determined.
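To make the distinction concrete, here is a minimal illustrative sketch (in Python; nothing like it appears in the post itself): the program consults a random number generator, so its output is unpredictable, yet every step of how it uses that number is fixed in advance.

    import random

    def respond(stimulus):
        # The "then" branch consults a random number generator...
        if stimulus == "condition met":
            n = random.randint(1, 100)   # unpredictable value
            return n * n                 # ...but how the value is used is fully determined
        return 0

    print(respond("condition met"))      # the output varies; the rule never does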

Now the $64,000 question is this:  Is the human brain merely an organic computer that in principle operates the same way as my PC?  In other words, does the Turing Machine also describe the human brain?  If the brain is just an organic computer, even though human behavior may at some level be unpredictable, it is nevertheless determined, and free will does not exist.  If, on the other hand, it is not, if there is a “mind” that is separate from, though connected to, the brain, then free will does exist.

This issue has been debated endlessly, and I refer everyone to The Spiritual Brain for a much more in-depth analysis of this subject.  For my purposes today, I propose to approach the subject via a very simple thought experiment.

First a definition.  “Qualia” are the subjective responses a person has to objective experience.  Qualia are not the experiences themselves but the way we respond to the experiences.  The color “red” is the classical example.  When light of wavelength X comes into my eye, my brain tells me I am seeing the color red.  The quale (singular of “qualia”) is my subjective experience of the “redness” of red.  Maybe the “redness” of red for me is a kind of warmth.  Other qualia might be the tanginess of a sour taste, the sadness of depression, etc.

Now the experiment:  Consider a computer equipped with a light gathering device and a spectrograph.   When light of wavelength X enters the light gathering device, the spectrograph gives a reading that the light is red.  When this happens the computer is programmed to activate a printer that prints a piece of paper with the following statement on it: “I am seeing red.”
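As a minimal sketch of the device described above (in Python, with hypothetical function names standing in for the hardware the experiment imagines):

    RED_MIN_NM, RED_MAX_NM = 620, 750    # rough wavelength range for "red" (an assumption)

    def read_wavelength_nm():
        # Stand-in for the light gathering device plus spectrograph.
        return 680.0                     # hypothetical reading at sunset

    def check_for_red():
        wavelength = read_wavelength_nm()
        if RED_MIN_NM <= wavelength <= RED_MAX_NM:
            print("I am seeing red")     # stand-in for activating the printer

    check_for_red()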

I place the computer on my back porch just before sunset, and in a little while the printer is activated and prints a piece of paper that says “I am seeing red.”

 Now I go outside and watch the same sunset.  The reds in the sunset I associate with warmth, by which I mean my subjective reaction to the redness of the reds in the sunset is “warmth.”

1.  Did the computer “see” red?  Obviously yes.

2.  Did I “see” red?  Obviously yes.

3.  Did I have a subjective experience of the redness of red, i.e., did I experience a quale?  Obviously yes.

4.  Did the computer have a subjective experience of the redness of red, i.e., did it experience a quale?  Obviously no.

Conclusion:  The computer registered “red” when red light was present.  My brain registered “red” when red light was present.  Therefore, the computer and my brain are alike in this respect.  However, and here’s the important thing, the computer’s experience of the sunset can be reduced to the functions of its light gathering device and hardware/software.  But my experience of the sunset cannot be reduced to the functions of my eye and brain.  Therefore, I conclude I have a mind which cannot be reduced to the electro-chemical reactions that occur in my brain.

193 Replies to “Deep Blue Never Is (Blue, That Is)”

  1. 1
    CN says:

    Ok, devil’s advocate coming: Why could you not view the brain as a hugely complex macro-kernel (computer speak) that performs a zillion computations for some simple stimuli, and gives the appearance of qualia (and free-will)?

    I guess a materialist is forced to come to this conclusion? Materialists: please comment:-)

  2. 2
    Reed Orak says:

    I’m not a materialist, but maybe I can offer a slightly different perspective.

    It’s clear that neurons do something that is, in some ways, comparable to computation. Whether we want to call that computation is a matter of mere semantics. It is also clear that neurons are very different from silicon logic gates–for example, in that they are capable of spontaneously forming useful connections. This much, I think, is not controversial.

    What should also be uncontroversial: the brain is responsible for at least some (if not most or all) mental processes. Visual processing and recognition, for example, is clearly a biological brain function.

    Now, maybe there are some mental processes (by which I mean to include consciousness, experience of qualia, etc.) that are not the result of brain activity. That’s not a hypothesis that we should take lightly. Right now there is no candidate for a mechanism for a dualistic mind/brain connection, no detailed explanation of what the mind is, what it does, how it works, how it comes to be, or really anything other than a list of hard to explain mental phenomena that it may be responsible for.

    Again, I’m not a materialist. I am not in any way committed to the proposition that “matter and energy are all that exist” or anything like that. But I do expect that if someone is going to claim that something exists (e.g., an immaterial mind), then they had better have some idea of what it is and how it works, and those ideas had better be instructive in some way.

  3. 3
    Mapou says:

    Barry, while I agree with you that our subjective awareness of color sensations (and other types of qualia) should be enough to convince anybody that our minds are more than just a bunch of neurons, I’m afraid that this is not enough to convince the materialists and agnostics among us.

    I think that a potentially more successful avenue of inquiry is to find something that the human mind can do that cannot be explained with neural networks alone. It must be something that can be quantified experimentally, i.e., objectively. Lately, I am of the opinion that human episodic memory is biologically implausible. This is especially true with regard to autistic savants who can instantly memorise an entire complex musical piece and play it back flawlessly. Some savants can remember every last detail of their lives, even what they were thinking and feeling at any given moment. This would mean that the brains of these savants are recording their own state, moment by moment! Having studied AI and neural networks for many years, I can assure you that this is completely biologically impossible. The reason is that the size of the brain makes no difference since it must record its own state over and over.

    Of course, the materialists will always fall back on the old tired but worthless argument that we don’t yet know everything that is going on in the brain, therefore we cannot draw a definite conclusion. I think we already know enough about how neurons work and how many neurons are in the brain to arrive at a definite conclusion that the brain could not possibly retain all this information.

  4. 4
    BarryA says:

    CN, let’s approach your question from the other side. If Deep Blue’s programmers put in a sub-routine that said: “If red light is sensed, feel warmth” would that give Deep Blue a subjective experience of warmth when it saw the sunset? Obviously not.

  5. 5
    Gerry Rzeppa says:

    Barry –

    It’s interesting, I think, how everyday language assumes your position. If, for example, you lost a limb, God forbid, we might describe the event by saying, “Barry lost his arm in an accident” – indicating that we still consider (what’s left of) “you” to be the same old Barry.

    And even in a case of severe head trauma, God forbid, we would probably find ourselves saying things like, “Barry’s lost his ability to communicate” or perhaps even “Barry’s lost his mind” – but the implication would still be that the Barry we know/knew and love/loved is still around… somewhere.

  6. 6
    aiguy says:

    BarryA,

    Did the computer have a subjective experience of the redness of red, i.e., did it experience a quale? Obviously no.
    You go wrong a couple of different ways here, I think.

    1) First, you state the answer as “obviously no”, and you could ridicule anyone who disagreed by saying, “Oh, you really think my Dell PC has qualia? Hah! Then you better not turn it off – that would be murder!” and so on. But as obvious as it seems, your answer is an intuition rather than a principled response. So the interesting questions are:
    a) Are there any empirically-grounded principles which can answer that question? (NO)
    b) Would our intuitions hold up under different circumstances (i.e. if we managed to create a very human-seeming robot)? (NO)

    2) It is not at all clear what the relationship is between qualia and design abilities. For all we know scientifically, the “intelligent designer” that IDers suppose is responsible for life might have every mental ability that human beings have (and to a far greater degree, since living things are more complicated than what we can design), but lack qualia!

  7. 7
    aiguy says:

    Mapou,

    This would mean that the brains of these savants are recording their own state, moment by moment! Having studied AI and neural networks for many years, I can assure you that this is completely biologically impossible. The reason is that the size of the brain makes no difference since it must record its own state over and over.

    This would be a very interesting result to demonstrate – you should publish it. It does, however, entail that you understand how memory is stored biologically, which most of us cognitive scientists agree is not the case. (I would rethink your conception of episodic memory as lossless storage of complete state).

    Of course, the materialists will always fall back on the old tired but worthless argument that we don’t yet know everything that is going on in the brain, therefore we cannot draw a definite conclusion.

    Actually, a materialist would argue that we do know that minds reduce to brains. I myself am not a materialist, and I’m honest enough to admit we do not know anything of the sort, either way. Which of course is a perfectly valid argument that a theory like ID, which relies on the metaphysical supposition of dualism, cannot be considered scientific.

  8. 8
    Jason Rennie says:

    “It is not at all clear what the relationship is between qualia and design abilities. ”

    I suspect you would find that it is not possible to be intelligent in any recognizable sense and not have qualia.

    Certainly an agent will by definition have “something it is like to be that agent”. But that “something it is like to be” is an essential part of what Qualia are.

  9. 9
    aiguy says:

    Hi Jason,

    I suspect you would find that it is not possible to be intelligent in any recognizable sense and not have qualia.
    I suspect you’re wrong. But as the saying goes, if suspicions were theories, we’d all be scientists.

    Certainly an agent will by definition have “something it is like to be that agent”. But that “something it is like to be” is an essential part of what Qualia are.

    You can define your terms however you’d like; if definitions were theories, we could simply define the answer to any question. But they’re not.

    In the end, there is no justification for imagining that whatever the cause of life was, it experienced qualia. Thus, if you define intelligence as entailing qualia, then there is no justification for imagining that the cause of life was “intelligent” by that definition.

  10. 10
    Emkay says:

    “Conclusion: The computer registered “red” when red light was present. My brain registered “red” when red light was present. Therefore, the computer and my brain are alike in this respect…”

    And alike also in that both were intelligently designed and intelligently pre-programmed to arrive at a specific response to a specific stimulus?

  11. 11
    Gerry Rzeppa says:

    aiguy says:

    “In the end, there is no justification for imagining that whatever the cause of life was, it experienced qualia.”

    I reply:

    Unless one holds that a cause must always be greater than its effect: a Creator without the experience of qualia could not impart that attribute to His creatures.

  12. 12
    GilDodgen says:

    This is a subject about which I know something, having spent countless thousands of hours over the past 19 years programming computers in an attempt to simulate the human reasoning process in games-playing AI (artificial intelligence). I can speak with some authority on this subject, having won both silver and gold medals in two international AI games-playing competitions in two different disciplines.

    These attempts have been both spectacularly successful and spectacularly unsuccessful. By combining the brute force of a computer (a machine with two CPUs performing a billion integer calculations per second, each accessing two gigabytes of RAM, over a period of nearly two months, nonstop, 24/7) with some highly sophisticated, intelligently designed algorithms, the program was able to solve some problems that no human has been able to solve. (See here.)

    On the other hand, the programs are completely stupid. By that I mean that they have no capacity to learn and modify their “thinking” process on their own, as do the humans who create them.

    I have come to the conclusion that life, consciousness, and creative intelligence represent the three most interesting phenomena in the universe, because they are all highly negentropic — and all attempts to explain them away in materialistic terms result in logical absurdity and self-refutation.

  13. 13
    WinglesS says:

    I think the discussion is about whether our feelings are in any way a reliable indicator that man and machine are fundamentally different at some level. I think feelings and emotions are the basis of many such discussions, for example over the existence of the soul.

    While I believe it can’t be known if machines can be programmed to ‘feel’ in any meaningful sense of the word, nor is it possible to prove that our feelings are purely illusory, I’ll say that it is dangerous for man to assume that they are. For if suffering, feelings and other subjective experiences are assumed to be purely the result of some form of programming and are purely illusory, we could one day rationalise all kinds of crimes against humanity under the (possible) pretense that man and machine are fundamentally alike, and that we can treat a human being no differently from a car, computer or briefcase.

  14. 14
    aiguy says:

    Gerry,

    Unless one holds that a cause must always be greater than its effect: a Creator without the experience of qualia could not impart that attribute to His creatures.

    I don’t think this principle holds up very well. If it did, people couldn’t build airplanes 🙂

  15. 15
    aiguy says:

    Gil,

    Your checkers program couldn’t learn, but other programs can; see for example http://www.aaai.org/AITopics/html/machine.html
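    As a rough illustration of what “learning” means here (a toy perceptron-style update in Python; purely a sketch, not any of the programs linked above): the program adjusts its own parameters whenever its prediction is wrong.

        # Learn the logical AND function from examples by error-driven updates.
        examples = [((1, 1), 1), ((0, 1), 0), ((1, 0), 0), ((0, 0), 0)]
        w = [0.0, 0.0]
        bias = 0.0

        for _ in range(10):                      # a few passes over the data
            for (x1, x2), target in examples:
                prediction = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
                error = target - prediction      # the program detects its own mistake...
                w[0] += 0.1 * error * x1         # ...and modifies its future behavior
                w[1] += 0.1 * error * x2
                bias += 0.1 * error

        print(w, bias)                           # the learned weights now implement AND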

    In any event, if you’d like to define intelligence as requiring “the capacity to learn and modify their ‘thinking’ processes on their own”, then it is very clear that ID can never demonstrate that the cause of life was intelligent, since we have no evidence that it was capable of learning. (And of course for those who believe the Designer was the God of the Bible, one would suppose He didn’t learn anything, since He has always been omniscient!).

  16. 16
    Mapou says:

    aiguy wrote:

    This would be a very interesting result to demonstrate – you should publish it. It does entail that you understand how memory is stored biologically however, which most of us cognitive scientists agree is not the case. (I would rethink your conception of episodic memory as lossless storage of complete state).

    Well, that’s just it. It does not matter how memory is encoded biologically in this instance. If the brain of a savant can record its entire state at every instant, it needs another exact copy of his/her brain to do so. I have read that some savants have demonstrated perfect photographic memory in which they can recall every last detail of scenes that they experienced years ago. If this is true, it flies in the face of the materialist hypothesis.

  17. 17
    Gerry Rzeppa says:

    aiguy says:

    I don’t think this principle [that a cause has to be greater than its effect] holds up very well. If it did, people couldn’t build airplanes.

    I reply:

    That’s a common misconception (and a rather materialistic one, at that). You’re assuming that the idea of an airplane is something less than a particular physical instance of one. This, obviously, isn’t true. What was in the Wright Brothers’ heads was clearly far superior to the planes they actually built and flew. And it was prior to them, as well.

    We all know this from personal experience; it’s one of the reasons we’re always somewhat disappointed with our creations. We can never quite achieve what we imagine, and that’s evidence that “what we imagine” is the greater of the two.

  18. 18
    aiguy says:

    Mapou,

    It does not matter how memory is encoded biologically in this instance. If the brain of a savant can record its entire state at every instant, it needs another exact copy of his/her brain to do so. I have read that some savants have demonstrated perfect photographic memory in which they can recall every last detail of scenes that they experienced years ago. If this is true, it flies in the face of the materialist hypothesis.

    There is no reason to think that every detail of our visual memory would entail storing the entire physical state of the brain at every “instant”. Do you think the entire brain does nothing but store visual images? Of course not. When a digital video camera records 1000 “instants” of image, does the camera require 1000 exact copies of itself? Of course not. And so on.

    But much more importantly, it is very clear that however brains store memories, it is very different from how computers do it, so making declarations about how much data the brain should be able to store is simply groundless. Imagine, for example, that the brain used “non-linear holographic quantum interference effects” (whatever those are) to store memory. Would that “fly in the face of the materialist hypothesis”? No. Would the brain be able to store as much data as we see in savants? Who knows???

    It sounds like you are very intent on finding a way to fly in the face of materialists, but you can’t just pretend that we can estimate how much data a brain should be able to hold, and how much data an “instant” of visual memory should require, without knowing how brains work.

  19. 19
    aiguy says:

    Gerry,

    Ok, let’s get this straight. On one hand, you argue that since people can experience qualia, the creator of people must be able to experience qualia, on the grounds that “a cause must always be greater than its effect”. I point out that since airplanes can fly, then your rule would entail that the creator of airplanes must also be able to fly. But people can’t fly, so your rule doesn’t hold up.

    You respond that I’m laboring under a materialistic misconception? I don’t see it, sorry. Nor do I understand in what sense we can judge if an “idea” is less than or greater than a “physical instance”; that sounds like a category error to me.

    In any event, my point here is just to say that ID can’t point to any evidence for the conjecture that the cause of life felt qualia, and I think that point stands.

  20. 20
    Anton says:

    Aiguy,

    Thanks for googling “machine learning” for us. Please expand on how such programs and devices came into being through purely random natural events refined by natural selection.

  21. 21
    aiguy says:

    Hi Anton,

    Thanks for googling “machine learning” for us.

    Actually I work in AI. I just wanted to correct any misunderstandings that might have arisen from Gil’s post about how computers can’t learn, and also point out how “learning” isn’t a very good criterion for IDists to associate with the concept of “intelligent agency”.

    Please expand on how such programs and devices came into being through purely random natural events refined by natural selection.

    We’re discussing here whether or not computers can be considered intelligent, and not how they come to exist in the first place. These are two completely separate questions. Remember – you don’t need to know the origin of the designer to establish if it is intelligent or not (sound familiar?)

    To say that computers can’t be intelligent because they are themselves intelligently designed doesn’t seem to be a good strategy for ID proponents. After all, you believe that you were intelligently designed, right? Does that mean that you are not really intelligent either?

    So again, we find that there is simply no empirical method to establish whether or not the cause of life is intelligent in any meaningful sense of the word (e.g. if it is conscious, experiences qualia, can learn, and so on).

  22. 22
    jjcassidy says:

    The problem that I’ve always seen for materialists is that if the brain is enough like a computer, then we get stuck on Rice’s theorem. Let’s imagine that the function of the brain were computable by a Turing Machine.

    Assuming we could come to *A* conclusion (i.e. a thing “found” in the Eugenie Scott sense), then the Turing Machine would have to halt. But since we do not know the coding of the TM, it is as close as we can get to an arbitrary TM number, and by Rice’s theorem, whether an arbitrary TM has a given nontrivial property is undecidable.

    We have absolutely no guarantee that a Turing Machine, however complex, can understand itself and how it arrives at and computes its final states. There is no end to computability problems. There is the problem of whether the process can halt in modeling the TM. There is the problem of whether we could halt in judging the distance from the model to the real behavior.

    Turing’s definition allows us only one escape from this blind cycle: insight. Turing defined his machine as emulating the process of manipulating symbols on paper by an average person who has no special insight into the process he’s trying to solve.

    Simple program:

    1: let x = input
    2: if x = 3,000,000,001 goto 2
    3: return x * x

    This approximates the function x*x for most values. If we randomly shoot numbers at it, it might seem for all the world like it was the function x^2. We might never find out that it is not that function and is undefined, or non-halting, on 3 billion + 1. Now for the perfect algorithm x^2, there are infinitely many approximate functions. Sequentially, at whatever point we decide that it’s good enough, there are an infinite number of TMs that take more loops.
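    A runnable rendering of that pseudocode, as a minimal Python sketch (the constant is the same “3 billion + 1” value; everything else is illustrative):

        def almost_square(x):
            while x == 3_000_000_001:        # loops forever on exactly one input
                pass
            return x * x

        # Testing any finite sample of other inputs "proves" nothing about that gap:
        for x in range(1000):
            assert almost_square(x) == x * x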

    If the brain is a computable algorithm, then its ability to examine a computable algorithm (say, itself) is only possible if it doesn’t loop forever on other input, and if it doesn’t loop forever on itself when examining that input.

    “Thinking” is either capable of exceeding Turing bounds or it is not. If it is, then no machines built on the Turing model can do what we can (although they might approximate it in ranges); If not, it is a problem to understand the least of what we can understand about our own processes by just pushing around symbols in a formalized fashion. Brain science is only ever conducted on the optimistic hope that we are the type of machine that can understand our mechanism.

    Even proven competency is no test of this feat. As I suggested before, if you only put in numbers up to 3 billion, then this algorithm has proven its competency to compute x^2 3 billion times over. For any possible algorithm to compute the function of the human brain (were that possible) there would be several infinite varieties that would fail at it.

    Natural selection, i.e. proven competency just does not have–and is not reputed to have–that kind of accuracy, to select for a machine that understands itself. (The survival value of this is questionable after all, since a majority of species survive without coming near to it.)

    A truism in software design is “Every program has bugs”. That’s true for every designed program. Programs themselves undergo a type of selection. If they don’t work and their authors can’t make them work, the market steps over their bones on the way to a better program. Just because it succeeds at what it has been called on to do (in a majority of cases) does not mean that it will succeed at any task “like it”.

    However, without analogy to computers and computability functions we lose most of our ability to model information processing, and much of our ability to make predictions. What does “not really a Turing machine” look like? Can it be formulated by a symbolic process? If so, then it should be TM-computable. If it can’t, how could the skeptics ever accept it without proof?

  23. 23
    BarryA says:

    aiguy writes “I point out that since airplanes can fly, then your rule [i.e., ‘a cause must always be greater than its effect’] would entail that the creator of airplanes must also be able to fly. But people can’t fly, so your rule doesn’t hold up.”

    ai, your reasoning is unsound. Your answer assumes that “greater” means “able to perform a particular task in a superior manner.” In other words, you assume that an airplane is greater than an airplane designer because the airplane designer has no ability to fly.

    But Gerry’s example does not define “greater” this way. When Gerry used the word “greater,” he meant “generally superior.”

    The smallest baby is almost infinitely greater than the most sophisticated airplane ever built. An airplane can do one thing well, fly. A baby can grow up to change the world.

  24. 24
    ellazimm says:

    Personally, I find acceptance of the ideas that “the mind is what the brain does” and that there is no free will the hardest part of the purely NDE paradigm. You find yourself supporting a theory but acting like it’s not true. Tricky.

  25. 25
    jjcassidy says:

    “To say that computers can’t be intelligent because they are themselves intelligently designed doesn’t seem to be a good strategy for ID proponents.”

    I think that’s a misstatement of the ID position, as I’ve seen it. You are saying that computers have become “intelligent” despite the knowledge or intention of their designers. It is quite a different thing to say that we are intelligent because we could be designed to be.

    Computers aren’t called “thinkers”, they compute. A putative subset of all that is involved in “thinking”. They were designed to compute fast with input algorithms and a good degree of rigidity to the specifications of the algorithms, without distraction or boredom so common to humans when they do some same laborious computations.

    You seem to be saying that despite our ability to know what they are doing–despite that we designed them as sort of meta-hashtables–they have become like us in our ability to think. And we shouldn’t engage in skepticism without jeopardizing the idea of a designer who (I would guess) knows we can think (even if not as well as it) because it designed us not to do its taxes and remember tax laws–but often to think.

    I don’t see the conflict in the two models. There is no suggestion that we’ve taken on the ability the designer did not consciously give us, or that as a result we have that ability greater than our “purpose”. In fact the order of competency can easily be Designer >> humans >> computers. You don’t have to believe Humans == computers in order to claim Designer >> humans.

    I can’t imagine a reasonably intelligent ID proponent making the statement “Designed subjects cannot be intelligent”.

    I can’t believe how many “scientists” cannot see the blunder that is Turing’s test: If enough people have the subjective judgment that they are talking to a human, then the machine thinks. Forget that anywhere else the atheist/materialist/reductionist would blow the whistle, throw the yellow flag, and scream “ad Populum!! Entire team disqualified (as morons)!” in almost any other arena. Forget what disdain many humans have for the judgments of their fellow man or their gullibility. Not only that, because we had thrown a shadow on our idea of discriminating between thinking and non-thinking, you can’t disprove it’s thought (ad ignorantiam, in other cases).

    Turing’s test is methodologically stillborn, were gullibility, subjective judgment and Ignorantium leveraged to prove anything else. Why, for example, wouldn’t humans–were they to have freewill–be the best authority on freewill? Is that their judgment? Are they convinced? Can you prove otherwise? => Freewill.

    Gullibility is the reason that we must establish freewill clinically. We cannot trust our subjective judgment (outside of Turing’s test, that is.) Thus we cannot see anything in the world unless it be formally established as a fact in Science.

  26. 26
    kairosfocus says:

    All, esp JJ:

    Let us remember that freedom of choice based on deciding what makes best sense is a condition of having a credible mind.

    That’s a big part of why Sir Francis Crick’s infamous comment fails.

    Namely:

    The Astonishing Hypothesis is that “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

    Just suppose, Sir Francis was consistent and had prefaced his Nobel Prize speech, thusly:

    “I,” [Sir Francis Crick, and my] joys and [my] sorrows, [my] memories and ambitions, [my] sense of personal identity and free will [including the expression of same in this Nobel Prize acceptance speech], are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

    See what I mean? [With due acknowledgements to Prof Philip Johnson.]

    GEM of TKI

  27. 27
    aiguy says:

    BarryA,

    ai, your reasoning is unsound. Your answer assumes that “greater” means “able to perform a particular task in a superior manner.” In other words, you assume that an airplane is greater than an airplane designer because the airplane designer has no ability to fly. But Gerry’s example does not define “greater” this way. When Gerry used the word “greater,” he meant “generally superior.” The smallest baby is almost infinitely greater than the most sophisticated airplane ever built. An airplane can do one thing well, fly. A baby can grow up to change the world.

    Actually, my interpretation was taken from his argument rather than the word “greater”: Gerry argued that whatever designed a conscious thing must itself be conscious:

    X has property P
    X was created by Y
    Therefore Y has property P

    But there is nothing to commend this reasoning at all. And the claim that the cause of something is necessarily “generally superior” to its effect suffers from more than a bit of ambiguity and subjectivity, don’t you think? All you are saying is that by your judgement, humans are “generally superior” to anything else. Was Adolph Hitler really “generally superior” to the paintings he created? I prefer the paintings.

  28. 28
    aiguy says:

    Hi jjcasidy,

    The problem that I’ve always seen for materialists is that if the brain is enough like a computer, then we get stuck on Rice’s theorem. Let’s imagine that the function of the brain were computable by a Turing Machine.
    You’ve equated “materialism” with “Turing-machine functionalism”, but they are not the same. John Searle, for example, is fully materialist but argues (rather effectively) against functionalism.

    Even if the issue was Turing-computable functionalism, I don’t think it’s possible to show that the brain is not subject to the same types of limits we find in automata theory; this is the basis for most objections to the Godelian anti-functionalist arguments (cf Lucas or Penrose). I think they apply to your examples as well (but I’m not sure – I didn’t follow everything you said).

    You are saying that computers have become “intelligent” despite the knowledge or intention of its designers.
    No, I said that we can judge whether or not computers are intelligent without considering their origin.

    It is quite a different thing to say that we are intelligent because we could be designed to be.
    That could be true, but my point was that we can’t know if whatever caused life felt qualia, or was conscious, or could learn, and so on.

    Computers aren’t called “thinkers”, they compute.
    It’s dangerous to mistake linguistic convention for fact statements. This morning my wife told me her car “didn’t want to start”, but I don’t think the car cared one way or the other.

    A putative subset of all that is involved in “thinking”. They were designed to compute fast with input algorithms and a good degree of rigidity to the specifications of the algorithms, without distraction or boredom so common to humans when they do some same laborious computations.
    Computers learn, perform inductions and deductions, interpret and generate language… if you want to call this “computing” instead of “thinking” you can, but the difference is a matter of definition rather than discovery.

    You seem to be saying that despite our ability to know what they are doing–despite that we designed them as sort of meta-hashtables–they have become like us in our ability to think.
    No. First, they are not “meta-hashtables” 🙂 Second, I do not believe that our computers are very much like us in terms of mental abilities at all! Not even remotely close by any reasonable standard!

    I can’t imagine a reasonably intelligent ID proponent making the statement “Designed subjects cannot be intelligent”.
    Right. I was objecting to the idea that we ought to deny that computers are intelligent simply because they are themselves designed by humans, which Anton appeared to be arguing.

    I can’t believe how many “scientists” cannot see the blunder that is Turing’s test: If enough people have the subjective judgment that they are talking to a human, then the machine thinks.
    The whole point of Turing’s test was that there are no objective criteria for establishing other minds! If there were, Turing wouldn’t have suggested the imitation game of course, and ID could become a scientific discipline!

  29. 29
    DaveScot says:

    Barry

    The random elements may in fact be employed to build a random response where no two responses ever have a reasonable chance of being alike. Things like this are done all the time in computer games to make computer generated opponents and strategies unique and unpredictable. I’d argue that how the random number is employed in that case is undetermined.

  30. 30
    Matteo says:

    Well, here’s the way I look at it: there can be no explanation that our consciousness “emerges” because our brains are computers. No known computer needs to be conscious to carry out its function (quite the contrary), so how can positing that our minds are essentially analogous to a system in which consciousness is the one thing not needed to explain its operation possibly “explain” consciousness?

  31. 31
    aiguy says:

    Hi kairosfocus,

    You appear unduly enamored of Johnson’s quip about Crick. This reference to “nothing buttery” is an attack on greedy reductionism; I have never met anyone who adheres to this (have you?).

    It has nothing to do with the topic here, however, which involves how we might ascertain if things that are generally unlike humans (such as computers, or Designers of Life) have minds like humans (experience qualia, are capable of learning, and so on). So far, the answer seems to be that there is no empirical way to know, which bodes ill for ID as a scientific enterprise.

  32. 32
    DaveScot says:

    Mapou

    Savants aren’t recording the entire state of the brain over and over. They record selected portions of it and even these are highly generalized. The brain is composed of trillions upon trillions of state switches and God only knows how many analog potentials. Only a tiny fraction of them are needed to record any given meta-state such as the contents of a book. A so-called photographic memory is unusual, but upon analysis of what is actually perfectly recorded versus the potential capacity of a state machine with that many switches, it’s not a physical impossibility that requires some external storage capacity. Savants typically suffer some deficiency in trade for their remarkable talent. In other words, there’s no such thing as a free lunch. 🙂

    Something that would unambiguously establish some external component of mind apart from brain would be holding accurate information about reality that had never been acquired through the senses. Say someone that had never seen the southern hemisphere sky at night could draw an accurate map of the constellations only visible from the southern hemisphere. That would be unexplainable by brain alone.

  33. 33
    DaveScot says:

    Barry

    What if the computer was programmed to turn down the air conditioner when it sensed a sunset?

    Subjective experiences are just that – subjective – and can only be claimed or described by the subject. You can’t say that the computer has no subjective experience since you can only speak with authority about your own subjective experience and not that of others.

  34. 34
    Gerry Rzeppa says:

    aiguy says:

    Was Adolph Hitler really “generally superior” to the paintings he created? I prefer the paintings.

    I reply:

    Of course he was. He walked better, talked better, and painted better than his paintings. And he was also morally superior to his paintings – in spite of your preferences – because unlike those paintings, he was capable of moral choice.

    aiguy says:

    Gerry argued that whatever designed a conscious thing must itself be conscious.

    I reply:

    Allow me to be more precise. My argument is actually that “whoever designs a thing with such-and-such a property must understand that property; must pre-possess the idea behind that property.”

    I don’t have to be able to fly to build an airplane, but I do have to possess an understanding of flight. And I contend that that understanding of flight – or more precisely, the person in possession of that understanding – is “generally superior” to flight itself because with that understanding, he is capable of producing any number of flying machines.

  35. 35
    DaveScot says:

    My take on higher brain function is that it’s a reality modeling engine that is constantly updated and improved by sensory experience. Actions are inspired by running “what if” scenarios through the reality modeling engine, evaluating the model results, and then choosing the action that the model indicates produces a desirable result.

    What makes human higher brain function superior is we have enhanced our sensory apparatus with various kinds of instrumentation and even more importantly we are able to share or collaborate our personal sensory experiences with those of others – living or dead – through written and spoken language.
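    A minimal sketch of the model-and-choose loop described above (in Python; the candidate actions and the scoring function are hypothetical stand-ins, not anything from the comment):

        def simulate(action):
            # Stand-in for the "reality modeling engine": score the predicted outcome.
            predicted_outcomes = {"wait": 0.2, "forage": 0.7, "flee": 0.5}   # hypothetical
            return predicted_outcomes.get(action, 0.0)

        def choose_action(candidate_actions):
            # Run each "what if" scenario through the model and pick the best result.
            return max(candidate_actions, key=simulate)

        print(choose_action(["wait", "forage", "flee"]))   # -> "forage"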

  36. 36
    DaveScot says:

    Matteo

    A good definition of “consciousness” is

    an alert cognitive state in which you are aware of yourself and your situation

    Consciousness then needs to be nothing more than the incorporation of one’s own body into the brain’s reality modeling engine.

  37. 37
    Gerry Rzeppa says:

    DaveScot says:

    Actions are inspired by running “what if” scenarios through the reality modeling engine, evaluating the model results, and then choosing the action that the model indicates produces a desirable result.

    I ask:

    Don’t you mean, “Actions are inspired by the desire for a result that initiates the running of ‘what if’ scenarios, etc.”? Without a goal there’s no process to model; and without a desire to reach that goal, there’s no impetus to do the modeling.

  38. 38
    aiguy says:

    Gerry,

    Of course he was
    You think Hitler is “superior” to his paintings, and I don’t. That’s fine; we can also argue about how many angels can dance on the head of a pin. Instead, I’ll just stick with my main point here: Since there is no scientific method to distinguish intelligent causation from the rest of nature, ID isn’t science.

    “whoever designs a thing with such-and-such a property must understand that property; must pre-possess the idea behind that property.”

    If you’d like to argue now that the cause needs to understand the property, then we are back to where we started: Why does understanding require qualia? My thermostat understands how to regulate the room temperature…

    No matter how you slice it, I’m afraid you are not going to turn a millennia-old philosophical problem into a scientific result. Nobody knows if minds are inside or outside of physical causality, and so for ID to claim that it can detect a “mind” at work without any knowledge of the “agent” responsible is whistling in the wind.

  39. 39
    BarryA says:

    DaveScot writes “The random elements may in fact be employed to build a random response where no two responses ever have a reasonable chance of being alike. Things like this are done all the time in computer games to make computer generated opponents and strategies unique and unpredictable. I’d argue that how the random number is employed in that case is undetermined.”

    Of course it depends on what you mean by “undetermined.” I take it the kind of determinism you have in mind is the type posited by Laplace, who suggested that if a super-intelligent being had perfect knowledge of every particle of the universe, in principle he could then predict every future event.

    Pace Laplace (if you’ll forgive the rhyme), determinism is not inconsistent with unpredictability. In my example I use determinism in the more limited philosophical sense that what happens in the future is a necessary consequence of the state of the world at a given moment as acted upon by mechanical law. Chaotic factors can render the outcome of any event unpredictable. For example, the explosion of a bomb is a completely determined event. The bomb has no free will. Its function is utterly determined by its components and the laws of chemistry (for a conventional bomb). However, where a piece of the bomb will land after it explodes is completely unpredictable. In other words, “indeterminate” is not the same as “undetermined.” Your example, in my view, is an example of a process that is “indeterminate” but nevertheless determined.

  40. 40
    BarryA says:

    DaveScot: “You can’t say that the computer has no subjective experience since you can only speak with authority about your own subjective experience and not that of others.”

    I don’t agree. I believe the statement: “Deep Blue experienced the blueness of blue” is not even false. It is meaningless. It is no more meaningful to say that an algorithm processing machine had a subjective response to an experience than it is to say that my car engine had a subjective response to driving down the road. Surely this is just common sense.

  41. 41
    DaveScot says:

    Gerry

    The goal is typically related to survival and reproduction which appears to be common to all forms of life. More complex organisms have sensory apparatus for pain and pleasure that mediate goals generally towards reducing pain or inducing pleasure. Goal driven behavior doesn’t seem to require a mystery component or tedious explanation. My computer has various goals built into it. One example is the goal of remaining free of viruses which might impair its function. Towards that goal it has an immune system that constantly monitors for intrusions by a number of possible entry points and it constantly learns and improves its capabilities by acquiring more information and methods from an external source. It’s certainly primitive compared to the immune systems of higher animals but the rudiments are the same and the complexity is constantly expanding.

  42. 42
    DaveScot says:

    Barry

    I agree it’s meaningless because the only entity that can describe Deep Blue’s subjective experience is Deep Blue itself. That’s simply the nature of subjective experience – only the subject can describe it with authority. A chimpanzee almost certainly has subjective experiences but since it lacks sufficient command of common language with humans it can’t describe it to us. How in principle is that situation different from the situation with Deep Blue? In other words, if it did indeed have subjective experience, how would you possibly know about it?

  43. 43
    Gerry Rzeppa says:

    DaveScot says:

    The goal is typically related to survival and reproduction which appears to be common to all forms of life. More complex organisms have sensory apparatus for pain and pleasure that mediate goals generally towards reducing pain or inducing pleasure. Goal driven behavior doesn’t seem to require a mystery component or tedious explanation.

    I reply:

    I take it you never witnessed true, self-sacrificing heroism in the Corps. Or if you did, you’ve forgotten to include it in your reasonings.

  44. 44
    aiguy says:

    DaveScot,

    In other words, if it did indeed have subjective experience, how would you possibly know about it?

    What you describe is the “problem of other minds”, which we generally solve by a comfortable induction that saves us from solipsism: Since other humans are so much like me in so many other respects (including their verbal reports of consciousness) I infer they also have subjective experience like me. The same induction enables us, as you say, to “almost certainly” attribute subjective experience to chimps, but not to Deep Blue, since chimps are very much like us, but computers are not.

    So, can we extend this inference to the Designer of Life, and justify the attribution of subjective experience to the Designer? I think it’s clear that we can’t without any knowledge at all of the Designer.

  45. 45
    StephenA says:

    As I see it, it boils down to this: Either everything in the universe has some sort of subjective experience, or humans (and probably other animals) have something that mere physical objects do not.

  46. 46
    StephenA says:

    Coming at the problem from another angle, what is a mind? Most, if not all, materialists would claim that the mind is a program being run by the computer that is the brain. But this has a problem. Software is a kind of information, which in turn would mean that the mind is made of information. But then, what is information? Can it be described without reference to a mind?

    If there is a rock in the forest, and no one has ever seen the markings on it, do the marks contain information?

  47. 47
    allanius says:

    Question: would a computer be able to participate in this discussion?

  48. 48
    Tim says:

    Help me out here. I’ll make nine comments, and you people can tell me which ones are true/false or up for grabs. I apologize for the imprecise nature of my terminology in advance:

    1> Computers are nothing more than physical embodiments of Turing machines.

    2> Turing machines can only read and move along, or follow, the “tape”.

    3> Therefore, computers can only read and follow.

    4> To clarify, a machine’s “movement” along the tape is determined by the tape NOT by the machine — whether it be a Turing machine or an actual computer.

    5> The fundamental root idea underlying intelligence is the idea of “choice” or “selection”.

    6> Computers never “choose” because they only follow.

    7> Hence, computers are unintelligent.

    8> People choose.

    9> Therefore, people are intelligent.

    Finally, beyond qualia, I would submit that one way in which the human differs from computers is our uncanny ability to make mistakes, and then discover EXACTLY what the mistake is. I don’t know of any computer that can do that (ignoring, of course, diagnostics in which the computer is told what to look for and fix.)

    Perhaps someone could direct me to the hardware/software package that ends with, “Oh, and be ready for anything.”

  49. 49
    BarryA says:

    Tim, not only are you correct, your statements cannot reasonably be disputed.

  50. 50
    BarryA says:

    aiguy, “What you describe is the “problem of other minds”, which we generally solve by a comfortable induction that saves us from solipsism: Since other humans are so much like me in so many other respects (including their verbal reports of consciousness) I infer they also have subjective experience like me. The same induction enables us, as you say, to “almost certainly” attribute subjective experience to chimps, but not to Deep Blue, since chimps are very much like us, but computers are not.”

    Write this day down. We agree on something. It was bound to happen I guess. After all, there are an infinite number of propositions. We were bound to stumble across one that we agree on. 😉

  51. 51
    D.A.Newton says:

    BarryA,

    If I may be so bold as to rephrase your four-point question.

    1. Did the computer detect light within a certain range of wavelengths? Obviously yes.

    2. Did I detect light within a certain range of wavelengths? Obviously yes.

    3. Did I experience anything at all? Obviously yes, I experienced red.

    4. Did the computer experience anything at all? Obviously no, no more than a collection of abacus beads (an abacus is also a Universal Turing Machine).

    Are we not asking about the existence and validity of experience, rather than the subjectivity of qualia? Obviously yes; this, at least, gets us out from under Science’s allergic reaction to subjectivity. Science is supposed to be empirical, relying on experience as its means of confirmation.

    Does that imply anything?

  52. 52
    WinglesS says:

    aiguy says: Since there is no scientific method to distinguish intelligent causation from the rest of nature, ID isn’t science.

    I think you’re right to say that there is no scientific method which is capable of distinguishing intelligent causation from the rest of nature. After all, an intelligent agent can bring a rock down a slope, but an unintelligent agent can do likewise. When the result is exactly the same, how then can we tell after the fact that one was caused by an intelligent agent and one wasn’t? However I think the ID crowd’s argument that an intelligent agent is the best explanation for certain phenomena is a valid point. While we can’t determine this for sure, for example a random letter generator could come up with all the works of Shakespeare, I do think that it is valid to say that if we found the works of Shakespeare on someone’s computer we can conclude with a very high degree of certainty that he copied it from somewhere rather than generated it randomly. I don’t pretend to know what science should be like, but why shouldn’t ID be considered science if they tried to apply principles of information theory to detect probable intelligent causation?

  53. 53
    aiguy says:

    Hi Tim,

    1> Computers are nothing more than physical embodiments of Turing machines.
    YES

    2> Turing machines can only read and move along, or follow, the “tape”.
    3> Therefore, computers can only read and follow.

    NO: They can also write (set state, changing future behaviors). And remember, in physical embodiments, the “tape” contains information from interactions with the outside world (very important point)!

    4> To clarify, a machine’s “movement” along the tape is determined by the tape NOT by the machine — whether it be a Turing machine or an actual computer.
    NO: It is determined by the tape AND by the machine. The semantics (meaning) of the tape is determined (defined) by the structure of the machine.

    5> The fundamental root idea underlying intelligence is the idea of “choice” or “selection”.
    6> Computers never “choose” because they only follow.

    NO!!!! For your statement to be meaningful, the metaphysical conjecture of libertarian contra-causal free will must be true, and nobody can demonstrate that to be true. Otherwise, OF COURSE computers choose – that is what they do.

    7> Hence, computers are unintelligent.
    8> People choose.
    9> Therefore, people are intelligent.

    You have just illustrated the mistake underlying all of ID, which is to pretend that the metaphysical speculation of contra-causal free will is an empirical fact, when it is a (relatively unpopular) philosophical opinion.

    Finally, beyond qualia, I would submit that one way in which the human differs from computers is our uncanny ability to make mistakes, and then discover EXACTLY what the mistake is. I don’t know of any computer that can do that (ignoring, of course, diagnostics in which the computer is told what to look for and fix.)
    I have been in AI for thirty years, and if I had a nickel for every time somebody said to me “HERE is something that computers can’t do!” then… I would have a whole lot of nickels. No, Tim, my computer programs make mistakes all the time, and then discover EXACTLY what the mistake is, and then they correct it. Read about machine learning here for example: http://www.aaai.org/AITopics/html/machine.html

    And think about the AI programs on deep space probes from NASA – they have to figure out whatever is going wrong with the spaceship or themselves and attempt to fix it, all without any human interaction. Read about some of them here: http://ic.arc.nasa.gov/projects/mba/

    Perhaps someone could direct me to the hardware/software package that ends with, “Oh, and be ready for anything.”
    I’m waiting to meet a person like this, too. Are you ready to perform open heart surgery this morning, and to design a rocket this afternoon?
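    To illustrate the point made in 4> above, that the next move depends on the machine’s own transition table as well as on the tape, here is a toy Turing-machine step in Python (illustrative only, not a model of any brain or program discussed here):

        # The next action is a function of BOTH the current state and the symbol
        # under the head, as given by the machine's own transition table.
        transitions = {
            # (state, symbol) -> (new_state, symbol_to_write, head_move)
            ("scan", "1"): ("scan", "1", 1),
            ("scan", "0"): ("done", "1", 0),   # toy rule: flip the first 0 to a 1
        }

        def step(state, tape, head):
            new_state, write, move = transitions[(state, tape[head])]
            tape[head] = write
            return new_state, tape, head + move

        state, tape, head = "scan", list("11011"), 0
        while state != "done":
            state, tape, head = step(state, tape, head)
        print("".join(tape))                   # -> 11111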

  54. 54
    aiguy says:

    WingLesS,

    I think you’re right to say that there is no scientific method which is capable of distinguishing intelligent causation from the rest of nature….I don’t pretend to know what science should be like, but why shouldn’t ID be considered science if they tried to apply principles of information theory to detect probable intelligent causation?

    If you agree that we can’t empirically distinguish “intelligent causation” from “non-intelligent causation”, then why do you then turn around and suggest that the principles of information theory are capable of detecting “intelligent causation”? Looks like a pretty direct contradiction there.

  55. 55
    aiguy says:

    BarryA,

    Glad we agree on the solution to the problem of other minds. How about my corollary:

    So, can we extend this inference to the Designer of Life, and justify the attribution of subjective experience to the Designer? I think it’s clear that we can’t without any knowledge at all of the Designer.

  56. 56
    Jack Golightly says:

    re: #47
    No. Computers don’t have enough imagination.

  57. 57
    aiguy says:

    Jack,
    Not only do computers lack “imagination”, but they basically can’t understand English (or any natural language). And the main reason for this is because they lack common sense – the huge body of world knowledge that humans have, both by virtue of being born with some of it and by means of our interactions with the world. Much of AI research over the past twenty years or so has focussed on gathering huge structures of common-sense knowledge and reasoners; this work is just beginning to come to fruition.

  58. 58
    Tim says:

    aiguy,

    I am glad that some of my propositions “pass muster”. I was fond of all of them so I am going to take another stab.

    1> Computers are . . . Turing machines.
    ai . . . YES

    2> Turing machines can read, WRITE (per ai), and follow the “tape”.
    ai “. . . remember, in physical embodiments, the “tape” contains information from interactions with the outside world.”

    But, by “writing” we mean nothing more than following the directions of what to write, no? In other words, following.

    3> Therefore, computers can only read and follow.

    4> “movement” along the tape is determined by the tape. . .
    ai . . . NO: It is determined by the tape AND by the machine. The semantics (meaning) of the tape is determined (defined) by the structure of the machine.

    I would argue that a Turing machine never defines anything at all, and because a Turing machine defines nothing, neither do computers.

    5> intelligence = “choice” or “selection”.
    6> Computers never “choose” because they only follow.
    ai . . . “NO!!!! For your statement to be meaningful, the metaphysical conjecture of libertarian contra-causal free will must be true, and nobody can demonstrate that to be true. Otherwise, OF COURSE computers choose – that is what they do.”

    Uh . . . there are some big words in there that I don’t get: what do libertarians have to do with contras? As for that last sentence, I just don’t get it and quite frankly believe the opposite to be true of computers. Could you give me an example of a computer choosing?

    7> Hence, computers are unintelligent.
    8> People choose.
    9> Therefore, people are intelligent.
    ai . . .”You have just illustrated the mistake underlying all of ID, which is to pretend that the metaphysical speculation of contra-causal free will is an empirical fact, when it is a (relatively unpopular) philosophical opinion.”

    OK, forget philosophy. I am just speaking for AND ABOUT myself.

    ai . . . “my computer programs make mistakes all the time, and then discover EXACTLY what the mistake is, and then they correct it.”

    What? Give me an example (one that isn’t a pre-programmed diagnostic).

    ai . . . “Are you ready to perform open heart surgery this morning, and to design a rocket this afternoon?”

    BRING IT ON
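    For readers following the tape-versus-machine exchange above, here is a minimal sketch of the read/write/move loop (illustrative only; the toy machine and tape are assumptions made up for this example). The tape supplies the symbols, but what each symbol means, what gets written back, and which way the head moves are fixed entirely by the machine’s transition table:

        # Minimal Turing-style machine (illustrative sketch). The tape carries symbols;
        # the transition table -- the "machine" -- fixes their semantics.
        def run(tape, table, state="start", head=0, max_steps=100):
            tape = list(tape)
            for _ in range(max_steps):
                if state == "halt":
                    break
                symbol = tape[head] if 0 <= head < len(tape) else "_"
                state, write, move = table[(state, symbol)]
                if 0 <= head < len(tape):
                    tape[head] = write
                head += 1 if move == "R" else -1
            return "".join(tape)

        # A tiny machine that flips every bit on the tape, then halts at the right edge.
        flipper = {
            ("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R"),
            ("start", "_"): ("halt", "_", "R"),
        }
        print(run("1011", flipper))   # -> "0100"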

  59. 59
    aiguy says:

    Hmmm – can’t seem to make a post now? Let’s see if this small one will go…

  60. 60
    Tim says:

    Do not worry, ai, it is probably just a computer glitch and will fix itself soon. 🙂

  61. 61
    BarryA says:

    Wingless: “While we can’t determine this for sure, for example a random letter generator could come up with all the works of Shakespeare,”

    Not true. The probability that a random letter generator will write even one act of one play falls far below the universal probability bound, much less the entire works.
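    To put a rough number on that (the act length below is only an assumed order of magnitude, not a count of any particular play), a back-of-envelope calculation shows how far short a uniform random letter generator falls:

        import math

        alphabet_size = 27        # 26 letters plus a space; punctuation and case ignored
        chars_per_act = 20_000    # assumed rough length of a single act

        # Probability of typing one specific act by uniform random choice of characters.
        log10_probability = -chars_per_act * math.log10(alphabet_size)
        print(f"P(one specific act) ~ 10^{log10_probability:.0f}")   # roughly 10^-28600
        # Dembski's universal probability bound is usually quoted as 1 in 10^150.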

  62. 62
    jstanley01 says:

    aiguy #57

    Not only do computers lack “imagination”, but they basically can’t understand English (or any natural language). And the main reason for this is because they lack common sense – the huge body of world knowledge that humans have, both by virtue of being born with some of it and by means of our interactions with the world. Much of AI research over the past twenty years or so has focussed on gathering huge structures of common-sense knowledge and reasoners; this work is just beginning to come to fruition.

    I suspect that the problem may be many orders of magnitude beyond what you’ve dreamed — language comprehension being not just a matter of world knowledge, but of “reasoners” of usage, grammar and syntax far more intricate than the gloss “common sense” suggests.

    I seriously doubt that the simple on-off, yes-no, if-then capability of digital computation machines will ever be able to parse the complexities of how raw knowledge interacts with the norms of usage, the rules of grammar and the requirements of syntax (not to mention “the preexistence intelligence,” nyat, nyat nyat!) necessary to produce human languages. Nevertheless, I’d be interested in being proven wrong.

    Can you point me to specifically where such work is beginning to come to fruition? (Something more than, “Just a little more funding! Honest folks! We’re just about to make a breakthrough! Just a little more funding!”)

    By the by, are you familiar with Tony Veale’s work at University College Dublin?

    “…dedicated to the computational exploration of language and its creative potential, from lexical phenomena such as Metaphor, Analogy, Metonymy, Polysemy, to complex social phenomena like Humour. As such, we build models of creative language use, and attempt to construct applications from these models.”

    Right. With the emphasis on “attempt,” eh? But Veale at least recognizes that problems of usage, grammar and syntax exist.

  63. 63
    magnan says:

    Although this philosophical/metaphysical debate is interesting, it is basically dry and empty, since it blandly ignores a mountain of empirical evidence for a “spiritual” or nonphysical component to man’s consciousness. The empirical evidence trumps any amount of philosophical theorizing. There is an entire dimension of data including quite common human experiences that is conveniently dismissed in the common belief system of our intellectual materialist elite. Whatever the nature of consciousness, it is ultimately not (just) the body and brain. Of course the materialists here will resort to selective hyperskepticism to deny that there really is any valid evidence.

    Concerning computer Turing machines and the Turing test. Even if a new electronic Turing machine actually meets the Turing test, this does not demonstrate anything but the ability of the human designers to develop, for instance in one approach, an exceedingly advanced “expert system” tailored to the human subjects interacting with the machine. The technological Turing machine is fundamentally incapable of experiencing, desiring, willing, intending, etc., having qualia in Chalmers’s sense. That is, the qualia of conscious experience cannot be reduced to matter.

    This of course is closely related to Searle’s Chinese Room argument, which is my favorite for showing the bankruptcy of reductionist materialist theories of mind.

    I find a closely related issue to be intriguing, the “zombie” thought experiment (that is, the rhetorical/philosophical kind, not the Haitian kind). Such zombies have no internal experience; they are unconscious, but give no obvious externally measurable evidence of that fact. Materialists like Dennett can be considered by their own theories to be such “zombies”. Searle’s Chinese Room is essentially an example of one of these.

    Computer scientist Jaron Lanier (one of the inventors of “virtual reality”) has written a clever and biting critique of the “zombie” theories of mind, concentrating on the AI notion that advanced computer systems can potentially become self-aware, intelligent beings. The title is You Can’t Argue with a Zombie, at http://www.jaronlanier.com/zom….

    He opens by cleverly quoting an old Navajo proverb: “It is impossible to awaken someone who is pretending to be asleep”.

    Lanier makes some very good arguments against the “mind is a computer program in operation” concept. He says: “Let’s suppose you run a (computer) program ….. that implements the functional equivalent of your brain, a bunch of other people’s brains, and the surrounding environment, so that you and the rest of the brains can have lots of experiences together. (This is the condition in which my test zombies thought that nothing fundamental would have changed; they’d still experience themselves and each other as if they were flesh.) You save a digital record, on the same disk that holds the program, of everything that happens to all of you. Now the experiences “pre-exist” on the disk. Take the disk out of the computer. Is this free-floating disk version of you still having experiences? After all, the information is all there. Why is this information sanctified into some higher state of being by having a processor just look at it? After all, the experiences have already been recorded, so the processor can do no new computation. A much simpler process that just copied the disk would perform exactly the same function as running your brain a second time.”

  64. 64
    aiguy says:

    Magnan,

    The empirical evidence trumps any amount of philosophical theorizing.
    Agreed!

    Whatever the nature of consciousness, it is ultimately not (just) the body and brain.
    And let’s see what sort of empirical evidence you might muster to support this view…

    The technological Turing machine is fundamentally incapable of experiencing, desiring, willing, intending, etc., having qualia in Chalmer’s sense. That is, the qualia of conscious experience cannot be reduced to matter.
    This would be philosophical theorizing rather than empirical evidence, Magnan.

    This of course is closely related to Searle’s Chinese Room argument, which is my favorite for showing the bankruptcy of reductionist materialist theories of mind.
    Then you may be surprised to learn that John Searle is a materialist! The Chinese Room argues against functionalism, not materialism. Searle believes that material brains cause consciousness.

    Contrary to what you think, there is no empirical evidence that consciousness does not reduce to material cause. We simply do not know.

  65. 65
    aiguy says:

    Tim,
    Each time I try to post my response to you, it is not accepted – no error message either. I have no theory except the computer doesn’t like what I’m saying to you.

  66. 66
    StephenA says:

    “Contrary to what you think, there is no empirical evidence that consciousness does not reduce to material cause. We simply do not know.”

    I’m sorry… How does reducing the mind to software running in the brain reduce consciousness to material causes? Software is a kind of information. Can you even describe information without reference to a mind? Could information exist before there was anyone to comprehend it?

    If there is a rock in the forest, and no one has ever seen the markings on it, do the marks contain information?

  67. 67
    Q says:

    StephenA asks, in 66, “Can you even describe information without reference to a mind?”

    If one is allowed to describe information as bits set to a state in a computer; and one is allowed to describe information as states of neurons, axons, dendrites, and other structures; then “information” can be described without reference to mind.

    Also, StephenA asks, “Could information exist before there was anyone to comprehend it?”

    By some definitions, no, there could not have been information before it was comprehended. By some definitions, information is the result of interpreting raw data. For example, a height chart of a person contains raw data. The information in the chart is that the person grew until adulthood. If the raw data is never gathered or interpreted, by that definition, then there is no information without comprehension. Point being, the definition is the king of this discussion. Is that the definition you were using?

  68. 68
    aiguy says:

    StephenA,

    I’m sorry… How does reducing the mind to software running in the brain reduce consciousness to material causes? Software is a kind of information. Can you even describe information without reference to a mind? Could information exist before there was anyone to comprehend it?
    If there is a rock in the forest, and no one has ever seen the markings on it, do the marks contain information?

    First, the idea that mind is software “running in the brain” is not a description of what it means to have a materialist theory of mind. It is (sort of) descriptive of a theory of mind called functionalism. As I’ve mentioned a couple of times now, John Searle, a prominent critic of strong AI, argues that materialism is true but functionalism is false.

    As for your point about information, I think Q is quite right: I would say that the concept of information entails the concept of something that interprets the information, but says nothing about what the properties of the interpreter must be. It can be an unconscious, unfeeling, fully deterministic interpreter, which is not what we usually mean when we talk about “mind”.

    So once again, this is my problem with ID’s entire notion of “intelligent causation”. It may have nothing at all to do with a “mind”.

  69. 69
    jstanley01 says:

    magnan #63

    Noam Chomsky long ago used linguistics to shoot down B.F. Skinner’s theory that human beings are machines which respond solely to external stimuli.

    “[In Chomsky’s] devastating critique … [he] argued (in ‘Review of Verbal Behavior, by B.F. Skinner’ in Language, Vol. 35, pp. 26-58) that language in particular and human action in general were not the result of strengthening past verbal habits by reinforcement. The essence of language, he said, is that it is generative: Sentences never said or heard before (such as “There’s a purple Gila monster sitting on your lap”) could nevertheless be understood immediately.” Martin E. Seligman, Ph.D., Learned Optimism, p. 9.

    For what it’s worth, my guess is that the converse theory, that machines are intelligent agents which can respond to internal stimuli, will be shot down with the same gun. My working hypothesis as to why, is that human language represents a tool designed for use by intelligent agents, and that computation machines “ain’t none.”

    To gain an inkling of what the problem really entails — coming up with a robot with language comprehension on par with human beings — all you’ve got to do is devise an algorithm that enables a machine to figure out — with the same accuracy and ease that human beings do — that when Antony says “And Brutus is an honorable man” in Act III Scene ii of Julius Caesar, what he really means IS THE EXACT OPPOSITE.

    Don’t mind me if I don’t hold my breath.

  70. 70
    aiguy says:

    jstanley01,

    Behaviorism was not really the idea that humans are machines which respond only to external stimuli; it was (past tense, since virtually no one, materialist or not, is a behaviorist any more) the idea that we can scientifically understand humans only by considering stimuli and response and ignoring whatever was going on inside our heads. What drove Skinner (and Watson, etc.) to behaviorism was the desire to make psychology an empirical science; they failed. We still have no empirically-grounded theory of thought (mind, consciousness, sentience, …), which is why ID is built upon the quicksand of unverifiable philosophical conjecture.

    Anyway, the correct interpretation of Antony’s line in Julius Caesar is hard, and may well be misunderstood by humans too. But here are some much easier ones that just about every human would understand, but computers still have a hard time with:

    “The chicken is ready to eat” (Does this mean it is hungry, or that it has been cooked?)
    “Pets must be carried on the escalator” (Does this mean you can’t go on the escalator unless you have one?)

    And so on. Again, without the context and the common sense to apply it, computers can’t untangle the ambiguity in human languages. But progress is being made, and nobody can say how far we will (or won’t) go.

  71. 71
    nullasalus says:

    aiguy,

    “Then you may be surprised to learn that John Searle is a materialist! The Chinese Room argues against functionalism, not materialism. Searle believes that material brains cause consciousness.”

    But Searle’s biological naturalism is also accused of effectively amounting to dualism. And I have a suspicion that eliminative materialists would accuse other self-proclaimed materialists of not being ‘real’ materialists – and on this subject, those labels get complicated. I’m sure McGinn would classify himself as a materialist, even though his position on philosophy of mind expressly amounts to ‘mind arises from the material, but we’ll never know how’.

    On the flipside, Thomas Aquinas – one of the most well-known philosophers on the non-materialist side – has been said to have a theory of mind/being that is not dualist, or at least not substance dualism. As I’ve said before, the moment reductionism is ruled out, what’s left over sure looks a lot like dualism (or, put another way, sure looks a lot like ‘not materialism’.)

    “So once again, this is my problem with ID’s entire notion of “intelligent causation”. It may have nothing at all to do with a “mind”.”

    I’d like to get something clear here. Are you saying that the intelligence ID proposes could have all the features of an intelligent force, but actually not have a mind? Maybe I’m misreading you, but it sounds like you’re arguing that what seems like an Intelligent Designer is possible, but there’s no way to be certain because we wouldn’t have enough knowledge of said designer even if they were actual.

  72. 72
    aiguy says:

    nullasalus,

    But Searle’s biological naturalism is also accused of effectively amounting to dualism.

    Some philosophers do argue this; others have thought of a slew of arguments intended to defeat Searle’s theory (and the conclusion of the Chinese Room) entirely. But I am not here defending monist theories of mind, I am making a different argument entirely:
    1) All of these questions remain squarely in the realm of philosophical debate, where they have been for millennia, unresolvable by appeal to empirical evidence
    2) Without reference to these issues, the bare claim of ID that “intelligent agency” is responsible for life is ambiguous to the point of meaninglessness.
    3) Therefore, ID is either based on a metaphysical claim, or is vacuous.

    I’d like to get something clear here. Are you saying that the intelligence ID proposes could have all the features of an intelligent force, but actually not have a mind? Maybe I’m misreading you, but it sounds like you’re arguing that what seems like an Intelligent Designer is possible, but there’s no way to be certain because we wouldn’t have enough knowledge of said designer even if they were actual.

    This is almost what I am saying, but here again is the rest of it: If you strip the meaning of the term “intelligent designer” of all of the connotations that ID implicitly associates with it (a substance dualism with libertarian free will), the term becomes scientifically meaningless. For any who may disagree, please provide a concise, operationalized definition of “intelligent agency” that will serve to distinguish an intelligent agent from the rest of nature. (hint: it will not help ID to define it as “that which can create CSI”!)

  73. 73
    aiguy says:

    Erratum: I meant to say “If you strip the meaning of the term ‘intelligent designer’ of all of the metaphysical connotations that ID implicitly associates with it . . .”

  74. 74
    WinglesS says:

    aiguy says: If you agree that we can’t empirically distinguish “intelligent causation” from “non-intelligent causation”, then why do you then turn around and suggest that the principles of information theory are capable of detecting “intelligent causation”? Looks like a pretty direct contradiction there.

    I said there is no way to determine for sure (100%) if an effect is due to intelligent causation; however, we can conclude that an effect has a high probability of being intelligently caused.

  75. 75
    WinglesS says:

    I don’t think that there are any contradictions in what I said, aiguy. Just because probability doesn’t give you a definite answer, does that mean it isn’t math? Then again, perhaps I’m confused about what empirical means, in which case I’ll ask you to forgive me.

  76. 76
    aiguy says:

    WinglesS,

    I still don’t see your point. The reason we cannot distinguish intelligent from non-intelligent causation is because we do not know what the difference is (except for our various philosophical speculations). So I don’t think that it’s a matter of determining the issue with 100% certainty or not; it is a matter of knowing what we are trying to determine.

  77. 77
    aiguy says:

    WinglesS,
    No, “empirical” just means with reference to observable data (the reliably shared experience of our senses), not necessarily mathematical.

  78. 78
    WinglesS says:

    In other words you’re saying ID isn’t science because it can’t define what intelligence is. Hmm, I don’t really think a perfect understanding of an intelligent cause is needed for a scientific theory. Many scientific theories understand phenomena based on our observation of their effects rather than a perfect understanding of their cause. We theorize that there are only 2 types of charges, positive and negative, based on our observation of the effects of charges, without knowing why there aren’t 3 or 4 types of charges, nor what makes a charge different from a non-charge in the first place. The definition of a charge is based on its effects rather than on distinguishing between charged causes and uncharged causes.

    Thus I think it is not necessary to distinguish what intelligence is as a cause (as distinct from natural causation). I think it is valid to try to identify intelligence by its effects instead.

  79. 79
    nullasalus says:

    aiguy,

    Some philosophers do argue this; others have thought of a slew of arguments intended to defeat Searle’s theory (and the conclusion of the Chinese Room) entirely.

    It’s probably a side-issue, but I’m bringing it up because you seem to keep slipping in ‘Materialists think X’ or ‘Materialists don’t think X’ or ‘Professor Y is a materialist’. Insofar as some people may think that, say, Dan Dennett embodies materialism, I would agree they’re incorrect. But the dirty secret is that ‘materialists’ are fairly divided on a score of issues. The idea that ‘you can’t be conscious without a brain’ is a claim that would likely unite Chalmers, Dennett, Searle, and quite possibly Thomas Aquinas. Grouping them all as materialists can only be correct insofar as that may be how they describe themselves (probably not Aquinas, probably not Chalmers). But considering the divisions that exist between each of them, it just goes to show that ‘materialist’ is almost restricted to being a social label.

    Do materialists think that an exhaustive, bare physical description of the brain would explain it utterly? Some do. Some don’t. Some accuse the ones who do of being wrong. Some accuse the ones who don’t of not really being materialists. It’s tricky.

    But I am not here defending monist theories of mind, I am making a different argument entirely:
    1) All of these questions remain squarely in the realm of philosophical debate, where they have been for millenia, unresolvable by appeal to empirical evidence

    One objection here: No, that’s not where they’ve remained. Natural science, biology in particular, has been hefted up and swung around as a weapon for a while now, and been presented as scientific proof of what truly are philosophical and (a)theological ideas. Where were the cries of ‘abuse of science’ when Victor Stenger wrote ‘God: The Failed Hypothesis’, arguing that science disproves God? Why is the outrage reserved for Behe, who is on record as saying that even if he’s correct, he doesn’t think that supernatural intervention was needed in the course of evolution?

    2) Without reference to these issues, the bare claim of ID that “intelligent agency” is responsible for life is ambiguous to the point meaninglessness.

    If that’s true, then the assertion that life and the universe arose unguided and unplanned is also meaningless. I personally don’t think agency can be scientifically detected or ruled out. I could argue the specifics here more, but hey, may as well let it slide.

    3) Therefore, ID is either based on a metaphysical claim, or is vacuous.

    Again, I won’t get into specific defenses available to ID proponents. I support it primarily as a philosophical endeavor, myself. Though I’d be more encouraged if I saw some consistent treatment on this subject – and honestly, I think a whole lot of the rage directed at ID is because it promotes looking at the data through a different lens, or with a different frame of mind than is the norm. There do exist people who believe that studying science and nature is either supposed to make you subscribe to their preferred philosophical views, or ‘it isn’t working as intended’.

    Or, put another way: If ID is vacuous, it’s just adding to some vacuity that’s been present, tolerated, and encouraged in many quarters for quite a while. I’m not that interested in seeing ID condemned merely so the status quo can be maintained.

  80. 80
    aiguy says:

    WingLesS,

    In other words you’re saying ID isn’t science because it can’t define what intelligence is.
    Yes, that’s right.

    Hmm I don’t really think a perfect understanding of an intelligent cause is needed for a scientific theory.
    But I’m not asking for a “perfect” understanding; I’m asking for any usable scientific definition. “Intelligent Causation” is ID’s sole explanatory principle; one would think there would be some attempt to pin down what this is supposed to mean. There is no other scientific discipline that attempts to explain any phenomenon by appeal to “intelligent causation” in the abstract. (Imagine if someone asked me how my AI system managed to play chess so well, and I responded “Because it is intelligent!“. After laughing politely, they would say “Ok, seriously, how does it do that?”).

    I think it is valid to try and identify intelligence by its effects instead.
    Yes, of course. So, what effects shall we say distinguish intelligent causation?

    When I ask this question, the typical response is “Intelligent causation is distinguished by its ability to create CSI!”. This is not a good candidate definition for intelligent causation in the context of ID, however, as you can see:
    Q: What explains the CSI we see in biology?
    A: Intelligence!
    Q: What is intelligence?
    A: The ability to create CSI!

  81. 81
    WinglesS says:

    Hmm
    Q: What explains electromagnetic interaction?
    A: Electric charges.
    Q: What are electric charges?
    A: Particles that cause electromagnetic interaction.

    Hmm I think it works for electricity too actually… I think some things have fundamental properties that can’t be explained in other terms.

  82. 82
    aiguy says:

    WinglesS,

    Hmm
    Q: What explains electromagnetic interaction?
    A: Electric charges.
    Q: What are electric charges?
    A: Particles that cause electromagnetic interaction.
    Hmm I think it works for electricity too actually… I think some things have fundamental properties that can’t be explained in other terms.

    Uh, no.
    Q: What explains electromagnetic interaction?
    A: Electric charges.
    Q: What are electric charges?
    A: Forces which are fully and quantitatively described by the entire discipline of physics and electronics. Laws like Coulomb’s Law or Ohm’s Law tell us all about the effects of these causes, and these laws can themselves be explained in terms of more fundamental physics.

    Hopefully you see the point here. An explanation that is described ONLY in terms of the phenomenon in question does not add anything to our knowledge. We must devise characterizations that allow us to test whether or not the effects are from what we believe it to be.
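    For reference, the kind of quantitative grounding being appealed to looks like this (standard textbook forms, stated here only for comparison):

        \[
        F = k_e \frac{q_1 q_2}{r^2} \quad \text{(Coulomb's law)}, \qquad
        V = I R \quad \text{(Ohm's law)}.
        \]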

  83. 83
    aiguy says:

    nullasalus,

    But the dirty secret is that ‘materialists’ are fairly divided on a score of issues.
    I don’t think there’s any secret about it; there are dozens of flavors of monism, and quite a few brands of dualism too, and this has been the case throughout a very long history. This is the reason I say no scientific proposition can be dependent on the truth of any particular stance.

    But considering the divisions that exist between each of them, it just goes to show that ‘materialist’ is almost restricted to being a social label.
    I agree completely. Materialism is often mischaracterized as being entailed by atheism, for example, and mistakenly associated with determinism, moral relativism, epistemological relativism, and so on.

    Where were the cries of ‘abuse of science’ when Victor Stenger wrote ‘God: The Failed Hypothesis’, arguing that science disproves God? Why is the outrage reserved for Behe, who is on record as saying that even if he’s correct, he doesn’t think that supernatural intervention was needed in the course of evolution?
    Again I agree, and let me say I consistently and strenuously object to this sort of overstepping in either direction. I’ll go on record saying I think Dawkins is simply ridiculous in this respect. I also object when textbooks say that evolutionary theory demonstrates that our cause was purposeless. Which experiment was it that they ran that demonstrated lack of purpose?

    If that’s true, then the assertion that life and the universe arose unguided and unplanned is also meaningless.
    No, I think that “planning” has a perfectly clear meaning. We can test to see if a system is capable of planning by observing it. If ID wishes to claim that biological structures were planned, then they should make it clear that’s what the claim is, and not “intelligent” or “conscious” or “mental” or “has free will” or some other ambiguous or untestable thing.

    The entity who decides how to route your FedEx package plans. But is it conscious of its planning? Does it care if your package arrives? Is it aware of being busy, or bored? No way of telling, but I don’t think so (I think it’s a computer system).

    I personally don’t think agency can be scientifically detected or ruled out. I could argue ths specifics here more, but hey, may as well let it slide.

    Again, I think to the extent that “intelligent causation” is left undefined, ID’s proposition can’t be evaluated at all, and to the extent that one does pin down what is meant, it either can’t be tested (e.g. free will) or doesn’t really mean what ID folks want it to mean (e.g. a planning mechanism that may well be completely physical, deterministic/algorithmic, unconscious, etc).

    If ID is vacuous, it’s just adding to some vacuity that’s been present, tolerated, and encouraged in many quarters for quite awhile. I’m not that interested in seeing ID condemned merely so the status quo can be maintained.

    Again I’ll express some sympathy with this. I don’t think evolutionary biology is vacuous in any important sense, but I agree that biology popularizers and even textbooks overstate the philosophical implications in order to bolster their own metaphysical beliefs. I also agree that the certainty expressed that evolution is a fundamentally complete theory is quite misplaced. And furthermore, I think that pursuing the design arguments is a perfectly valid philosophical endeavor. (My dirty secret: I think mind has something to do with it too)

    I am adamant, however, that the proposition “Intelligent causation is the best explanation for biological complexity” cannot possibly be evaluated by empirical means, and nobody should talk about “ID Theory” as if it was a scientific theory of anything.

  84. 84
    nullasalus says:

    aiguy,

    I don’t think evolutionary biology is vacuous in any important sense, but I agree that biology popularizers and even textbooks overstate the philosophical implications in order to bolster their own metaphysical beliefs.

    Between this and what else you’ve said, there’s not much left for me to fight about. Hooray, that’s a nice change of pace.

    I remember discussing this with you before – guess I wanted to make certain I truly had a grasp of your take on the situation. Kudos.

  85. 85
    WinglesS says:

    aiguy,

    Q: What explains electromagnetic interaction?
    A: Electric charges.
    Q: What are electric charges?
    A: Forces which are fully and quantitatively described by the entire discipline of physics and electronics. Laws like Coulumbs Law or Ohm’s Law tell us all about the effects of these causes, and these laws can themselves be explained in terms of more fundamental physics.

    Correct me if I’m wrong, but I don’t think charge is a force. It’s a fundamental property that causes forces which are described by the laws you mention. F is proportional to Q just as we can say your grade is proportional to the amount of hard work you put into studying, but I don’t think the two are equal in the sense you mention.

  86. 86
    jjcassidy says:

    AIGuy,

    Too much stuff has passed by for me to want to answer it all. I just want to clear up a few things. First of all, Searle’s Chinese Room argument achieves dubious success as a materialist argument. So even though he may use it, feeling that there must be an attendant “experience” of knowing, he gets various people calling his argument both “dualistic” and “quasi-religious” for his troubles. If this were true, Searle doesn’t alleviate the problem for materialists by stumbling onto the wrong side of the argument. Although it’s all too common for me to see, I cannot take bailing out of your stated POV as answering that question from that POV.

    (Also it is quite interesting that some find Searle’s argument a negative argument at best.)

    Searle’s Chinese Room problem is so ill-defined anyway. What are the rules to hand back Chinese symbols to the question “Summarize Kant’s Critique of Pure Reason for me? Can you relate it to Proust’s Remembrance of Things Past?” How many sheets of paper are they written on, and who had time to write them up?

    I’ll tell you the most reasonable way to answer that: “I haven’t read it. Why do you ask?” Otherwise that is some significant intelligence that designed these “rules”.

    Searle rejects it not because it’s not “getting the job done,” but because he doesn’t see anything that resembles his subjective experience of actually knowing a language. However, this can be easily side-stepped by saying: 1) we know the brain has side-effects of “feeling”; 2) intelligent processes need not be accompanied by a subjective feeling of “knowing”; and 3) perhaps only some processes produce this. We must then figure out whether or not the Chinese Room is this type of process. It can’t fail

    But there is another point I’d like to make about my statement. I wasn’t posing it as an insurmountable problem to materialists. Materialists aren’t locked into Turing problems, necessarily. That “if” in that sentence plays a role. If the brain is too much like a computer, then Rice’s theorem is a real problem for materialists. It increases as a problem as the brain is thought to approach computer-like function.

    I would however, given the time and resources, argue that materialist rejections of Turing-like behavior run counter to a more common methodology for no good reason. For example, in Padian’s criticism of ID, he says that the statement that “no transitional forms have been found” is deceptive. Why? Because that question is too hard. ID-ers are expecting too much, too much burden of proof. He says, instead they find “transitional forms” in related species. I think it’s somewhat specious, because I think paleontologists would welcome a bona fide “transitional form” if they found one. It’s not that they are looking only for transitional features; it’s that they will settle for transitional features for the information they provide.

    Also, materialists or reductionists are always trumpeting Science as a collection of “best methods known”. Well, we have rules of computation under the Turing model for information processing (assuming we do some of that), so referring to them to analyze human processing power gives us “facts” and implications that I don’t think we would have otherwise.

    Take this for example: if human brains closely match Turing machines, then we know something about brains. Non-Turing-like is not a positive identification and we cannot know something about brains from them simply being the negative. This is the typical form of the reductionist method. Materialists of all ages love the “How ELSE is it done?” argument. It’s only fitting to answer them in kind. “Assuming the brain is not Turing-like, how ELSE is it done?! Oh, you don’t know. Well Scientists X, Y, and Z have been publishing a lot of papers and collecting a lot of grant money, making headway mapping the brain to Turing instructions. What can you ADD?!! You HATE Science don’t you?” And so on…

    In my view it is actually the presence of insight that reductionists do not want to get drawn into Turing models. They know–or sense–that our brain is the type of machine that can understand how it processes only if it was coded that way. That would make further development in understanding intelligence depend on the brain coincidentally arriving as the type of machine that can compute how a Turing Machine can tune itself to better understand its own processes.

    Again, Rice’s theorem suggests that a machine to determine whether a given TM is a certain type of algorithm or not faces its own halting problems. Thus the brain can only examine itself if it is the type of algorithm that can examine that instance of algorithm. We can never know whether it can unless it can–and additionally assess whether or not it has.
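    For readers unfamiliar with it, Rice’s theorem can be stated roughly as follows (a standard paraphrase, added only for reference):

        \[
        \text{For every non-trivial property } P \text{ of partial computable functions, the index set } \{\, e \mid \varphi_e \in P \,\} \text{ is undecidable.}
        \]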

    I can understand why they wouldn’t want to step into this trap. But if the brain is a processing machine, it must be some kind of processing machine. Let’s say Uuring codifies a VM standard for processing that is non-Turing. So, when similar or separate cognitive problems pop up with the Uuring definition, do we just say that we know the brain is a processing machine, but there is no reason why it has to be Turing-like or Uuring-like?

    Materialists would never let such lack of commitment to known processing models fly, were it reversed.

  87. 87
    Q says:

    WinglesS, in 85, mentions “Correct me if I’m wrong, but I don’t think charge is a force” in response to aiguy’s claim that electric charges are forces.

    Well, scientifically, the simplifications in both of your claims are right and wrong – fizzbin 🙂 . The concept of “charge” is interchangeable with the concept of the “electric field”. This is Gauss’s Law. See http://hyperphysics.phy-astr.g.....aw.html#c2
    The law says about charge and field :”The area integral of the electric field over any closed surface is equal to the net charge enclosed in the surface divided by the permittivity of space.”

    In shorthand, this means that the perception of charge ends up being a side effect of the electric field. (Or just as accurately, electric field ends up being a side effect of the charge.)

    WinglesS, as you mentioned, force is proportional to the charge. Additionally, it is proportional to the strength of the electric field. (F = QE, if the charge is stationary) But, because of Gauss’s law, we can fully eliminate Q from the equation and replace it with only the field. Or, we can eliminate the field, and replace it with charge.

    As a result, we can equally explain force as being a side-effect of the charge, or explain that charge is a side-effect of the force detected from the electric field.
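    In symbols (standard forms, given here only as a reference for the substitution described above):

        \[
        \oint_S \mathbf{E} \cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0} \quad \text{(Gauss's law)}, \qquad
        \mathbf{F} = q\,\mathbf{E} \quad \text{(force on a stationary charge)}.
        \]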

    (But pursuing this too far would become a threadjack.)

  88. 88
    WinglesS says:

    Q,

    Does that mean that

    Q: What are electric charges?
    A: Particles that cause an electric field.
    Q: What is an electric field?
    A: A side effect of electric charge.

    Is true? I didn’t do very well for my physics, unfortunately, so I can’t see things like that straight away.

  89. 89
    aiguy says:

    WinglesS,

    Correct me if I’m wrong, but I don’t think charge is a force. It’s a fundamental property that causes forces which are described by the laws you mention.
    Quite right, my sloppiness – the force is the electro-motive force that acts on the charge.

    I assume you take my point then, and see how the definition of electric charge is not vacuous; it is tied into a rich network of concepts that is ultimately grounded in terms of observable effects. In contrast, it is vacuous to define “intelligent causation” merely as the cause of CSI, and then explain the presence of CSI by intelligent causation.

  90. 90
    Q says:

    I’d say that’s a reasonably fair statement, from one direction of what I was indicating.

    But, it isn’t really a contradiction to what aiguy suggested. By analogy, it’s kind of like how E = mc² shows that energy is just a different interpretation of matter, and m = E/c² shows that mass is just a different interpretation of energy. I used the term “side effect” instead of “interpretation” in the earlier claim, and didn’t mean them to be different claims.

    We can say that electric force is just a different interpretation of charge, electric field is just a different interpretation of charge, and electric field is just a different interpretation of electric force. For static charges, of course. Otherwise, magnetism rears its head and messes it all up!

  91. 91
    Q says:

    Oops! I forgot the attribution in 90. It was for WinglesS in 88.

  92. 92
    WinglesS says:

    aiguy,

    I assume you take my point then, and see how the definition of electric charge is not vacuous; it is tied into a rich network of concepts that is ultimately grounded in terms of observable effects. In contrast, it is vacuous to define “intelligent causation” merely as the cause of CSI, and then explain the presence of CSI by intelligent causation.

    So you’re saying the concept of charge is science despite our inability to define charge from non-charge apart from its effects, as long as these effects are tied to a “rich network of concepts ultimately grounded in terms of observable effects”. If that’s what you’re saying, I think you’re being kind of vague, but from my limited understanding, I assume that you mean that there are formulae that describe the behavior of electric fields while there are none for CSI.

    Hmm, ID probably is a new science in that area; although they seem to have proposed a law of conservation of information, I think it’s true that ID is lacking in the formulation of such laws. There exists a law called Zipf’s law for languages, but I wonder if that’s related to ID in any way as well. Enlighten me though, on any such laws for Darwinistic Evolution, the alternative to ID. (Natural selection seems to me more of a result than a law, btw.) I’ve never heard of any evolutionary formula, but then I’m no biologist.

  93. 93
    aiguy says:

    WinglesS,

    So you’re saying the concept of charge is science despite our inability to define charge from non-charge apart from its effects, as long as these effects are tied to a “rich network of concepts ultimately grounded in terms of observable effects”. If that’s what you’re saying, I think you’re being kind of vague, but from my limited understanding, I assume that you mean that there are formulae that describe the behavior of electric fields while there are none for CSI.
    No, let’s assume Dembski is right, just for the sake of argument, and CSI is a well-defined property. The problem, rather, is that while there are precise and empirically grounded characterizations of charge, and all of the fields and forces described in physics, there are no such characterizations of intelligence. And no, it doesn’t have to be a formula; it just has to be a sufficiently specific description so that we can evaluate whether or not the descriptions match our observations.

    Let’s take an example which is a bit simpler than electricity – Newtonian gravity. Is gravity a vacuous explanation of falling objects?

    Q: What causes objects to fall?
    A: Gravity.
    Q: What is gravity?
    A: That which causes things to fall.

    If this were how Newton had defined gravity, it would have been quite analogous to defining “intelligence” as “that which causes CSI”. However, we would never have heard of Newton, because nobody would ever have paid any attention to such a silly circular explanation. Molière famously lampooned just such explanations; paraphrasing from memory here:
    Q: Oh learned doctor, why does Opium make one sleep?
    A: Because of course Opium contains a sleep-inducing agent!
    (laughter)

    So how is Newton’s theory not just a circular definition like this? Because he defined gravity not merely as something that causes things to fall, but rather as a force that acts between any two objects and causes acceleration of both objects, and that this force varies with the inverse of the square of the distance between the two objects, and in proportion to the product of the two masses and to a constant, and this constant is given precisely, and the acceleration of each object will vary in proportion to the force and inversely with its mass, and we can measure the acceleration, and we can measure the mass, and so on. And because gravity is characterized in such terms, rather than merely in terms of causing whatever we wish to explain, people could actually evaluate whether or not this thing was responsible for observed phenomena.
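    In symbols, that characterization amounts to the familiar Newtonian forms (stated here only for reference):

        \[
        F = G\,\frac{m_1 m_2}{r^2}, \qquad a_1 = \frac{F}{m_1}, \quad a_2 = \frac{F}{m_2}.
        \]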


    Newton’s gravity was so well characterized that people for the first time could see this: That which causes an apple to fall to Earth is the same thing as that which causes the planets to move in their orbits!

    Now, ID makes an analogous claim of identical causes: That which causes a human being’s intelligent behavior is the same thing as that which causes the CSI in biology. Unfortunately, if we have simply defined intelligence as the cause of all CSI, there is no way to evaluate whether or not this is true. So ID’s claim cannot be considered scientific.

    Enlighten me though, on any such laws for Darwinistic Evolution, the alternative to ID.
    No, Darwinistic Evolution is not the alternative to ID. There are any number of alternatives to ID. And I’m no big fan of evolutionary biology, so I’ll let somebody else respond to that.

  94. 94
    Daniel King says:

    Q:

    There are any number of alternatives to ID.

    Interesting…

    Please list a dozen.

  95. 95
    aiguy says:

    Daniel,

    Please list a dozen.

    Given the level of detail that ID provides (“CSI in biology is caused by intelligence, and intelligence is that which creates CSI”) one could make up a dozen on the spot. But here is one I’m particularly fond of; perhaps you’ve heard of it. It is a theory called X-Force Theory.

    The claim of X-Force theory is “The complex form and function we see in biology is due to the X-Force”. And what evidence do we have? Well, X-Force is what enables all complex form and function to arise. For example, when a human being builds a complex machine, that is X-Force at work. So, since the forms in biology are complex like the ones we see that humans build, that is evidence that X-Force is responsible.

    This theory is actually quite similar to ID, except that instead of positing that “intelligence” is what enables people to design complex machinery, X-Force theory posits that “X-Force” is responsible. One advantage of X-Force theory over ID theory is that X-Force isn’t bogged down by metaphysical issues like libertarian free will, consciousness, and so on. It’s just X-Force.

    Perhaps you will complain that I have not provided an independent, operationalized definition of X-Force? Yes, I can see how that might be a problem. But since ID fails to provide an independent, operationalized definition of “intelligence”, we can see that X-Force really is an alternative theory to ID. Neither of them says anything at all about what might be the cause of biological complexity that we can evaluate against empirical evidence.

  96. 96
    j says:

    John Locke, An Essay Concerning Human Understanding, Book IV (1690):

    “[I]t is as impossible to conceive that ever bare incogitative matter should produce a thinking intelligent being, as that nothing should of itself produce matter. Let us suppose any parcel of matter eternal, great or small, we shall find it, in itself, able to produce nothing. For example: let us suppose the matter of the next pebble we meet with eternal, closely united, and the parts firmly at rest together; if there were no other being in the world, must it not eternally remain so, a dead inactive lump? Is it possible to conceive it can add motion to itself, being purely matter, or produce anything? Matter, then, by its own strength, cannot produce in itself so much as motion: the motion it has must also be from eternity, or else be produced, and added to matter by some other being more powerful than matter; matter, as is evident, having not power to produce motion in itself. But let us suppose motion eternal too: yet matter, INCOGITATIVE matter and motion, whatever changes it might produce of figure and bulk, could never produce thought: knowledge will still be as far beyond the power of motion and matter to produce, as matter is beyond the power of nothing or nonentity to produce. And I appeal to every one’s own thoughts, whether he cannot as easily conceive matter produced by NOTHING, as thought to be produced by pure matter, when, before, there was no such thing as thought or an intelligent being existing? Divide matter into as many parts as you will, (which we are apt to imagine a sort of spiritualizing, or making a thinking thing of it,) vary the figure and motion of it as much as you please — a globe, cube, cone, prism, cylinder, &c., whose diameters are but 100,000th part of a GRY, will operate no otherwise upon other bodies of proportionable bulk, than those of an inch or foot diameter; and you may as rationally expect to produce sense, thought, and knowledge, by putting together, in a certain figure and motion, gross particles of matter, as by those that are the very minutest that do anywhere exist. They knock, impel, and resist one another, just as the greater do; and that is all they can do. So that, if we will suppose NOTHING first or eternal, matter can never begin to be: if we suppose bare matter without motion, eternal, motion can never begin to be: if we suppose only matter and motion first, or eternal, thought can never begin to be.”

  97. 97
    WinglesS says:

    aiguy says,

    Now, ID makes an analogous claim of identical causes: That which causes a human being’s intelligent behavior is the same thing as that which causes the CSI in biology. Unfortunately, if we have simply defined intelligence as the cause of all CSI, there is no way to evaluate whether or not this is true. So ID’s claim cannot be considered scientific.

    I don’t think intelligence can be defined as the source of all CSI but rather as a source of CSI. As I said it is possible, although very unlikely, for CSI to arise without intelligence. And it does bring us back to the first point, when I said we can’t evaluate whether a given case of CSI is caused by intelligence, but we can conclude that it is very probable that it is.

    You’ve only named one alternative to ID, but X-Force is pretty much the same as ID imo. ID doesn’t have to be bogged down by metaphysical issues like libertarian free will, consciousness, and so on. Those issues come into the picture due to support of ID from the Christian community, and are inherited from Christian philosophy. Perhaps it would be good for you to list another alternative that isn’t like ID or Darwinian Evolution at all.

    Perhaps your point that ID shouldn’t be considered science is valid, but taking your point of view, I find myself doubting that Darwinian Evolution (can we prove that a case of CSI is caused by Darwinian Evolution?), Abiogenesis, and the Oort cloud are scientific concepts, and that list might grow to include many other theories and concepts that others will argue vehemently are science (dark energy, perhaps). Perhaps someone might clear my thinking on this issue.

    On another note, sorry for taking up so much of your time. Although I think your arguments have helped my thinking on some issues, they have also confused me about what should be considered science, not because ID doesn’t fit your criteria, but because so many other concepts that pass as science do not appear to do so.

  98. 98
    kairos says:

    #93 aiguy

    Newton’s gravity was so well characterized that people for the first time could see this: That which causes an apple to fall to Earth is the same thing as that which causes the planets to move in their orbits!

    That’s right, but isn’t that what actually happens for intelligence? From archeology to ET searching (and in lots of other fields, for that matter), deductions and inferences about who could have produced certain artifacts or signals are made with very high confidence precisely because we know what intelligence is. Are you sure that for Newton’s theory there is a real qualitative difference and not just a quantitative one?

    Now, ID makes an analogous claim of identical causes: That which causes a human being’s intelligent behavior is the same thing as that which causes the CSI in biology. Unfortunately, if we have simply defined intelligence as the cause of all CSI, there is no way to evaluate whether or not this is true. So ID’s claim cannot be considered scientific.

    Everyone may legitimately adopt so strict a definition of what is scientific, but then the same person should, to be coherent, take responsibility for stating that lots of scientific fields are, by that definition, no longer scientific.

  99. 99
    kairosfocus says:

    H’mm:

    It seems that the basic ID-related definitions issue is surfacing again. [AIG has asked me to come across and look at no 93.]

    I note first that definition is itself a process, not a statement. Further to this, precising statements of genera and differentia and/or of necessary and sufficient conditions are logically subsequent to basic concept formulation and exemplification.

    Namely, we have to have a firm enough concept based on examples and experiences, for definitions to be seen as reliably separating examples and non-examples.

    Such concepts imply another style of definition or at least recognition — one that Aquinas was fond of pointing out: instantiation through and family resemblance across examples.

    Precising verbal or quantitative definitions help us to mark borders, where this is possible.

    But we should note — on a point of proportion — that biology is a science where there is no generally accepted exception-less necessary and sufficient statement of what “life” is; the recent “what is capable of undergoing Darwinian macro-evolution” is notoriously question-begging and tendentious, for instance. And indeed there are some serious borderline cases. But, rightly, biology is a recognised science.

    Similarly, one can always challenge a definition — just as one can always challenge any claim. Then we have a choice: infinite regress, or resolution at first plausibles resting in part on self-evident truths and otherwise on comparative difficulties across worldview options. In short, we are here probing into the Lakatosian worldview core of the relevant scientific research programmes.

    The proper method for such is comparative difficulties analysis across factual adequacy, coherence and explanatory elegance.

    In that context, I hold that we directly, and as the first undeniable fact of all, experience ourselves as intelligent agents, and that we observe one another as similarly intelligent agents. So — as I discuss in the epistemology thread that spawned this one, now going into a model identifying adaptive control systems and i/o front end processors as illustrations familiar enough from the world of technology — we form the concept and look for family resemblance to identify other such agents.

    So, let us put all of this in proportion and keep out of that ever so tempting morass, selective hyper-skepticism. [And JT if you are hanging around, that is my descriptive term for a concept neatly identified by Simon Greenleaf. Latterly others have begun to take it up in various versions. You still owe me a major apology for slander.]

    Okay, in my always linked section A I take a stab at an adequate basic definition of several key concepts. Here is my stab at intelligence:

    First, let us identify what intelligence is. This is fairly easy: for, we are familiar with it from the characteristic behaviour exhibited by certain known intelligent agents — ourselves. Specifically, as we know from experience and reflection, such agents take actions and devise and implement strategies that creatively address and solve problems they encounter; a functional pattern that does not depend at all on the identity of the particular agents. In short, intelligence is as intelligence does. So, if we see evident active, intentional, creative, innovative and adaptive [as opposed to merely fixed instinctual] problem-solving behaviour similar to that of known intelligent agents, we are justified in attaching the label: intelligence. [Note how this definition by functional description is not artificially confined to HUMAN intelligent agents: it would apply to computers, robots, the alleged alien residents of Area 51, Vulcans, Klingons or Kzinti, or demons or gods, or God.] But also, in so solving their problems, intelligent agents may leave behind empirically evident signs of their activity; and — as say archaeologists and detectives know — functionally specific, complex information [FSCI] that would otherwise be improbable, is one of these signs.

This preliminary point immediately lays to rest the insistent assertion that inference to design is somehow necessarily “unscientific” — as such is said to always and inevitably be about improperly injecting “the supernatural” into scientific discourse . . . For, given the significance of what routinely happens when we see an apparent message [we infer to message in the teeth of the possibility of lucky noise, as section A discusses, based on precisely FSCI], this is simply not so; even though certain particular cases may raise the subsequent question: what is the identity of the particular intelligence inferred to be the author of certain specific messages? (In turn, this may lead to broader philosophical questions, that is, worldview-level questions. Observe carefully: such questions go beyond the “belt” of science theories, proper, into the worldview issues that — as Imre Lakatos reminded us — are embedded in the inner core of scientific research programmes, and are addressed through philosophical rather than specifically scientific methods.)

    In short, those who would make such a rhetorical dismissal, would do well to ponder anew the cite at the head of this web page. For, the key insight of Cicero [C1 BC!] is that, in particular, a sense-making (thus, functional), sufficiently complex string of digital characters is a signature of a true message produced by an intelligent actor, not a likely product of a random process. He then [logically speaking] goes on to ask concerning the evident FSCI in nature, and challenges those who would explain it by reference to chance collocations of atoms.

    That is a good challenge, and it is one that should not be ducked by worldview-level begging of serious definitional questions or — worse — shabby rhetorical misrepresentations and manipulations.

    Therefore, let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that “got lucky”? . . . [go to the always linked for more]

    Now, is that helpful or not? Why/why not?

    GEM of TKI

  100.
    tribune7 says:

    AIG — Newton addressed how and why things fell, not whether they fell.

    ID addresses, not why and how things are designed, merely how we can tell that they are.

  101.
    kairos says:

    #97 WinglesS

    I don’t think intelligence can be defined as the source of all CSI but rather as a source of CSI. As I said it is possible, although very unlikely, for CSI to arise without intelligence.

I don’t agree; the relevance of CSI is just this: to provide a useful and reliable reference to intelligence, either directly (an intelligent agent) or indirectly (the product of an algorithm that was produced by an intelligent agent). This is possible because “very unlikely” here means an event whose probability is well below the UPB.
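To put a number on "well below the UPB": the universal probability bound discussed in the ID literature is usually given as 1 in 10^150, or roughly 500 bits of improbability. Below is a minimal sketch of that threshold check in Python; the function names and the 150-character example are my own illustration, not drawn from any ID source.

```python
import math

UPB_BITS = 500.0   # universal probability bound, ~1 in 10^150, expressed in bits

def improbability_bits(alphabet_size: int, length: int) -> float:
    """Bits of improbability of hitting one specific string of the given length by chance."""
    return length * math.log2(alphabet_size)

def below_upb(bits: float) -> bool:
    """True if the chance hypothesis is less probable than the UPB allows."""
    return bits > UPB_BITS

# Illustration: a specific 150-character message over a 30-symbol alphabet
bits = improbability_bits(30, 150)
print(round(bits), below_upb(bits))   # ~736 bits, True: well past the 500-bit bound
```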

  102.
    gpuccio says:

    #95 aiguy:

    Please, be serious! Intelligence is an empirical experience and always has been. If you want to call it X-force, in your weird language, you are welcome. We all know what it is you are referring to as “X-force/intelligence”, exactly because it is an empirical experience. Consciousness, free will and intelligence are all empirical experiences experienced by… guess who? Our own consciousness. You can cheat with words as long as you want, but you can’t cancel that simple and universal fact. That’s why words like “consciousness”, “will” and “intelligence” have been created, and everyone agrees about their empirical meaning.

    #97 WinglesS:

Intelligence is the only known empirical cause of CSI. In a recent thread I asked someone to point to a single example of CSI which is not due to intelligence. He cited Bénard cells, which obviously are not CSI. I tried to explain what CSI is (it’s not that difficult), but obviously you can’t explain to a Darwinist the meaning of a concept which falsifies what he believes in and expect him to accept it…

    Anyway, nobody has ever pointed to a single example of CSI (correctly defined) which is not the product of intelligence. Except, obviously, biological information, which is the issue we are discussing.

    That’s an empirical truth. The fact that CSI “could” come into existence by chance, although very unlikely, is a logical truth, but it has no empirical relevance. The fact is, it has never come into existence by chance. And that’s more than enough for a scientific, empirical theory like ID.

    And, after all, in the case of biological information, we are not dealing with a single, isolated case of CSI, which could “in theory” be the only example in the universe of CSI generated by chance. We are dealing, indeed, with billions of different examples and levels of CSI, independent one from another. Completely different functional proteins, different body plans, different regulatory networks, and so on. Therefore, even the theoretical, logical possibility of CSI being, once in the universe, generated randomly, is completely irrelevant.

    Again, intelligence is the only known cause of CSI. Period.

  103.
    kairosfocus says:

    Hi GP:

    Actually, it is even stronger, and has been since Cicero:

Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 – 90.]

Clearly, ever since C1 BC, it has been recognised that a sufficiently long and meaningful digital data string is utterly improbable on chance, but such a string is well known as the product of an author, in this case Ennius. So, the concept of CSI, in the form of functionally specified complex information, was recognised as a signature of intelligent agents as long ago as C1 BC.
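A rough number can be put on Cicero’s illustration. A minimal sketch, assuming his “one-and-twenty letters” and, purely for illustration, a 40-character verse:

```python
import math

ALPHABET  = 21   # Cicero's "one-and-twenty letters"
VERSE_LEN = 40   # an illustrative single-verse length, in characters

# Probability that one uniform random draw of VERSE_LEN letters matches the verse exactly
log10_p = -VERSE_LEN * math.log10(ALPHABET)
print(f"about 1 chance in 10^{-log10_p:.0f}")   # roughly 1 in 10^53 for a single verse
```

On these assumed figures, three or four such verses already push the odds past the 1-in-10^150 threshold that comes up later in this thread; a full page of the Annals is far beyond it.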

Indeed, that is common sense. For instance, no one who comes to this web page assumes by default that the posts in it are lucky noise, which is of course physically and logically possible. But that is so utterly improbable that we reliably infer to agency once something is functionally specified and complex enough that chance is utterly unlikely to “discover” it in the relevant configuration space.

However, when the direct implication of such FSCI cuts across one’s worldview (e.g. on DNA and the complex organisation of the observed life-facilitating cosmos), suddenly many impose an unreasonably high standard of proof, often on the excuse that “extraordinary claims require extraordinary proof.”

Well, one could argue in response that the claim that chance would do such an extraordinary thing itself requires an extraordinary degree of proof, especially since in ALL directly known cases FSCI is the product of intelligent agents.

But, it is better to point out that extraordinary claims are simply those we do not expect and are inclined to be incredulous over: that is, the perception of extraordinariness is a psychological fact, not an epistemic one. Instead of imposing unreasonably high standards of proof when our preferences are in contention, we should rather look to adequacy of evidence and be willing to be wrong, as any claim about the empirical world can be.

Then, we can look at what happens when worldview-level commitments are not in contention (e.g. on the digital bit strings in this web page), and soon it is plain that selective hyperskepticism is at work.

    This sort of unreasonableness — by its very unreason-able nature — cannot be overturned directly by logical and evidential argument, but as more and more of the less committed, ordinary unprejudiced people see that this is what is going on, the tide of opinion will shift.

For those who are genuinely confused and doubtful [as are many who come to this blog and are puzzled by why so many here who are educated and experienced do not buy into the evo mat metaphysical fairytale of origins, too often disguised as “science”], over time, as the evidence comes across from many directions, it will eventually dawn that of course the truth is obvious. Just as it always was, save to those blinded by modernist and post/ultra-modernist selective hyperskepticism. If you are blind, it is of no account how bright and plain the sun is; you simply cannot see it.

But, if we begin to even dimly see that science is inherently provisional, that it works by inference to best empirically anchored explanation, and thence that this is inescapably a philosophical (epistemological) issue — indeed science used to be called natural philosophy — then we will be open to see that Lakatos was right: scientific research programmes have a belt of theories surrounding a worldview core. And that core tends to be protected by the theories.

Only when the theories become more and more plainly deficient in explanatory power [and resort may be made to institutional power to suppress dissent] does the central thinking become exposed to challenge. So, as the power of the design inference is making itself felt once more in our civilisation, that is beginning to happen.

    THAT is why the evolutionary materialist opposition to the inference to design is so intense and ruthless: the course of a whole civilisation is in contention.

    GEM of TKI

  104.
    Q says:

    aiguy, I’ve been thinking about your argument about how “ID fails to provide an independent, operationalized definition of “intelligence””

    I suggest a different approach to your concerns than simply repeating your assertion.

    Specifically, I’ve been addressing issues of ID’s claims regarding probability. Those claims, I suggest, are the first order concerns of ID. Claims about intelligence are only second or third order concerns, at least when viewed through Dembski’s explanatory filter.

    (For my explanation, I’ll put intelligence in quotes, to indicate that it is the concept in dispute, and that no specific attributes of intelligence are assumed.)

    What I mean about “first order concerns” is that the first step of the explanatory filter is to make an observation about something. This observation step does not require “intelligence”, as the observation could be by man or machine.

At that step, inference is used to determine a probability that the observation could have been caused by natural laws/regularity. That again requires no “intelligence”, as the inference could have come from a universal database following inferential algorithms, and have been performed by man or machine.

    The next step is to test for chance. This is also a probability test. The test can also be performed by extrapolating our known experiences about chance to determine a probability that this result was caused by random events. The test could be performed according to knowledge-based rules, so requires no “intelligence”, and could be performed by man or machine.

    The third test, for design, could also be based upon previous history. For example, we observe some striations on old arrowheads that haven’t been observed to occur on rocks that have never been arrowheads. This again can be a rules-based test, so it need not require “intelligence” to perform the test.

    So far, working down Dembski’s explanatory filter, no “intelligence” is needed to perform the tests. All that is needed is the means to form observations (collect data), access the history of knowledge, extrapolate it according to rules of extrapolation, and arrive at a probability of the observation being a result of some cause.
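Q’s point that each node is a rules-based probability check, performable by man or machine, can be made concrete. The sketch below is my own illustration of a filter in that spirit; the thresholds and function names are assumptions made for the example, not Dembski’s actual formalism:

```python
def explanatory_filter(p_law: float, p_chance: float, specified: bool,
                       chance_bound: float = 1e-150) -> str:
    """Rule-based pass through the three decision nodes described above.

    p_law     -- probability the observation follows from known regularities
    p_chance  -- probability of the observation on the relevant chance hypothesis
    specified -- whether the observation matches an independently given pattern
    """
    if p_law > 0.5:                 # node 1: attribute to law / natural regularity
        return "regularity"
    if p_chance > chance_bound:     # node 2: chance remains a live explanation
        return "chance"
    if specified:                   # node 3: small probability plus specification
        return "design"
    return "chance"                 # improbable but unspecified events default to chance

# Example run: no known law, absurdly small chance probability, independent specification
print(explanatory_filter(p_law=0.0, p_chance=1e-200, specified=True))   # -> design
```

Nothing in this sketch invokes any property of an agent; it only applies stored probabilities and a pattern-match, which is the sense in which the tests could be automated.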

The issue of “intelligence” does enter the problem regarding an understanding of what is design. This is where my argument diverges from yours. I don’t care what “intelligence” is. As such, I’m suggesting it is improper to assert that “design” is the result of “intelligence”. My position is along the lines of “intelligence is that set of properties that cause results which have a similar probability of occurring as the results of human action.”

I’m suggesting that intelligence isn’t the known property. Instead, it should be considered as a placeholder for a set of properties to be evaluated. Kind of like our discussion of force, charge, and field. We don’t need to know the mechanism of force. We just need to know that the property we call force is internally consistent with the use of the term “force”. Same for field – we don’t need to know the mechanism of a field to understand that some observations are best explained through the existence of a field.

    In other words, I’m suggesting the understanding of what is “intelligence” is dependent upon the probabilities observed. In ID, that would be the observations that are extrapolated from the observed probabilities about design. I argue against saying that certain events probably result because we “know” that intelligence was involved.

    With this approach, we can now try to fill out the properties that would go into the place holder of “intelligence”. Using “intelligence” as the class of post-hoc explanations of observations is the most consistent with the theories of ID, especially of the explanatory filter, I’m suggesting.

  105.
    vividblue says:

    KF,

    Regarding 103 …AMEN

    Vivid

  106.
    aiguy says:

    kairos,

    That’s right, but isn’t that what does actually happen for intelligence? From Archeology…

    Archeology does not have any notion at all of some abstract class of entities that philosophers call “intelligent agents”. Archeology is the study of ancient human (and only human) civilizations.

    …to ET searching

    There has never been any published scientific inference to an extra-terrestrial life form to explain anything. If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument. If for example they detected a wide-band EM signal emanating from within a pulsar that had prime-number intervals (like the Contact example) I would think most scientists would not accept that life-forms were responsible.

    Are you sure that for Newton’s theory there is a real qualitative difference and not just a quantitative one?

    Well, sure – Newton’s definitions were demonstrably not circular, and did not depend on the truth of dualism or any other untestable proposition.

Everyone may legitimately put so strict a definition of what is scientific, but then the same person should coherently be responsible to state that lots of scientific fields are, with that definition, non-scientific anymore.

    I disagree, but I think we should take this one step at a time. Using the very simple definition of science as meaning “Explanations ought to be definable in terms of things that ultimately we can all experience with our senses”, I’d like folks to agree that ID fails as science because it can’t provide a usable definition of intelligence that will serve ID’s needs (i.e. be able to evaluate the claim that intelligence caused life).

    After that, if you’d like to argue that we can’t define the components of Darwinian evolution, or physics, or chemistry, or some other field in this way, we can argue about that. (For example, I think that everybody knows quite well what a mutation is, and what differential reproduction is, and so on; it’s just that many here don’t believe that these things account for biological complexity. But all that means is that Darwinian evolution is wrong; it does not mean it is unscientific).

    But we should note …that biology is a science where there is no generally accepted exception-less necessary and sufficient statement of what “life” is; … And indeed there are some serious borderline cases. But, rightly, biology is a recognised science.

This is a very illustrative point, kairos. You are right – life is notoriously difficult to define. However, there is no biological theory that attempts to explain anything using “life” as the explanation! If we want to know how slime mold manages to find food, or how flowers orient to the sun, we cannot merely explain these things by saying “Because they are alive!” – this tells us nothing at all that we didn’t already know (that our intuitive category of life seems to apply to slime mold and flowers).

    In just the same way, if you ask “What caused the flowers to be here in the first place?” and I answer “An intelligent cause!”, without saying anything about what “intelligence” is supposed to mean, this tells us nothing at all we didn’t know already. Sure, without any real definition we can categorize anything that could cause living things to exist as “intelligent” (including evolutionary processes, if that is what one thinks is responsible), but this doesn’t actually say anything substantive about the cause at all.

    First, let us identify what intelligence is. This is fairly easy: for, we are familiar with it from the characteristic behaviour exhibited by certain known intelligent agents — ourselves.

    This is exactly the type of circular reasoning I am objecting to!
    What is intelligence? It is what intelligent agents do.
    What are intelligent agents? Beings like us – humans.
    Why do you say humans are intelligent agents? Because they act intelligently….

    Specifically, as we know from experience and reflection, such agents take actions and devise and implement strategies that creatively address and solve problems they encounter; a functional pattern that does not depend at all on the identity of the particular agents. In short, intelligence is as intelligence does.

For starters, this describes evolutionary processes! Darwinian evolution devises and implements strategies and creatively addresses and solves problems; its basic strategy is trial and error, from which other strategies arise. In fact there are neural Darwinists who propose that evolutionary algorithms underlie human intelligence. I AM NOT PROPOSING THAT THESE THEORIES ARE TRUE. Rather, I am pointing out that your definition of intelligence accommodates the theory that you very much wish to exclude from the meaning of “intelligence”.

    So, if we see evident active, intentional, creative, innovative and adaptive

More problems. First, the word “intentional” has no meaning we can evaluate against empirical evidence. Second, in the context of ID, we cannot know if the Designer was innovative and adaptive, or perhaps was merely a one-trick pony as it were: maybe the Designer could create the life forms we see, but is utterly incapable of doing anything else at all – like an idiot savant (forgive the politically incorrect label).

    I think this is helpful, kairos, and I applaud you for being willing to take a crack at developing some sort of characterization to give meaning to ID’s claim. Hopefully we can see that once one actually attempts to say what we mean by “intelligence”, when used as an explanation for life, ID falls apart.

  107.
    magnan says:

    aiguy (#64): ” (M):Whatever the nature of consciousness, it is ultimately not (just) the body and brain.
    (A): And let’s see what sort of empirical evidence you might muster to support this view…”

    As of 1996 the evidence included 61 independent Ganzfeld experiments, 2094 PK experiments using random event generators, and hundreds of other experiments involving tossing dice, dream research, and remote viewing. A lot more has accumulated since. There are high numbers of replications of these and other basic laboratory parapsychological experiments, which allows for meta-analyses of these studies. For instance a meta-analysis on the results of the Stanford Research Institute remote viewing experiments undertaken between 1973 and 1988 returned odds against the hypothesis that the results were due to chance of more than a billion billion to one. These studies were replicated by the Princeton Engineering Anomalies Research Laboratory. (from Radin, 1989, 2006).

    Meta-analysis is widely used today in  psychology, sociology, and especially medical research (primarily therapy evaluations and epidemiology). A casual look at the British Medical Journal (BMJ) shows literally hundreds of such analyses conducted since 1999.

Just one example of the many independent replications concerns EEG correlations in two separated people (listed below). This is just the tip of the iceberg.

    -Extrasensory electroencephalographic induction between identical twins, T.D. Duane and T.  Behrend, Science,  vol. 150; 367  (1965)   note: this was in the 1960s before the skeptic barrier was fully up in the mainstream scientific journals

    -Possible physiological correlates of psi cognition,  C.T. Tart, International Journal of Parapsychology, 5, 375-386 (1963)

    These two papers generated a stream of conceptual replications by different groups, most of which had positive results. There are too many of these to list completely, but some examples:

    -Intersubject EEG coherence: is consciousness a field?, D.W. Orme-Johnson, M.C. Dillbeck, R.K. Wallace, and G.S. Landrith, International Journal of Neuroscience, 16,203-209 (1982)

    -Information transmission under conditions of sensory shielding,  R. Targ and H. Puthoff,  Nature,   252, 602-607 (1974)

    – EEG correlates to remote light flashes under conditions of sensory shielding,  C.T. Tart, H. Puthoff, R. Targ (eds.), Mind At Large: IEEE Symposia on the nature of extrasensory perception, Hampton Roads Publishing Co. 1979, 2002 

    -Correlations between brain electrical activities of two spatially separated human subjects,  J. Wackermann, C. Seiter, H. Keibel, and H. Walach,  Neuroscience Letters, 336, 60-64 (2003)

-Event-related EEG correlations between isolated human subjects, D. I. Radin, Journal of Alternative and Complementary Medicine, 10, 315-324 (2004)

Another example of a type of experiment in parapsychology is in the area of human intentional effects on other living organisms such as cell cultures and other animals. Numerous controlled studies have been conducted by legitimate researchers. The following is a short list of some of the most interesting ones. I can give you references if you are interested.
    -Algae and Psychokinesis
    C. M. Pleass and N. Dean Dey
    -Psychokinesis and Bacterial Growth
    C. B. Nash
    -Psychokinesis and Fungus Culture
    J. Barry
    -Psychokinesis and Red Blood Cells
    W. Braud, G. Davis and R. Wood
    -Red Blood Cells and Distant Healing
    W. Braud
    -Wound Healing in Mice and Spiritual Healing (& subsequent
    replication)
    B. Grad, R. J. Cadoret, G. I. Paul
    -Malaria in Mice: Expectancy Effects and Psychic Healing
    G. F. Solfvin
    -Arousing Anesthetized Mice Through Psychokinesis
    G. K. Watkins and A. M. Watkins
    -“A Dog that seems to know when his Owner is Coming Home”
    R. Sheldrake

    If you are an open-minded skeptic perhaps you would be willing to peruse some of the general sources of pertinent evidence below.

    – The Conscious Universe by Dean Radin
    – Entangled Minds by Dean Radin
– Best Evidence: An Investigative Reporter’s Three-Year Quest to Uncover the Best Scientific Evidence for ESP, Psychokinesis, Mental Healing, Ghosts and Poltergeists, Dowsing, Mediums, Near Death Experiences, Reincarnation, and Other Impossible Phenomena That Refuse to Disappear by Michael Schmicker
    – Journal of Scientific Exploration, published by the Society for Scientific Exploration
    – Mind At Large: Institute of Electrical and Electronics Engineers Symposia on the Nature of Extrasensory Perception (Studies in Consciousness) by Charles C. Tart, Harold E. Puthoff and Russell Targ (Editors)
    – The Afterlife Experiments: Breakthrough Scientific Evidence of Life After Death by Gary R. Schwartz
    – Twenty Cases Suggestive of Reincarnation by Ian Stevenson
    – Near Death Experiences in Survivors of Cardiac Arrest: A Prospective Study in the Netherlands by Dr. Pim van Lommel, in the British medical journal The Lancet, Dec. 15 2001

    Much of the hard evidence for psi phenomena today is founded on laboratory experiments and not anecdotal evidence. This is just a sample of that body of evidence that simply can’t reasonably be dismissed as fraud, trickery or self-delusion. Of course you are free to simply scoff and ignore this information since you know it can’t be valid. I would term that selective hyperskepticism.

I would add that I believe much “anecdotal evidence” also cannot reasonably be dismissed – this is in the form of the testimony of vast numbers of ordinary people that ESP events happening to them are real and often involve verified information.

  108.
    aiguy says:

    WinglesS,

    You’ve only named one alternative to ID but X-Force is pretty much the same as ID imo.

    Yes, that is exactly the point. X-Force theory has all of the same explanatory power, predictive power, and testability as ID theory has, which is absolutely none whatsoever. As it stands now, these are both parodies of useful scientific theories.

    ID doesn’t have to be bogged down by metaphysical issues like libertarian free will, consciousness, and so on.

But once we strip all of this metaphysics away from the meaning of intelligence, nothing at all remains that we can use to make sense of ID theory! Just read what others here say about intelligence – that it does entail free will, qualia, etc. This is the heart of what I object to in ID.

Perhaps it would be good for you to list another alternative that isn’t like ID or Darwinian Evolution at all.

    I think various ideas about structuralism aren’t like either ID or evolution, but I don’t think they are “alternatives” yet because, like ID, none of these have been fleshed out into theories either. But the important point here is that just because we don’t have a good alternative does not make meaningless theories into good science.

For example, we do not know how proteins manage to get folded into functional 3-D configurations inside of cells – it is a big mystery. Shall I propose that some little tiny invisible intelligent agent resides inside each of our cells, busily folding up proteins? No, that would be a very bad theory, and the fact that no other theory has been accepted doesn’t make it any better.

    (By the way, why doesn’t ID assert that intelligent causation is responsible for protein folding? After all, it has already been shown that proteins could never fold themselves just by random chance; it would take years instead of milliseconds or seconds for that to happen!)

    Perhaps your point that ID shouldn’t be considered science is valid

    Thanks, WinglesS, I’m very gratified to have made the point.

As for your doubts that other scientific disciplines suffer from circular definitions and unsupportable metaphysical claims, I disagree, but as you say that is another discussion. If you think about my example of Newtonian gravity a bit more you might be able to see that when scientists do characterize some hypothetical cause with enough detail, we can test to see if our characterization corresponds to a real cause or not. (And nobody asserts that “abiogenesis” is a theory; it is the phenomenon that we need a theory to explain.)

  109.
    kairosfocus says:

    Q:

    I would reconceptualise.

    1] From TBO’s TMLO on, the key thing is configurations and clustering. The characteristic objects of ID investigations are subject to multiple [generally speaking, more than 10^150] potential configurations, with significantly different discernible outcomes. Some of these are functional or specific in recognisable and interesting ways.

2] The issue then becomes to ask whether the observed organised complexity is rooted in chance, mechanical necessity showing itself in natural regularities, or agency.

    3] These are generally and fairly easily observed causal patterns and may be independently present in given situations — i.e they are not mutually reducible [consider my favourite falling, tumbling die involved in playing a game]. Natural regularities are like heavy objects tending to fall if not supported. Chance is like the way a die as such an object then comes to rest with one of six faces uppermost. Agency is like our experience of using dice in games, to achieve our purposes.

4] When the config space is such that islands of functionality are credibly less than 1 in 10^150 of the overall space [that is why I use the range 10^150 to 10^300; see the sketch just after this list], no reasonable random walk will credibly be able to find such an island on the gamut of the cosmos’s matter and duration. Thus, there is no credible basis for hill-climbing to spontaneously begin, e.g. by body-plan level natural selection or its analogues in the pre-biotic world.

    5] But we do know by much direct observation and experience, that agents, through understanding the logic of configurations, are able to configure entities to get close-to-function, and do troubleshooting to get the complex object to work. For instance I compose this comment and do cleanup editing on typos.

    6] In short it is known that intelligent agents can create FSCI, and why — and why it is utterly unlikely for chance to do so. High-contingency situations are not dominated by natural regularities.

7] Thus, the inference to best (and known reliable) explanation is to agency. The problem is not with the logic or the evidence, but that the implications for certain cases cut clean across dominant parties in the sciences, education, media and power-centres of our civilisation.
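The arithmetic behind point 4 is easy to reproduce. A minimal sketch, taking the ~10^150 figure used above as the estimate of available probabilistic resources and a 1000-bit configuration space (the upper end of the quoted range); the figures come from the comment, the code is only my illustration:

```python
from math import log10

trials_exp = 150     # log10 of available trials (~10^150, the figure used above)
space_bits = 1000    # a 1000-bit configuration space, the upper end of the quoted range

space_exp    = space_bits * log10(2)     # log10 of the number of configurations (~301)
coverage_exp = trials_exp - space_exp    # log10 of the fraction of the space sampled

print(f"space ~ 10^{space_exp:.0f} configs; "
      f"10^{trials_exp} trials sample ~ 1 part in 10^{-coverage_exp:.0f} of it")
```

On those assumptions, even 10^150 blind trials touch only about one part in 10^151 of the space, which is the sense in which sparse islands of function would not credibly be found by random walk.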

In part 2 I will give a case in point of the sort of thermodynamics thinking (NB, JT) that underlies this insight.

    GEM of TKI

  110.
    kairosfocus says:

    PS: On micro-jets and nanobots, from Appendix 1 section 6, my always linked:

    ______________
6] It is worth pausing now to introduce a thought experiment that helps underscore the point, by scaling down to essentially molecular size the tornado-in-a-junkyard-forms-a-jet example raised by Hoyle and mentioned by Dawkins with respect . . . :

    NANOBOTS & MICRO-JETS THOUGHT EXPT:

i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard: it is logically and physically possible that it could do the same, but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet; i.e. we see here the logic of the 2nd Law of Thermodynamics at work.)

    ii] Now, let us shrink the example, to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e they are of about micron scale [for convenience] and act as “large molecules.” Let’s say there are about a million of them, some the same, some different etc. In principle, possible. Do so also for a car, a boat and a submarine, etc.

    iii] In several vats of a convenient fluid, each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts, so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation.

    iv] In the control vat, we simply leave nature to its course.

Q: Will a car, a boat, a sub, or a jet, etc., or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.]

    ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder.

    Q: Why?

A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or even more so, functionally configured flyable jet ones. (To explore this concept in more detail, cf. the overviews here [by Prof Bertrand of U of Missouri, Rolla], and here — a well-done research term paper by a group of students at Singapore’s NUS. I have extensively discussed this case with a contributor to the ARN known as Pixie, here. Pixie: Appreciation for the time & effort expended, though of course you and I have reached very different conclusions.)

    v] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course, work, and it replicates bonding at random. Work is done when forces move their points of application along their lines of action. Thus in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement depending on the cluster of forces that have done the work. This of course reflects the link between work in the physical sense and in the economic sense; thence, also the energy intensity of an economy with a given state of technology. Thereby, too, lies suspended much of the debate over responses to feared climate trends, but that is off topic . . .]

    Q: After a time, will we be likely to get a flyable nano jet?

A: Overwhelmingly, on probability, no. (For, the vat has ~ [10^6]^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across, say, 1 cm or so for an assembled jet etc., or even just a clumped-together cluster of micro-parts. [A 1 cm cube has in it [10^4]^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.] But also, since the configuration is constrained, i.e. the mass in the microjet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state. That is, we have here used the nanobots to essentially undo diffusion of the micro-jet parts. The resulting constraint on spatial distribution of the parts has reduced their entropy of configuration. For, where W is the number of ways that the components may be arranged consistent with an observable macrostate, Boltzmann gives entropy as s = k ln W; so, since W has fallen, s too falls on moving from the scattered to the clumped state.)

vi] For this vat, next remove the random cluster nanobots, and send in the jet assembler nanobots. These recognise the clumped parts, and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found, even further.) A flyable jet results — a macrostate with a much smaller statistical weight of microstates. We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus we see that the number of microstates accessible due to the change, [a] scattered –> clumped and now [b] onward –> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann’s result s = k ln W, we also have seen that the entropy has fallen in succession as we moved from one state to the next, involving a fall in s on clumping, and a further fall on configuring to a functional state; dS_tot = dS_clump + dS_config. [Of course, to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But, it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.]

vii] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reassembling from a random “macromolecule” to a configured functional one. That is: dS_tot (direct) = dS_clump + dS_config.

viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct microjet assembly nanobots, but in each case we let the control programs vary at random a few bits at a time, say by hitting them with noise bits generated by a process tied to a Zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times.

Q: Given the complexity of the relevant software, will we be likely, for instance, to come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.)

    My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]

    ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/ origin of life issue?

x] The micro-jet is of course an energy converting device which exhibits FSCI, and we see from this thought expt why it is utterly improbable, on the same grounds as those on which we base the statistical view of the 2nd law of thermodynamics, that it should originate spontaneously by chance and necessity only, without agency.

xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO’s term chemical work, fine] and configuring work can be identified and applied to the shift in entropy through the same s = k ln W equation. For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain — constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, through Brillouin, TBO link s = k ln W to information viewed as “negentropy,” citing as well Yockey and Wicken’s work and noting their similar definition of information; i.e. this is a natural outcome of the OOL work in the early 1980’s, not a “suspect innovation” of the design thinkers in particular. BTW, the concept of complex, specified information is also similarly a product of the work in the OOL field at that time; it is not at all a “suspect innovation” devised by Mr Dembski et al, though of course he has provided a mathematical model for it. [I have also just above pointed to Robertson, on why this link from entropy to information makes sense — and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.]
    ______________

    In short, the issue is serious, and is not dependent on dubious metaphysical speculation but instead is a matter of the generally accepted and commonly used underlying principles of thermodynamics and commonplace experience.
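The entropy bookkeeping in point v of the excerpt can be illustrated numerically. A minimal sketch, using the crude W ≈ (number of accessible cells)^N approximation for N distinguishable parts in the dilute limit; the approximation and the code are mine, while the cell counts are the ones given in the excerpt:

```python
from math import log

K_B   = 1.380649e-23   # Boltzmann constant, J/K
N     = 1_000_000      # micro-jet parts in the thought experiment
VAT   = 10 ** 18       # one-micron locational cells in the ~1 m^3 vat
CLUMP = 10 ** 12       # cells in the ~1 cm^3 clumped region

# With W ~ cells**N, the configurational entropy change on clumping is N * k * ln(CLUMP/VAT)
dS_clump = N * K_B * log(CLUMP / VAT)
print(f"dS_clump ~ {dS_clump:.2e} J/K")   # about -1.9e-16 J/K: the entropy of configuration falls
```

The further fall, dS_config, on moving from a random clump to a flyable configuration would be computed the same way, with a vastly smaller W for the functional macrostate.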

    But, since the design inference evidence and message cut clean across the story of origins told by those who dominate key power centres in our culture, it is too often treated with — always fallacious — selective hyperskepticism, or worse.

Those who do so need to listen to Cicero again, and consider whether they automatically assume that, unless there is independent proof (but how, as that too requires a message!), all is lucky noise.

[Kindly note, too, that refusing to beg the question on the possibility of agency at OOL or OOBPLBD or OO Cosmos (henceforth OOC), as I address in sections B – D in my always linked, is to recognise possibilities, not to commit the “error” imagined by Kantians of imposing a dubious metaphysical “assumption” that agency exists. Indeed, as I discussed in my always linked, Kant made a key little error at the beginning, and his dichotomy of the cosmos into noumenal and phenomenal is perniciously self-referentially incoherent; thus, fallacious.]

    GEM of TKI

  111.
    Q says:

KF, in one of the two posts above, mentions: “The issue then becomes to ask whether the observed organised complexity is rooted in chance, mechanical necessity showing itself in natural regularities, or agency.”

    I’ll pick just that point, because I don’t need to fisk your entire always linked on this site, and it is sufficient to express that your analysis went too far on weak premises.

What you mention is one of the issues. But, I am strongly suggesting that you are misrepresenting the explanatory filter. The filter is to find which is the best explanation. It is not to find what must be the explanation. That is a significant difference rooted in the epistemology of the problem.

Accordingly, the rule for determining the best explanation is based upon the probabilities of each of the explanations. Go here to see my point: http://www.ideacenter.org/cont.....hp/id/1203. Each of the steps has a probability test.

    Thus, your “ask whether” claim is really “test whether the claim passes certain probabilities.” In other words, ID as a theoretical framework for science, isn’t about asking and asserting. It is about testing and demonstrating.

  112.
    DaveScot says:

    kf

Will a car, a boat, a sub, or a jet, etc., or some novel nanotech emerge at random?

    It will if the parts are like proteins and snap together in the proper manner when they get close to each other. Proteins at the nanomolecular scale are quite dissimilar to the larger machine parts that most people are familiar with. Because of electrostatic and hydrophobic/hydrophilic properties on their surfaces they have to be modeled in at least 5 dimensions. Imagine if the parts to an airplane each had little magnets attached to them so that when you got two parts that belong together in close proximity they snap together the rest of the way by themselves. Mismatched magnets repel so wrong parts don’t stick together. With parts like that you really do just have to put them in a fluid, stir it chaotically, and the correct final assembly will emerge.

That said, it just makes the proteins that much more unlikely to arise by chance as they have to be predesigned to have matching binding sites with the correct other proteins and actively avoid binding with incorrect ones. Using the same 5-dimensional method proteins can bind or repel molecules (usually simpler molecules or even individual atoms) that aren’t other proteins.

  113.
    kairosfocus says:

    H’mm:

It is wise to put up a follow-up note or two on points picked up by Q and DaveScot.

    Meanwhile on the main issue in the blog thread, I observe that, per AmH dict as a witness, the word empirical means:

    a. Relying on or derived from observation or experiment: empirical results that supported the hypothesis. b. Verifiable or provable by means of observation or experiment: empirical laws.

It seems to me that our first-person experience of ourselves as agents with reasonably reliable minds that manifest intelligence [e.g. through producing functional information], and our consistent observation of others as agents, fits in under this rubric. So, I think there is excellent reason to hold that any claimed account of intelligence that ignores, or cannot credibly ground, this fact and its origins on its premises is a non-starter. [Evo Mat fans, this means you.]

    Now, on points of follow-up:

    1] Q, 109: it is sufficient to express that your analysis went too far on weak premises.

“Weak premises” — Such as? [In short, I am suggesting that I have done an inference to best explanation WITHOUT bringing in the Darwinista/Evo Mat selective hyperskepticism, starting from the implications of how we infer to message in the face of the possibility of noise.]

Of course I do not offer a proof beyond all rational dispute – nothing in science is that way, and the sudden insistence on “extraordinary proof” when worldview-level assertions are in question, relative to otherwise obvious empirically anchored evidence, is suspect.

    2] The filter is to find which is the best explanation. It is not to find what must be the explanation. That is a significant difference rooted in the epistomology of the problem . . . . your “ask whether” claim is really “test whether the claim passes certain probabilities.” In other words, ID as a theoretical framework for science, isn’t about asking and asserting. It is about testing and demonstrating.

    Excuse me – have you actually seen what I have repeatedly explicitly said and linked routinely on the subject of science as IBE and of the ID inference as an instance of that?

FYI: I have always pointed out that the issue of inference to design across the commonly observed causal factors — chance, necessity, agency — is a matter of empirically anchored, provisional inference to best current explanation. This is all science can offer on matters of consequence, and it is why Popper put forward the potential for falsification as a virtue of scientific theorising.

In the case excerpted, I have pointed not to absolute impossibility, but to statistical improbabilities so overwhelming that they show the direction of observed spontaneous change in the real world: e.g. diffusion is not normally undone spontaneously, precisely because of the large difference in statistical weight between the clumped and the dispersed macrostates.

In short, you are (I believe inadvertently but understandably; thermodynamics views are sometimes a bit hard to follow) tilting at a strawman, which would make any “fisking” you put up miss the real mark. But then the linked IDEAS page says pretty much the same thing I have, e.g. here [on inference to design across chance-necessity-agency]. Excerpting on Hoyle’s tornado-in-a-junkyard case [and in a context taking up Shapiro’s recent remarks in Sci Am en passant]:

    . . . the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot rule it strictly out. But it is so plainly vastly improbable, that, having seen the message — a flyable jumbo jet — we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligent artifact. For, the a posteriori probability of its having originated by chance is obviously minimal — which we can intuitively recognise, and can in principle quantify . . . .

    In short, there is a distinct difference and resulting massive, probability-based credibility gap between having components of a complex, information-rich functional system with available energy but no intelligence to direct the energy to construct the system, and getting by the happenstance of “lucky noise,” to that system. Physical and logical possibility is not at all to be equated with probabilistic credibility — especially when there are competing explanations on offer — here, intelligent agency — that routinely generate the sort of phenomenon being observed.
    . . . . through multiplying the many similar familiar cases, we can plainly make a serious argument that FSCI is highly likely to be a “signature” or reliable sign that points to intelligent — purposeful — action. [Indeed, there are no known cases where, with independent knowledge of the causal story of the origin of a system, we see that chance forces plus natural regularities without intelligent action has produced systems that exhibit FSCI. On the contrary, in every such known case of the origin of FSCI, we see the common factor of intelligent agency at work.]
    Consequently, we freely infer on a best and most likely explanation basis [to be further developed below], that:
    Absent compelling reason to conclude otherwise, when we see FSCI we should infer to the work of an intelligence as its best, most credible and most likely explanation. (And, worldview level question-begging does not constitute such a “compelling reason.”)

I think it is fair comment to observe that I have explicitly and even insistently argued in an inference to best explanation context. Indeed, on point iv in the thought experiment comment above, I noted:

    iv] In the control vat, we simply leave nature to its course.

Q: Will a car, a boat, a sub, or a jet, etc., or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.]
    ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder.
    Q: Why?
    A: Because there are so many more accessible scattered state microstates than there are clumped-at -random state ones, or even moreso, functionally configured flyable jet ones.

    That is more than clear enough I believe. Not to mention, I have repeatedly remarked and linked on the general nature of science as an empirically anchored, provisional IBE exercise, here.

    3] DS, 110: It will if the parts are like proteins and snap together in the proper manner when they get close to each other.

You will see that I in fact put this in as a sci-fi feature of the model, i.e. once the parts get within say 10 microns they tend to move together. [In the real world, at about 10 molecular diameters there are increasingly effective attractive forces due to mutual polarisation of electron clouds etc., which then tend to pull molecules together until they begin to push up against each other, at which time strong repulsive forces limit separation. This is reflected in the classical intermolecular forces diagram familiar, I suppose, to those who have done about up to a freshman physics course; assuming there is sufficient parallel with our own A Level Physics. This is the theoretical basis for e.g. Hooke’s law on the almost linear elasticity of materials within limits.]

But also, this is just the clumping part of the deal: if we get parts to simply clump spontaneously, they are clumped at random. (And indeed, one can get molecular species relevant to prebiotic soups to clump at random; unfortunately they do not tend to form biofunctional proteins but rather useless tars, as has been pointed out from TBO down to Shapiro’s recent Sci Am article.) That’s why there is a configuring work term to address as well as the clumping work term.

    4] Imagine if the parts to an airplane each had little magnets attached to them so that when you got two parts that belong together in close proximity they snap together the rest of the way by themselves. . . .

Nice way to put what I said in my imaginary case, under point iv, just after the bit excerpted by Q: [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] [The parts are on this view about a micron in size . . .]

    Can I borrow this?

    5] Mismatched magnets repel so wrong parts don’t stick together. With parts like that you really do just have to put them in a fluid, stir it chaotically, and the correct final assembly will emerge.

    Not quite. Magnets of course couple in two ways and repel in two ways, so it’s 50-50 odds on coupling at any point, which is okay: sometimes molecules will bond at a certain point, sometimes they won’t.

If parts come together at random in the Brownian motion etc. of the vats, half the time they will stick any old how, half the time they won’t.
It will be hard for the resulting clumping to take in all the parts, as the natural tendency will be for them to spread across the whole vat, so that the average separation between any two parts of interest for the designed configuration will be about 50 cm [in 1 cubic metre of liquid]. That’s why I spoke of clumping work. This models the challenge of dilution of the emerging macromolecular species in a real-world pre-biotic soup. [You have to get the macromolecules for the emerging first life form to be close together, maybe within about a micron . . .]
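The “about 50 cm” figure is the right order: the mean distance between two uniformly placed points in a 1 m cube is roughly 0.66 m. A minimal Monte Carlo check of my own (not part of the original argument):

```python
import random

def mean_pair_distance(trials: int = 100_000, side: float = 1.0) -> float:
    """Monte Carlo estimate of the mean distance between two uniform random points in a cube."""
    total = 0.0
    for _ in range(trials):
        dx, dy, dz = (random.uniform(0, side) - random.uniform(0, side) for _ in range(3))
        total += (dx * dx + dy * dy + dz * dz) ** 0.5
    return total / trials

print(f"{mean_pair_distance():.2f} m")   # ~0.66 m for a 1 m cube, i.e. around half a metre
```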

    Next, if there is clumping, it will tend not to be in the relevant configuration – i.e there are a lot more ways for things to be clumped in a mess than in a flyable jet. [This models the issue that macromolecules have to be of the right composition and folding and then have to be fitted together precisely to get biofunctional cells going – all of this taking up a lot of information at genetic and epigenetic levels.]

So, short of two major levels of highly informed work – [1] clumping of the RIGHT parts and [2] configuring of these parts to work together effectively – we won’t get to a flyable jet from a dispersed collection of jet parts. [The implications for getting a workable cell to form spontaneously should be fairly clear.]

    This you saw in your final comment . . .

    it just makes the proteins that much more unlikely to arise by chance as they have to be predesigned to have matching binding sites with the correct other proteins and actively avoid binding with incorrect ones. Using the same 5-dimensional method proteins can bind or repel molecules (usually simpler molecules or even individual atoms) that arent’t other proteins.

    GEM of TKI

  114.
    DaveScot says:

    kf

Have you read “Edge of Evolution”? Behe describes protein binding and used the magnet analogy. He’s a Professor of Biochemistry, so I’m pretty sure he knows what he’s talking about in this regard, and he has no reason whatsoever to exaggerate the self-assembly capabilities of organic machinery. By the same token it’s obvious you are not a biochemistry professor, so I’m going to believe Behe – much more so because what Behe describes is my understanding from other reliable sources as well. Any clumping is temporary. Brownian motion knocks the clumps apart and also ensures that unattached parts move around until they eventually reach their designated attachment point. Only when a part is fitted where it belongs is Brownian motion overcome so they don’t come back apart.

    If you design machines at the nanometer scale strategically placing binding (and repelling) sites in 3-dimensions they will indeed self-assemble just by putting them in a fluid where they can randomly migrate. You need to fit this into your mental model of how sub-cellular machinery works.

    This is the basic mechanism of how many drugs and toxins work. They are typically small molecules with precise shapes and binding sites that snap into some much larger protein. The effect of it is to slightly alter the 5-dimensional properties of the target so that the target can no longer bind to what it was supposed to bind to. Sort of like putting sand into a Swiss watch. This explains why it’s so difficult to find effective drugs. First it has to target a protein in the bacterial invader that doesn’t exist in the host lest it kill them both and then it still has to have the precise 5-dimensional properties so that it snaps onto the target protein and stays there.

  115. 115
    Q says:

    KF asks in 113, “‘Weak premises’ — Such as?” and in the same post mentions “Nice way to put what I said in my imaginary case, …”

    You answered it for me. Imaginary cases can only provide weak premises and weaker conclusions. Quite simply, Gedankenexperiments aren’t empirical. (http://www.m-w.com/dictionary/gedankenexperiment)

  116. 116
    kairosfocus says:

    H’mm:

    It seems a few further remarks are in order, especially as Q is IMHCO being just a little cute and evasive — he should know that thought experiments [a term popularised by Einstein, but the ideas are as old as modern physics] are in fact of major and respectable importance in the history of what I suspect is our in-part common discipline, Physics.

    For instance, Galileo’s cannon and musket ball dropping off the leaning tower of Pisa was probably a thought experiment; indeed, the musket ball would lag slightly. Similarly, the famous pulse-timed pendulum in the chapel would have shown a bit of variation in period with width of swing. Also, in getting to the principle of inertia he extended in thought the behaviour of balls in smooth U-shaped troughs — they “try” to get back up to their original level — by asking what would then “logically” happen with a perfectly smooth trough which was simply flattened out instead of rising back up.

    Also, observe again that there is the little issue [cf 94 etc, just from me, others have made the same still dodged point . . .] that was long since pointed out but is being ducked in a rush after convenient red herrings and strawmen on his part:

    . . . I hold that we directly, and as the first undeniable fact of all, experience ourselves as intelligent agents, and that we observe one another as similarly intelligent agents. . . . . So, let us put all of this in proportion and keep out of that ever so tempting morass, selective hyper-skepticism.

    But first . . .

    1] Dave, 114: Have you read “Edge of Evolution”? Behe describes protein binding and used the magnet analogy.

    Okay, nope — haven’t got around to Behe’s EOE yet; I am out in the boonies! [I do note that, on the summary of his main point, he has shown that the RV + NS mechanisms accessible to malaria bugs have, over more generations than there were for mammalia per the usual timelines, only got to a few monomers’ worth of relevant shift in key proteins in their battle with the antimalarial drugs. Of course malaria is held to be the biggest selection pressure on the human genome over the past several thousand years, and we have made only a few minor — but survival-significant — variations in relevant proteins too. In short, the empirical data backs up the config-space challenge on the vast and unfeasible improbability of getting to highly complex and biofunctional molecules at body-plan innovation level by chance innovations and selection filtering on the gamut of life on earth. That is also the message of the Cambrian life revolution. Etc etc.]

    But also it is now very clear that we are discussing two very different things — I am principally talking about PROTEIN ASSEMBLY AND DNA ASSEMBLY (as per all the way back to TBO and Denton). Protein folding [post assembly], protein-protein interactions and DNA coiling are largely electrically driven indeed, but the primary configuring issue is not there, it is with getting TO the creation of the cluster of informational macromolecules.

    If you recall the cell’s step-by-step, sequential, algorithmic protein assembly procedures, you will see why. Here’s good old materialism-leaning prof Wiki as just linked:

    Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein . . . . Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase [GEM: i.e enzymes, complex proteins themselves, i.e the process loops!] . Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.[6]

    The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase “charges” the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.

    Notice the information and communication technology terms that I have emphasised — codes, step-by-step algorithms, precise specified sequences etc etc. This is a digital, code-based, algorithmic and communication process — it only happens to be happening in chemical technology — the digital information and communication system nature of what is going on is obvious. [And, that is my bailiwick — thank you, kind and diligent molecular biologists and biochemists etc, for handing to me the basic information by reverse-engineering the algorithm-implementing machinery.]
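    (To make the “digital, code-based, algorithmic” character concrete, here is a minimal sketch in Python that reads an mRNA string three bases at a time and looks each codon up in a deliberately abbreviated, illustrative codon table — the real genetic code has 64 entries; the table and example sequence are mine, not drawn from the cited article.)

```python
# Abbreviated, illustrative codon table (the real genetic code has 64 codons).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Step through the mRNA three nucleotides at a time, mapping each codon
    to an amino acid and halting at a stop codon -- a symbol-by-symbol,
    rule-driven procedure, much like reading a digital tape."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCGAAUAA"))  # ['Met', 'Phe', 'Gly', 'Glu']
```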

    Only, the specific technologies use proteins as a class of swiss-army knife informational molecules and use DNA for memory elements. Instead of registers and ALUs etc look for ribosomes and enzymes etc. But, functionally the processes are the same basic digital techniques that are by now so familiar.

    All that is left off by prof Wiki is that the typical such protein is ~300 amino acids long, requiring about 900 DNA base pairs. 4^900 is of course ~7.145 * 10^541, well beyond my “stretched” Dembski bound [to take in islands of functionality].

    And that underestimates severely the FSCI at work.
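    (A quick arithmetic check on the figure just cited — a minimal Python sketch; the 10^150-style bound used for comparison is the commonly quoted Dembski bound.)

```python
import math

# A typical protein of ~300 amino acids is coded by ~900 DNA base pairs,
# each of which can be any one of 4 bases.
exponent = 900 * math.log10(4)              # log10 of the raw configuration space
mantissa = 10 ** (exponent - int(exponent))
print(f"4^900 ~ {mantissa:.2f} * 10^{int(exponent)}")   # ~7.14 * 10^541

# Orders of magnitude beyond a 10^150-style probability bound:
print(round(exponent - 150))                            # ~392
```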

    Now, try to get the required mechanisms assembled from monomers in a pre-biotic soup by chance! [Onward, try to account for say the 100 mn bases to get the body-plan codes for new phyla in say the Cambrian fossil-life revolution.]

    When the proteins are duly assembled, moved around by heavy kinesin et al and kept in the right places by cytoskeletons, of course they then use their highly informational structures to click together as if by magic. But that is after the really interesting stuff from my perspective has long since happened.

    My microjets thought experiment illustrates the same process, bringing out in particular that we have to address clumping work and configuring work to get to the required macromolecules to carry out the biofunctions, thence the force of TBO’s thermodynamic reasoning [and implied links to information theory].

    2] Q, 115: Imaginary cases can only provide weak premises and weaker conclusions

    Excuse me — as I already pointed out by highlighting several of Galileo’s most famous experiments that probably weren’t [at least not quite as he reported] — that’s being neat and cute with rhetoric but dodging the issues on the merits.

    The above Wiki cite for responding on Dave’s concern should suffice to show what I am driving at on the microjets thought experiment, and that is VERY empirically valid.

    And BTW, the point of a gedankenexperiment is that it works in accord with the known relevant natural regularities/laws of physics, so that it is an in-principle feasible experiment, just it may not be technically or timewise or financially feasible to do it just now. In short, a good thought experiment brings out the inner logic of the science in a physically conceivable situation, as a test of coherence. [It can also be very fruitful on getting to new theoretical constructs, i.e in hypothesis formation.]

    For two famous and more modern cases, much of Einstein’s original conceptualisation of Relativity was triggered by “taking” an imaginary ride on a beam of light in the context of the expected behaviour of the physics. Kekule’s benzene ring snake swallowing its tail is also famous. On the other side of the story, Einstein constructed such a thought experiment to try to undo the uncertainty principle at the famous Solvay conference. He proved to be wrong in his initial conclusions, but in the process discovered the energy-time form of the uncertainty principle. Today, scientific visualisation and computer simulation carry out the same basic “what-if” process — and are not generally regarded as only providing “weak premises and weaker conclusions.”

    3] It’s not just KF, folks . . .

    But also it’s not just me, out here in the boonies imagining nanobots making up micro-jets and asking about whether vats sitting there would spontaneously form such microjets with parts that on average are 1 micron in scale, interact at 10 microns and are separated by on average 1 cm — 1 million parts in a vat with a cubic metre of fluid — by the well-known, commonly empirically observed thermodynamics of diffusion.

    [To get another look at the same physics: Put a drop of ink in a glass of water and see it “dissolve.” How long on average would we have to wait for it to spontaneously come back together, Q, why? And, if the drop were instead parts for our famous little jet, which have to not only be clumped but configured to fly, how long for that to happen by chance, Q?]
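    (For a feel of the dilution side of the numbers, a rough Python sketch under deliberately generous, idealised assumptions: the million parts are treated as independently and uniformly distributed through the 1 m^3 vat, and “clumped” simply means all of them happening to lie within one chosen 1 cm^3 region at the same instant; bonding forces, charges and the further configuring work are all ignored here.)

```python
import math

N_PARTS   = 1_000_000   # parts dispersed in the vat
VAT_VOL   = 1.0         # cubic metres
CLUMP_VOL = 1e-6        # cubic metres, i.e. a single 1 cm cube target region

# Probability that one part happens to sit inside the target region.
p_one = CLUMP_VOL / VAT_VOL

# Probability that all parts are there at once (independent, uniform positions),
# kept as a base-10 logarithm to avoid numerical underflow.
log10_p_all = N_PARTS * math.log10(p_one)
print(f"P(all parts in one cm^3 at once) ~ 10^{log10_p_all:.0f}")  # ~ 10^-6000000
```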

    For, I am essentially scaling down to quasi-molecular scale the point that Hoyle — no mean thermodynamicist! — made in remarking about tornadoes in junkyards and assembling 747’s by lucky correlations of materials and forces that just happened to do the relevant clumping and configuring work: voila — a flyable jumbo-jet.

    So, why is it that over in Seattle, Boeing doesn’t save money and send the twisters into the jumbo-jet parts warehouses? [Onlookers, the answer is so obvious that it can only be dodged by cute evasions.]

    In case you don’t get the force of the point, here is Dawkins, in the Blind Watchmaker:

    Hitting upon the lucky number that opens the bank’s safe [NB: cf. here the case in Brown’s The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [p. 8.]

    Of course Dawkins uses comparisons that vastly understate the required configuration space, and his Mt Improbable type rebuttal has to address the problem of the isolation of the islands of function in the sea of non-functional configs. You have to find your island before you can climb its hills!

    Here, too, is Robert Shapiro in his recent Sci Am remark on the “popular” RNA world hypothesis:

    RNA nucleotides are familiar to chemists because of their abundance in life and their resulting commercial availability. In a form of molecular vitalism, some scientists have presumed that nature has an innate tendency to produce life’s building blocks preferentially, rather than the hordes of other molecules that can also be derived from the rules of organic chemistry. This idea drew inspiration from . . . Stanley Miller. He applied a spark discharge to a mixture of simple gases that were then thought to represent the atmosphere of the early Earth. Two amino acids of the set of 20 used to construct proteins were formed in significant quantities, with others from that set present in small amounts . . . more than 80 different amino acids . . . have been identified as components of the Murchison meteorite, which fell in Australia in 1969 . . . By extrapolation of these results, some writers have presumed that all of life’s building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case.

    A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .

    Shapiro then goes for the jugular (in a remark that inadvertently also applies to his preferred metabolism first scenario, as TBO pointed out and as my own little thought experiment underscores):

    The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

    In short, onlookers, on the evidence, Q is simply being “cleverly” selectively hyperskeptically difficult and evasive. [I suspect he has an undergrad minor in physics or more, or at any rate sufficient physics and/or chemistry to understand just what I am pointing to in raising the statistical thermodynamics principles-based issues in the above.]

    GEM of TKI

  117. 117
    Q says:

    KF, in 116, points out that I should know that thought experiments [a term popularised by Einstein, but the ideas are as old as modern physics] are in fact of major and respectable importance in the history of what I suspect is our in-part common discipline, Physics

    Regarding the history of science, we have no dispute about thought experiments as having “major and respectable importance”. But that doesn’t give you a free pass to abuse the limitations of thought experiments. As a proof of new knowledge, they fail. As a platform for making a prediction – yes – but the limitation is that follow through is required for it to be science. As a means to extrapolate old knowledge to new scenarios – yes, but with the same limitation as making a prediction. As a means to explain basic concepts, thought experiments have some value, more if they represent an interpolation rather than an extrapolation.

    The problem, as you so often illustrate by using IMHCO (In My Humble but Correct Opinion) ( http://awads.net/wp/2005/06/22.....rnet-jive/ ) is that you believe/present your thought experiments/extrapolations as stronger than is supported by the scientific process. DaveScot’s example of the flaws in your thought experiment about clustering, earlier in this thread, is one representation of why your confidence/asserted correctness should be tempered. For example, in your thought experiment of drops of ink in water, and your extrapolation of that process to molecular bonding (“essentially scaling down to quasi-molecular scale”), your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.

    Thought experiments aren’t empirical. Since you quote wiki, check here, first paragraph http://en.wikipedia.org/wiki/Thought_experiment . “All thought experiments, however, employ a methodology that is a priori, rather than empirical, in that they do not proceed by observation or physical experiment.”

    KF also says Q is simply being “cleverly” selectively hyperskeptically difficult and evasive.

    Why are you going ad hominem? It won’t improve your argument. Call it hyperskeptical if you will. I call it calling a spade a spade.

  118. 118
    kairosfocus says:

    Sigh:

    First of all, when I use the abbreviation “IMHCO” I mean “in my humble but CONSIDERED opinion,” which is of course open to correction. [I was blissfully unaware that there was another interpretation out there . . . especially since I have “always” held that one can make errors and so must be provisional in one’s thinking. Indeed, “humble” implies that. I will now, for clarity in this discussion [long since sadly clouded by the smoke of burning strawmen], change to IMHBCO to mark the distinction.]

    This putting of words into my mouth that don’t properly belong there, is sadly symptomatic of the problem that Q manifests — setting up and knocking over a strawman that has been led to by a red herring. And, that sort of knocking over of strawmen is hardly “calling a spade a spade”!

    So, next, lest we forget the actual focus-issue: that brings us back to the original post for this thread, as BarryA set it:

    In the comment thread to my last post there was a lot of discussion about computers and their relation to intelligence. This is my understanding about computers. They are just very powerful calculators [IMHBCO as one who has designed, built, programmed and debugged such from the ground up chip by chip and machine code by machine code: yes!] , but they do not “think” in any meaningful sense. By this I mean that computer hardware is nothing but an electro-mechanical device for operating computer software. Computer software in turn is nothing but a series of “if then” propositions. These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level. This is a basic Turing Machine analysis.

    This does not necessarily mean that the output of computer software is predictable. For example, the “then” in response to a particular”if” might be “access a random number generator and insert the number obtained in place of the variable in formula Y.”

    “Unpredictable” [by us!] is not a synonym for “contingent.” Even if an element of randomness is introduced into the system, however, the way in which the computer will employ that random element is determined [in short, the reason for the reasoning lieth elsewhere than in the machine that carries out programmed instructions on input data, step by step]. . . .

    The computer registered “red” when red light was present. My brain registered “red” when red light was present. Therefore, the computer and my brain are alike in this respect. However, and here’s the important thing, the computer’s experience of the sunset can be reduced to the functions of its light gathering device and hardware/software. But my experience of the sunset cannot be reduced to the functions of my eye and brain. Therefore, I conclude I have a mind which cannot be reduced to the electro-chemical reactions that occur in my brain.

    BarryA is right, dead right!
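    (The “random element used in a determined way” point in the excerpt above is easy to exhibit concretely; a minimal Python sketch — the seed value is arbitrary.)

```python
import random

def run(seed: int, n: int = 5) -> list:
    """Reproduce an 'unpredictable-looking' sequence deterministically."""
    rng = random.Random(seed)      # same seed -> same internal state every time
    return [rng.random() for _ in range(n)]

print(run(42))
print(run(42) == run(42))  # True: the 'random' behaviour is fully determined
```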

    Now, on the attempt to blunt the force of my version of Sir Fred Hoyle’s “tornado in a junkyard expected to form a 747 by chance + necessity alone” example, scaled down to semi-molecular scale, I respond to the latest — IMHBCO, sadly selectively hyperskeptical — objections as follows:

    1] As a proof of new knowledge, they [thought experiments] fail.

    First, back to Galileo — I believe during his time of imprisonment, but I stand to be corrected on this:

    a –> Consider his U-troughs and metal balls rolling down then “trying” to get back up to their original level as he made the tracks smoother and smoother.

    b –> He then argued that in a perfectly smooth track, the balls would rise back to their original level. (Have you, Q, ever seen a perfectly smooth and actually friction-free trough? [Or even a friction-free air track or air table?])

    c –> He then made the next in-thought extension: flatten out the rising arm, so that the ball is on a smooth, in effect infinitely long, track and never gets a chance to rise back to its original level. Thus, Galileo arrives at, and in so doing in effect warrants, Newton’s First Law of Motion [i.e., in our terms, of MOMENTUM], the law of inertia – BY EMPIRICALLY ANCHORED THOUGHT EXPERIMENT. [A small numerical sketch of this trough extension follows just after point d below.] (Actually, if memory serves, he mistakenly thought that the ball would go in a circle — going a bit far with the fact that the Earth has been known since 300 BC to be a sphere.)

    d –> This brings us to a slippery phrase that as one knowing about scientific inference to best, empirically anchored explanation [IBE], you MUST know is utterly inappropriate to such a context for science: proof of new knowledge. Scientific knowledge of consequence is provisional, and empirically testable and reliable, not “proved.” AND THE SLIPPING IN OF SUCH A LOADED CONCEPT TO PREJUDICE THE CASE IN A SITUATION WHERE YOU DON’T WANT TO GO WITH THE IMPLICATIONS OF IBE, IS SELECTIVE HYPERSKEPTICISM.
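    (Here is the small numerical sketch promised under point c — a minimal Python illustration assuming a frictionless track and simple energy bookkeeping; it illustrates the in-thought extension, not Galileo’s own calculations.)

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_at_bottom(drop_height: float) -> float:
    """Frictionless descent: all potential energy becomes kinetic energy."""
    return math.sqrt(2 * G * drop_height)

def rise_height(speed: float) -> float:
    """Frictionless ascent: all kinetic energy converts back to potential."""
    return speed ** 2 / (2 * G)

h0 = 1.0                                  # release height on one arm of the trough (m)
print(rise_height(speed_at_bottom(h0)))   # 1.0: the ball regains its original level

# Flatten the rising arm instead: with no height to regain and no friction,
# nothing slows the ball -- it keeps its speed indefinitely (inertia).
```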

    In turn that brings us to the empirical root of the microjets thought experiment: the diffusion of an ink drop in a glass of water.

    2] Of ink drops and microjets and macromolecules in prebiotic soups . . .

    [Q, 117:] in your thought experiment of drops of ink in water, and your extrapolation of that process to molecular bonding (”essentially scaling down to quasi-molecular scale”), your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.

    Not so fast, pardner!

    e –> As an inspection of the exchange with DaveScot will rapidly reveal, he mistakenly [obviously he did not read my point xi] thought in terms of a context that I EXPLICITLY was not addressing, proteins clicking together to carry out the biochemistry of life. [In short, the relevant components in my thinking are the constituents of the alleged pre-biotic soup, the monomers of life: amino acids, nucleic acids and the like.]

    f –> As with Thaxton et al, whom as point xi will show, I was explicitly discussing [and it is wise to check a context before making an accusation as strong as you have made, Q], I was speaking to the FORMATION of informational macromolecules: why else do you think I was taking clumping and configuring work, step by step, to show the validity of breaking up dS into dS_clump + dS_config, as TBO did?

    g –> Indeed, observe how I used “clumping” as a substitute for “chemical work” explicitly, e.g in point xi as excerpted above at 110:

    xi] Extending to the case of origin of life, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO’s term chemical work, fine], and configuring work can be identified and applied to the shift in entropy through the same s = k ln W equation.

    h –> I then very explicitly applied the parallels, and you did not even have to do a web click to get to them; just you needed to READ before assuming and asserting rhetorically convenient error on my part:

    For, first we move from scattered at random in the proposed prebiotic soup, to chained in a macromolecule, then onwards to having particular monomers in specified locations along the chain — constraining accessible volume again and again, and that in order to access observably bio-functional macrostates. Also, s = k ln W, through Brillouin, TBO link to information, viewed as “negentropy,” citing as well Yockey-Wicken’s work and noting on their similar definition of information; i.e this is a natural outcome of the OOL work in the early 1980’s, not a “suspect innovation” of the design thinkers in particular. BTW, the concept complex, specified information is also similarly a product of the work in the OOL field at that time, it is not at all a “suspect innovation” devised by Mr Dembski et al, though of course he has provided a mathematical model for it.

    i –> That brings us right back to the force of my summary point in point 3 of 116 [and fixing a typo or two . . . sorry on the old Dyslexia]:

    [To get another look at the same physics: Put a drop of ink in a glass of water and see it “dissolve.” How long on average would we have to wait for it to spontaneously come back together, Q, why? And, if the drop were instead parts for our famous little jet, which have to not only be clumped but configured to fly, how long for that to happen by chance, Q?]

    For, I am essentially scaling down to quasi-molecular scale the point that Hoyle — no mean thermodynamicist! — made in remarking about tornadoes in junkyards and assembling 747’s by lucky correlations of materials and forces that just happened to do the relevant clumping and configuring work: voila — a flyable jumbo-jet.

    So, why is it that over in Seattle, Boeing doesn’t save money and send in the twisters into the jumbo-jet parts warehouses? [Onlookers, the answer is so obvious that it can only be dodged by cute evasions.]

    Sadly as this point sums up, that is just what has happened in 116.

    3] you believe/present your thought experiments/extrapolations as stronger than is supported by the scientific process. DaveScot’s example of the flaws in your thought experiment about clustering, earlier in this thread, is one representation of why your confidence/asserted correctness should be tempered.

    Again, in the CORRECT — and easily accessible — original context of my remarks . . .

    what is the flaw in seeing that [i] undoing the tendency of diffusion requires clumping work, and that [ii] reliably configuring biofunctional molecules requires highly informationally directed configuring work, as [iii] the config space for long enough macromolecule chains is vastly beyond the probabilistic resources of the observed cosmos?

    Indeed, is this not just what good old prof Wiki testifies to in the telling excerpt in 116 above? [Which I must note, also, you have neatly failed to discuss. Onlookers, compare the sequence of italicised words there.]

    In short, you have tilted at a strawman, and have not only failed to address my EXPLICIT contexts but also evidently overlooked the corroborating citation of a hostile witness.

    4] Your premise ignores the biasing effect of electrical charge on the molecular bonding. Quite simply, that extrapolated thought experiment of yours can be shown to be insufficiently developed (i.e. weak premises) to be able to provide sufficiently valuable results.

    Dismissal based on a convenient strawman.

    Please address the actual — and easily accessible — argument on the merits.

    5] “All thought experiments, however, employ a methodology that is a priori, rather than empirical, in that they do not proceed by observation or physical experiment.”

    As the very examples of Galileo show, thought experiments are often empirically anchored, and can be quite compelling.

    So, I close for now by asking three questions to show that I am not indulging in ad hominems but am calling attention to selective hyperskepticism, backed up now by insistence on irrelevant distraction after irrelevant distraction in the teeth of easily accessible evidence and facts:

    I –> What is the material difference between diffusion of ink particles and that of essentially similarly sized microjet parts [which per argument can be of similar weight too; say made of smart plastics]?

    II –> Having clumped the particles [and undone diffusion], what is the essential difference between the need to configure monomers in biofunctional macromolecules in precise order, and to configure microjet parts in precise order to get a flyable jet? [And note the underlying starting context, Sir Fred Hoyle on a tornado in a junkyard forming a flyable 747, scaled down to molecular levels so molecular forces such as those that would have been at work in prebiotic soups can go to work.]

    III –> What is the essential difference between slightly futuristic nanobots and the biological smart molecules that read-off the DNA code, then step by step assemble a protein by following algorithmic instructions?

    And, finally finally [but one!], on the main issue: I argue that the smarts in computers come from intelligent agents, not from the machines themselves.

    Further to this, in all cases of observed origin of FSCI, that is also the case. So per induction, we have excellent empirical grounds to infer that in all cases of FSCI, we are well warranted to infer to such agents, unless and until someone can show empirically that lucky noise and/or demonstrably reliable natural laws are generating such FSCI.

    Worse, if someone does show that there is a law of nature that forces the cosmos as a whole to form sub-cosmi that are life-habitable, thence onward the formation of life and its diversification at body plan level, that would be suggestive indeed as to the origin and purpose of the physical world!

    GEM of TKI

  119. 119
    kairosfocus says:

    PS: In case you miss my historically relevant literary allusion, consider the following from the prophet Isaiah:

    ISA 42:5 This is what God the LORD says–
    he who created the heavens and stretched them out,
    who spread out the earth and all that comes out of it,
    who gives breath to its people,
    and life to those who walk on it:
    . . . .

    ISA 44:24 “This is what the LORD says–
    your Redeemer, who formed you in the womb:

    I am the LORD,
    who has made all things,
    who alone stretched out the heavens,
    who spread out the earth by myself,

    . . . .

    ISA 45:18 For this is what the LORD says–
    he who created the heavens,
    he is God;
    he who fashioned and made the earth,
    he founded it;
    he did not create it to be empty,
    but formed it to be inhabited

    he says:
    “I am the LORD,
    and there is no other.

    For, manifestations of purpose are signs of intent-ful mind at work.

  120. 120
    kairosfocus says:

    PPS: And, that is what Isaiah saw ever so long ago, now.

    [Notice how I am specifically citing him as a witness to the longstanding human insight on what evident purposefulness points to; just as I earlier cited Cicero on what complex, functional digital information normally calls forth from us: inference to message, not lucky noise. To infer to message in convenient cases and — without seriously addressing the CSI and explanatory filter issues — to lucky noise in “inconvenient” ones resting on the same empirically anchored basic probabilistic resources in config spaces challenge, is IMHBCO selective hyperskepticism.]

    GEM of TKI

  121. 121
    kairos says:

    #106 aiguy

    Archeology does not have any notion at all of some abstract class of entities that philosophers call “intelligent agents”. Archeology is the study of ancient human (and only human) civilizations.

    Your argument isn’t correct. Archeology studies human civilizations (mainly because they are the only ones actually found; but if aliens had left artifacts, archeology would study them too), BUT the techniques for recognizing whether something is actually a human artifact or a result of natural forces are independent of this restriction. They are basically techniques for distinguishing something that is the result of an intelligent agent from something that isn’t.

    …to ET searching
    There has never been any published scientific inference to an extra-terrestrial life form to explain anything. If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument. If for example they detected a wide-band EM signal emanating from within a pulsar that had prime-number intervals (like the Contact example) I would think most scientists would not accept that life-forms were responsible.

    Come on, aiguy, don’t contradict your own expertise and the word “intelligence” that is present in your nickname. You know perfectly well that in that case scientists would be pretty sure that intelligent agents were involved. They know that pulsars are characterized by regular emissions, and this is just the reason scientists could argue for their natural, rotational nature. But emissions tracing a sequence of prime numbers of reasonable length couldn’t be due to natural forces. And after all, if this were not the case the whole SETI project would be scientifically useless.

  122. 122
    kairosfocus says:

    Onlookers:

    I am still shaking me poor head with astonishment at how reliably a step-by-step, not-so-hard-to-follow — if you will but read — argument is being misread or ignored, and the resulting army of strawmen is then knocked down, doused with oil and ignited, clouding the atmosphere with blinding, noxious smoke.

    All, as I warned of at the head of my always linked, and invited us to a better path:

    INTRODUCTION: The raging controversy over inference to design, sadly, too often puts out more heat and blinding, noxious smoke than light. (Worse, some of the attacks to the man and to strawman misrepresentations of the actual technical case for design [and even of the basic definition of design theory] that have now become a routine distracting rhetorical resort and public relations spin tactic of too many of the defenders of the evolutionary materialist paradigm, show that this resort to poisoning the atmosphere of the discussion is in some quarters quite deliberately intended to rhetorically blunt the otherwise plainly telling force of the mounting pile of evidence and issues that make the inference to design a very live contender indeed.)

    Be that as it may, thanks to the transforming impacts of the ongoing Information Technology revolution, information has now increasingly joined matter, energy, space and time as a recognised fundamental constituent of the cosmos as we experience it. For, it has become increasingly clear over the past sixty years or so, that information is deeply embedded in key dimensions of existence. This holds from the evidently fine-tuned complex organisation of the physics of the cosmos as we observe it, to the intricate nanotechnology of the molecular machinery of life [cf. also J Shapiro here! (NB on AI here, and on Strong vs Weak AI here and here . . . ! )], through the informational requisites of body-plan level biodiversity, on to the origin of mannishness as we experience it, including mind and reasoning, as well as conscience and morals. So, we plainly must frankly and fairly address the question of design as a proposed best current explanation — and as a paradigm framework for transforming the praxis of science and thought in general, not just technology — as, it has profound implications for how we see ourselves in our world, indeed (as the intensity of the rhetorical reaction noted just now indicates) it powerfully challenges the dominant evolutionary materialism that still prevails among the West’s secularised educated elites.

    Therefore, it is appropriate for us to now pause and survey the key facts, concepts and issues, drawing out implications as we seek to infer the best explanation for the information-rich world in which we live.

    So, can we clear the air and start over, on the merits of the actual issues?

    Sigh . . .

    GEM of TKI

  123. 123
    kairosfocus says:

    Kairos:

    Maybe you can help us all sort this out?

    (Complete with an inspirational thought or two . . . just remember to translate the Greek this time!)

    GEM of TKI

  124. 124
    DaveScot says:

    kf

    I think you’re going beyond my point. To reiterate, my point was that properly designed proteins self-assemble into larger complex structures. If the component parts of jet aircraft were of the same nature as proteins then they too would self-assemble. That said, to create parts that self-assemble takes MORE design, not less, more teleology, not less.

  125. 125
    DaveScot says:

    aiguy

    If, someday, SETI finds some phenomenon and wishes to argue that ET life forms are the best explanation, we can argue about the merits of their argument.

    If, someday, evolutionary biologists can demonstrate mutation and selection creating some complex organic structure such as a bacterial flagellum, we can argue about the merits of the argument that all complex organic structures originated by the same mechanism.

    What’s good for the goose is good for the gander. Surely you’re not proposing that a double standard be enforced, right?

  126. 126
    Q says:

    KF points out, in 118 “First of all, when I use the abbreviation “IMHCO” I mean “in my humble but CONSIDERED opinion,” which is of course open to correction.”

    Then I apologize for arriving at the wrong, but researched, meaning for your use of IMHCO.

  127. 127
    Q says:

    kf, in 121 comments “But emissions tracing a sequence of prime numbers of reasonable length couldn’t be due to natural forces.”

    Maybe I’ve missed something, but has it been proven that there is no function which can result in a long sequence of prime numbers?
    (By “proven”, I mean shown to be consistent with the accepted hypotheses and theorems. To clear up a question back in 118, that is the sense I meant for “prove” in that context.)

  128. 128
    kairosfocus says:

    Mr Scott:

    Thanks for your remark at 124. We were plainly talking at cross-purposes, as the above reveals in light of my App 1 section 6 point xi on my context.

    Unfortunately, this led Q to imagine that there was a basic flaw in the point I was making in that section.

    Hopefully, the atmosphere can now be cleared.

    And indeed, designing proteins so that they will inter alia cluster and self-assemble into the key mutual working configurations required for life functions implies a further degree of specified complexity in the DNA that codes for them.

    That just makes it all the harder to get to the functionality islands in the sea of possible amino acid polymer configurations.

    GEM of TKI

  129. 129
    kairosfocus says:

    Q:

    In re 126 – 127:

    1] Apology accepted. I just didn’t know there was another interpretation out there!

    2] No 127 was by Kairos, who it seems is a European. I am a Jamaican resident in Montserrat.

    3] In any case, products of large prime numbers are used in making hard-to-break codes, precisely because there is no known algorithm that efficiently recovers the primes from their product, nor any simple formula that elegantly generates or identifies primes in succession. Wiki, that ever so humble source, notes:

    Proving a number is prime is not done (for large numbers) by trial division. Many mathematicians have worked on primality tests for large numbers, often restricted to specific number forms. This includes Pépin’s test for Fermat numbers (1877), Proth’s theorem (around 1878), the Lucas–Lehmer test for Mersenne numbers (originated 1856),[1] and the generalized Lucas–Lehmer test. More recent algorithms like APRT-CL, ECPP and AKS work on arbitrary numbers but remain much slower.

    For a long time, prime numbers were thought as having no possible application outside of pure mathematics; this changed in the 1970s when the concepts of public-key cryptography were invented, in which prime numbers formed the basis of the first algorithms such as the RSA cryptosystem algorithm.

    Since 1951 all the largest known primes have been found by computers. The search for ever larger primes has generated interest outside mathematical circles. The Great Internet Mersenne Prime Search and other distributed computing projects to find large primes have become popular in the last ten to fifteen years, while mathematicians continue to struggle with the theory of primes.
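    (For illustration, a minimal trial-division sketch in Python: correct, but its cost grows with the square root of n, which is exactly why the hundreds-of-digits primes used in public-key cryptography call for the specialised tests mentioned above.)

```python
def is_prime(n: int) -> bool:
    """Naive trial division -- fine for small numbers, far too slow for the
    very large primes used in cryptographic key generation."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The first primes, found by brute force:
print([n for n in range(2, 30) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```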

    GEM of TKI

  130. 130
    Q says:

    KF, in 129 pointed out, “2] No 127 was by Kairos, who it seems is a European. I am a Jamaican resident in Montserrat. ”
    Double drat! Two gaffes (at least) at about the same time. I’ll keep that in mind – kairos is not kairosfocus. (I had been confused when you referred to Kairos in some of your posts!) I apologize to both of you.

    KF indicates “Unfortunately, this led Q to imagine that there was a basic flaw in the point I was making in that section.”

    Yup. Two different problem spaces accidentally being explained as one. Now I understand what happened.

    Back to my query about primes. They are an interesting side discussion for this thread, regarding the inference of intelligence. I agree, as has been mentioned, that detection of primes is a good clue that intelligence is involved.

    It leads to my thought experiment to explore the application of the explanatory filter to observations regarding intelligence: Let’s assume an astronomer sees a stellar body, and carefully observes that its color slightly oscillates. He then observes that the oscillation is composed of 9 superimposed frequencies – each a multiple of a base frequency. The periods of those 9 frequencies are 1, 2, 3, 5, 7, 11, 13, 17, 19 times the period of the base.

    Would we conclude that the presence of the 9 prime numbers is the result of intelligent agency?

  131. 131
    gpuccio says:

    kairosfocus:

    Thank you for your patience and persistence in trying to stick to fundamental truths, in a discussion which is often tiresome and irritating. I confess that sometimes I am discouraged by the useless complexity that human intelligence is able to create about rather simple issues (that too, I think, is a prerogative of conscious free intelligence). With that, I am not affirming that the “solution” to the fundamental problems of consciousness is easy (indeed, it’s just the opposite), but at least it should be possible to define in a simple way the problems and the different ideas in the field. Instead, a lot of ambiguity and self-contradiction always emerges, and that does not help clarify things for those who want to make an intellectual choice in this difficult field.

    I’ll try again to sum up a few important points, in my own view:

    1) Consciousness is an empirical fact. It is experienced by each one of us, and that experience is shared through “indirect” ways (communication, language, etc.). Indeed, if we exclude solipsistic positions, every one of us does not doubt that others are conscious in much the same way as he is. But our knowledge of the phenomenon of consciousness is totally empirical and personal. We are intuitively aware of our personal consciousness: that is our primary knowledge, and to that empirical experience we give a name: consciousness. We can call it by any other name, but the fact we are naming remains the same. Being a result of experience it is, indeed, a fact, not a theory, or a concept, or anything else. A fact. We “are” conscious. Indeed, all other facts, and theories, and many other things, are experienced only “in” consciousness, and as modifications of consciousness itself. So, in a way, consciousness is “the mother of all facts”, the supreme fact, the only direct reality we experience. Everything else is, to some degree, “indirect”.

    2) We have different degrees of certainty of the existence of consciousness:
    a) absolute certainty for our personal consciousness (we experience it!), although Dawkins and company could object, at least for themselves…
    b) almost absolute certainty for the consciousness of other human beings (after all, it is an inference, although probably the strongest inference ever made; and yet, solipsists have challenged it).
    c) various degrees of certainty, depending on personal ideas, for the existence of consciousness in other biological beings (very likely at least for higher animals, but here the inference becomes more subjective).
    d) a generally accepted inference of the absence of consciousness in non biological objects (an inference again, maybe an “argument from incredulity”).
    e) a recent inference, accepted by many, and rejected by many others, that some special kind of non-biological objects, specially computers, “can” become conscious if their computations are complex enough in a certain, ill defined way (parallel computing? neural networks? loops?).

    3) I think the problem we are discussing is indeed the “e)” inference, and not, I hope, the “a)” experience and the “b)” inference. Those who challenge those first two points (and there are many of them) are definitely too weird for me, and I will in a friendly spirit leave them to their world view, wishing them all the best.

    For the others, let’s consider the “e)” inference. Provided that it is indeed an inference, and not a fact, I would like to suggest that it is a very strange and unsupported inference. Probably, the only reason at all for such a bizarre inference is a well rooted faith in two premises:

    e1) Human beings are conscious (that is, indeed, the b) inference, and I think we all can agree)

    e2) Human beings are “only” their visible body, which is made of matter just the same as anything else.

    From these two premises, comes easily:

    e3) Something in the human body, most likely the structure of the brain, is the cause of consciousness

    and:

    e4) If a brain can do it, why not a computer?

    That’s, in few words, the basis of the theory we all know as strong AI.

    My observations:

    1) As everyone can see, strong AI is not really an inference. It is rather a logical deduction from two premises. But, while the first one (e1) is a very well supported inference, the second (e2) is only an unwarranted statement, unsupported by any evidence or logical argument. Or at least, some think it is supported by both, and some (including me) don’t agree.

    But let’s assume that those who believe in the “e2” statement have their reasons, more or less acceptable, to do so. Still, the plausibility of strong AI depends exclusively on those reasons, because strong AI is not a scientific inference, but only the logical consequence of the “e2” statement — in other words, of a purely materialistic interpretation of human beings — and not the contrary.

    So, all those who affirm that strong AI, in any of its variants, has demonstrated the purely material nature of humans, are wrong. The opposite is true. If you can demonstrate the purely material nature of humans, strong AI follows inevitably.

    But there are very strong arguments “against” strong AI, and therefore, indirectly, against its logical premise, that is the purely material nature of humans.

    First of all, strong AI is the typical theory which is full of self-contradictions, smartly hidden by complex words and unsubstantial concepts.

    The biggest contradiction is the following: AI theories, and all of information theory in general, maintain (correctly) that, in computations, the results are independent of the hardware. If that is true, and the emergence of consciousness depends purely on the structure of the software, then even an abacus, if complex enough and with the right structure, should become conscious. After all, any computation can be performed on a very big abacus-like machine, given enough time and resources. Would that enormous, and very very long, abacus computation become conscious?

    Other unwarranted fantasies: if a simple computation has no subjective counterpart, why should a sum of simple computations become subjectively aware? I am already hearing the voices: parallel computing (what does it matter? A computation is the same, whether you use a serial or a parallel algorithm to perform it); loops (what does it matter? Loops are always used in simple computations, and they are not aware; why should complex loops be aware?); neural networks (again, if a simple neural network is not conscious, why should a bigger one be?); and, finally, emergent properties: ah, that’s really smart; emergent properties and self-organizing processes are really the triumph of materialist metaphysics! You can make anything “emerge”, whatever that means, if you use the right silly words in the right silly context. But, unfortunately, consciousness is not a “property” at all. Consciousness is a fact, the mother of all facts. Properties, on the contrary, are categories of reason and mind; in other words, they are very indirect, complex human mental entities experienced, ultimately, in consciousness. Or, at least for some emergent properties, in the credulous consciousness of some.

  132. 132
    gpuccio says:

    Q:

    “Would we conclude that the presence of the 9 prime numbers is the result of intelligent agency?”

    I don’t think so. I have not done the computations, but I don’t think that the sequence is complex (improbable) enough in the Dembski sense: in other words, it should fall well short of the UPB. But if the sequence were, for instance, of the first 10^6 prime numbers, I think that would be very different.

    In that case, a design inference becomes naturally the best explanation, unless and until a mechanism based on necessity is realistically hypothesized or, better still, proved.
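    (A back-of-envelope Python sketch of the kind of comparison gpuccio has in mind, under the purely illustrative assumption that each observed frequency multiplier could have been any integer from 1 to 100 with equal probability; the 10^-150 figure is Dembski’s universal probability bound.)

```python
import math

UPB_LOG10 = -150   # Dembski's universal probability bound, as a base-10 exponent

def log10_prob(n_values: int, alphabet_size: int) -> float:
    """log10 of the probability of one specific sequence of independent,
    uniformly chosen values."""
    return -n_values * math.log10(alphabet_size)

short_seq = log10_prob(9, 100)        # the 9 observed multipliers
long_seq  = log10_prob(10**6, 100)    # a hypothetical run of 10^6 values

print(f"9 multipliers:    ~10^{short_seq:.0f}  (nowhere near the UPB of 10^{UPB_LOG10})")
print(f"10^6 multipliers: ~10^{long_seq:.0f}  (vastly beyond the UPB)")
```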

  133. 133

    I, in the first place, take issue with the idea that free will is constituted by a predetermined set of outcomes rather than ownership of one’s decisions. God might say, “The stove is hot, therefore CloseEncounters will not touch it,” and I respond, “That’s right, I’m not going to touch it – I don’t want to get burned.” So did I lose free will because God had prior knowledge of my inherent aversion to overbearing temperature? Or did I maintain free will by *choosing* not to burn my hand? I think we need to re-examine what it means to have free will.

  134. 134
    kairos says:

    #130 Q

    I agree with the comment by gpuccio.
    And I add that the example you have provided lends itself very well to a natural explanation, at least for a short sequence of primes. In fact their presence in the frequency spectrum of a signal could simply be due to some kind of specific emission event. In any case, the presence of a long sequence of prime frequencies PLUS the fact that ONLY those are present in the signal is a sure sign of non-natural production.

    #123 KF

    Maybe you can help us all sort this out?
    (Complete with an inspirational thought or two . . . just remember to translate the Greek this time!)

    What about this? I suppose translation isn’t necessary 🙂

    (John 1,1-3)

    1 EN ARXH HN O LOGOS KAI O LOGOS HN PROS TON QEON KAI QEOS HN O LOGOS
    2 OUTOS HN EN ARXH PROS TON QEON
    3 PANTA DI AUTOU EGENETO KAI XWRIS AUTOU EGENETO OUDE EN O GEGONEN

    [In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. Through him all things were made; without him nothing was made that has been made.]

  135. 135
    kairosfocus says:

    A few notes:

    First, let us see if we can clear the air on microjets etc and cogently address the issues on the merits.

    BTW, GP . . .

    Prelim note: proof is not in the remit of science, if proof is understood as demonstration beyond rational dispute relative to axiomatic premises and empirical “facts” that are generally acceptable to rational beings. That, given the history, we can label the error of Galileo — Pope Urban VIII saw more clearly than he did on this.

    What science can do is provide empirically anchored provisional warrant on inference to best explanation that is reliable in the world of our repeatable experiences.

    And, historical sciences are observational sciences that address once for all events so they really provide more or less “plausible” models of the natural regularities and chance patterns that may have acted in the past, as in effect a “science fiction” reconstruction of the unobserved distant past. So, we should not confuse geochronology and/or cosmological models and/or macroevolutionary and origin of life models with “an ideal, indisputably accurate view of the actual past (apart form minor details of course)” — as too many evo mat advocates like to pretend that historical sciences provide; cf current US NAS statements.

    “Sciences” of the distant past, are at best plausible reconstructions that are consistent with the patterns and processes we observe in the here and now. At worst, they are deceptive images and stories “made to look [and sound] like” the men, birds, beasts, planets and stars of creation. (Cf. Rom 1 19 – 25 & 28 ff for this literary allusion.)

    But, let us not forget the theme for the thread, so that I need to look at a few points:

    1] MODEL: The brain as the mind’s i/o control computer . . .

    Refer to BarryA, OP:

    In the comment thread to my last post there was a lot of discussion about computers and their relation to intelligence. This is my understanding about computers. They are just very powerful calculators, but they do not “think” in any meaningful sense. By this I mean that computer hardware is nothing but an electro-mechanical device for operating computer software. Computer software in turn is nothing but a series of “if then” propositions. These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level . . . . For example, the “then” in response to a particular”if” might be “access a random number generator and insert the number obtained in place of the variable in formula Y.” “Unpredictable” is not a synonym for “contingent.” Even if an element of randomness is introduced into the system, however, the way in which the computer will employ that random element is determined.

    Now the $64,000 question is this: Is the human brain merely an organic computer that in principle operates the same way as my PC?” . . . If the brain is just an organic computer, even though human behavior may at some level be unpredictable, it is nevertheless determined, and free will does not exist. If, on the other hand, it is not, if there is a “mind” that is separate from though connected to, the brain, then free will does exist.

    As just re-excerpted, BarryA has posed the issue very well — a mark of a good Attorney. [If I am ever in Colorado and need such help . . .]

    To put it in the appropriate control systems terms, we know the human body is in effect a bio-tech robot. (My favourite demo is to relax one of your hands, and use the other to press gently, first on the bulge of muscles in your forearm just beyond the elbow, then at the cluster of tendons at the wrist, on the palm side. Your fingers will move like magic. It never fails to shock students who see it for the first time!)

    The brain is known to be the input-output control processor for the biotech robot.

    That brings us to . . .

    +++++++++

    2] PAUSE – controls 101 tutorial in a nutshell:

    As an introduction to the idea, look at the diagram here.

    [The Wiki article on control theory has a top-level diagram that fails to distinguish the controller from the actuator and plant in the feedforward path; since a pic is worth a thousand words, that disqualifies it as a good first go-to 101 link. It also fails to show the significance of the comparator, which generates the error signal that drives the controller. When it comes to control theory, due to its supreme difficulties, I have zero tolerance for basic errors.]

    I’ll try a pseudo-diagram, as I can find no easy, simple one out there:

    [a] FEEDFORWARD PATH:

    ref i/ps [r(t)] -> COMP. -> e(t) -> CONTROLLER -> . . .

    -> ACTUATORS -> PLANT -> controlled o/ps, c(t)

    [b] FEEDBACK PATH:

    c(t) -> SENSORS + FEEDBACK -> b(t)

    b(t) -> COMPARATOR

    [c] COMPARATOR ACTION:

    r(t) – b(t) -> e(t)

    Summarising:

    –> The reference inputs r(t) give the purposive targets to the comparator-controller elements. [For us, that’s the MIND; control theory is yet another situation in sci-tech where agency comes into the science]

    –> The comparator senses the ref i/ps and the fed-back sensor data on the plant behaviour, to generate the error signal — i.e. it does a gap-scale analysis.

    –> The controller then tries to close the gap based on its error/gap inputs, by driving actuators that act on the plant in question.

    –> Of course, not all situations are controllable in the face of external forces and noise etc.

    –> And, feedback control lends itself to pathologies such as self-reinforcing oscillations etc. [Turning negative feedback into reinforcing feedback through lags in the feedback process . . .]

    –> I will spare you the math on frequency and time domain continuous and discrete state analysis. [Let’s just say that the complex frequency domain is an eye-opener! A great online look is here.]
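    For readers who would rather see the loop itself in a few lines of code than in math, here is a minimal sketch (Python; my own illustrative example, not from any of the linked sources) of the structure just summarised: the comparator forms e(t) = r(t) - b(t), and a simple proportional controller drives the plant toward the reference.

    # Minimal proportional feedback loop:
    # r(t) -> COMPARATOR -> e(t) -> CONTROLLER -> ACTUATORS/PLANT -> c(t), with c(t) fed back.
    def run_loop(r=1.0, k_p=0.5, steps=20):
        c = 0.0                      # controlled output c(t), starting at rest
        for t in range(steps):
            b = c                    # feedback b(t): ideal unity-gain sensor reading of the output
            e = r - b                # comparator: error signal e(t) = r(t) - b(t)
            u = k_p * e              # controller: proportional drive to the actuator
            c = c + u                # plant: output moves in response to the drive
            print(f"t={t:2d}  e={e:+.3f}  c={c:.3f}")

    run_loop()                       # c(t) settles onto the reference input r(t) = 1.0

    The purposive element, note, is the reference input r(t): the loop only closes a gap that something else has first specified.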

    My key claim is simple: the core issue is where the software and reference inputs in the brain acting as controller, come from. And of course both of these are informational issues — surprise.

    +++++++++

    3] The core “TKI school of thought” contention:

    I, non-robot, contend I: that such a “software” and purposive self-determination require information storage capacity plainly far beyond UPB, so that chance is an inadequate explanation of the complex and highly specified information that is functional in intelligent life, and hard or soft-wired into our neural network architecture brains — very different from the Von Neumann archi of the classic digital computer.

    Such FSCI is vastly beyond the credible reach of lucky noise on the gamut of our observed cosmos.

    Thus, on inference to empirically anchored, best explanation, mind is the credible source of our agent behaviour, mind with significant [but not unlimited] freedom of choice and action that are not rooted in lucky noise (which is a-rational and unpredictable in the specific rather than decisional) or deterministic (thus unchoosable) natural regularities tracing to mechanical necessity alone.

    Further to this, I contend II: that we are contingent and live in a contingent cosmos, on the evidence in hand.

    That points to a further cosmogenetic agent of vast power and intelligence as the relevant necessary being to adequately explain the empirical data of the cosmos and us as contingent agents within it.

    I further contend, III: that this view is coherent, factually [empirically] adequate and explanatorily powerful and elegant; though unpalatable to the modernist mentality.

    Finally, I contend IV: that on a reasonable — historically well warranted and philosophically non-question begging, classical — “definition” of science, such an approach is scientific and potentially fruitful of inquiry.

    For we may freely explore the dynamics and programming of the i/o processor and its effects on the biotech robot, within the moral limitations of dealing with minds/souls that are ends in themselves.

    What is that definition of science? We need look no further for a good simple statement than reasonable, College-level dictionaries:

    science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 — and yes, they used the “z” Virginia!]

    scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind”] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster’s 7th Collegiate, 1965]

    That brings us to the issue: can we in turn create artificial intelligences?

    4] Is AI possible?

    That, as GP pointed out, depends on what you mean by AI etc.

    But, we already know it is possible to create embodied intelligent agents, as we are just that!

    The real question is whether we can create embodied artificially intelligent agents — not just sophisticated software and hardware that deterministically and/or by feeding in judicious amounts of random [or more usually pseudo-random] data, extends our natural ability to act into the world and solve problems etc.

    That sounds like a grand sci-tech project to me. [Just — we must take care to obey Asimov’s ethical laws of robotics so we avoid making Frankenstein monsters that would devour us! (AIG, any guidance on controlling the many would-be Dr Strangeloves out there?)]

    GEM of TKI

  136. 136
    kairosfocus says:

    PS, FYI: Asimov’s laws of robotics (for guiding AI research and technology):

    the Laws state the following:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Later, Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”; the rest of the laws are modified sequentially to acknowledge this.

    Note why I point out these are ethical not technical laws. (I also seem to recall, Singapore or some other jurisdiction wrote the laws into their Law.)

    GEM of TKI

  137. 137
    kairosfocus says:

    Kairos:

    Thanks!

    Very suitable: the WORD/REASON/LOGIC/INFORMATION himself as the author of creation!

    That looks like the fundamental ID prediction, and one that is being substantiated more and more to the astonishment, anger and confounding of those who thought lucky noise and natural regularities tracing to mechanical necessity were enough . . .

    GEM of TKI

  138. 138
    gpuccio says:

    Hi, kairosfocus.

    Obviously, you are right: I just meant “proved” in the empirical scientific sense, that is supported by facts, not in the logico-mathematical sense of “demonstrated”. It is good to be precise, anyway, given the general epistemological confusion in these debates, especially with darwinists/materialists.

    About your remark: “The real question is whether we can create embodied artificially intelligent agents”, I would say: If we can, then they will not be purely material deterministic machines. But indeed, I believe that consciousness and intelligence are the manifestation of a “transcendental” I “through” complex phenomenic realities (bodies, minds, etc.).

    By the way, are you suggesting a racial discrimination against frankenstein monsters? 🙂

  139. 139
    kairosfocus says:

    PPS: Finally found a useful discussion on control theory and related cybernetics, with nice pics, here.

  140. 140
    kairosfocus says:

    GP:

    In fact the issue of the Frankenstein destructive monster led to a first take on robotics which was a condition of slavery.

    In the 1990s, Asimov more or less “oversaw” an emancipating, partnership-oriented revision to his laws:

    In the 1990s, Roger MacBride Allen wrote a trilogy set within Asimov’s fictional universe. Each title has the prefix “Isaac Asimov’s”, as Asimov approved Allen’s outline before his death. These three books (Caliban, Inferno and Utopia) introduce a new set of Laws.

    The so-called New Laws are similar to Asimov’s originals, with three substantial differences.

    The First Law is modified to remove the “inaction” clause (the same modification made in “Little Lost Robot”).

    The Second Law is modified to require cooperation instead of obedience.

    The Third Law is modified so it is no longer superseded by the Second (i.e., a “New Law” robot cannot be ordered to destroy itself).

    Finally, Allen adds a Fourth Law, which instructs the robot to do “whatever it likes” so long as this does not conflict with the first three Laws.

    The philosophy behind these changes is that New Law robots should be partners rather than slaves to humanity. According to the first book’s introduction, Allen devised the New Laws in discussion with Asimov himself . . .

    Onward in discussion of various other playouts, the point of further modifying the first law has been raised: a small group of robots claims that the Zeroth Law of Robotics itself implies a higher Minus One Law of Robotics: A robot may not harm sentience or, through inaction, allow sentience to come to harm. [But what if these sentient beings happen to be evil and destructive, a la Hitler and co. or Stalin and co.?]

    This is all very interesting and shows the potential of sci fi [and related fantasy and apocalyptic of various stripes; cf. the online book here on Islamic vs Christian apocalyptic and how this may play into our near-future . . .] to play out models and test the core worldview issues.

    Ze plot thickens my dear Dr Watson . . .

    GEM of TKI

    PS: I would like for us all to read the Derek Smith article on cybernetics just linked, and Acts 27, considered as a microcosm of cybernetics with active agent involvement [including the issue of the supernatural-prophetic interacting with men making self-determined collective decisions shaped by rhetoric and by events . . .] — including questions of democratic governance in a socio-technical systems context — and then take this thread to the next level. I think we can do something serious with it, if we keep on issue!

  141. 141
    kairos says:

    #137 kairosfocus

    Very suitable: the WORD/REASON/LOGIC/INFORMATION himself as the author of creation!

    That looks like the fundamental ID prediction, and one that is being substantiated more and more to the astonishment, anger and confounding of those who thought lucky noise and natural regularities tracing to mechanical necessity were enough . . .

    I’ve always thought that the incipit of John’s gospel is one of the most powerful statements of the intelligent and pervasive creation by God, especially in these words (which in Greek sound even more effective):

    3. PANTA DI AUTOU EGENETO KAI XWRIS AUTOU EGENETO OUDE EN O GEGONEN

    I think that this point should be read more and more by Christians who are supporters of theistic evolution.

  142. 142
    kairosfocus says:

    Kairos, GP and others (incl esp Q and AIG):

    First, I had hoped, over the past few days, that once the cross-purpose remarks, distractions and resulting confusions were cleared, there would now be a discussion of serious direct and incidental issues on this very important thread on AI and related ID issues, on the merits.

    I therefore find it highly interesting that once the situation cleared over the past few days, the field was abandoned to the ID supporters; as (by and large) happened with the previous Epistemology thread, the NFL theorems thread and the Amazon thread. (That is beginning to suggest to me that the ID case is sufficiently strong on the merits that the objections are rooted in distractions, miscommunications and side-issues — inadvertent or in some cases at say NCSE or ACLU levels, probably agenda-driven.)

    Be that as it may, I have raised the Derek Smith remarks on cybernetics and the interesting historical incident in Acts 27 as a context for further development of the key questions of this thread.

    For instance:

    1] DS, in the previously linked on control and the use of information:

    to have control you have to be able to manipulate information. Specifically, you need to transmit information on how you want your mechanism to perform in the first place, then you need to receive information on how things are currently going (in order to keep that performance within tightly preset limits of acceptability), and then you need to transmit information whenever you want to make the necessary adjustments. This gives you two types of control information. The transmitted information is feedforward – the information which instructs a mechanism or process on what needs to be done, and the received information is feedback – information coming back from that mechanism or process, and telling you how things are progressing.

    These two fundamental types of information then circulate in a very special way in what is known as a control loop. This is a mechanism (such as Lee’s or Mead’s) for detecting deviations from some preset standard, and for taking appropriate corrective action.

    In short, once we see tight, feedback-based control in the face of environmental disturbance, we have significant information processing in the context of a specific architecture and algorithmic framework.

    This of course very rapidly exceeds the UPB, and plainly manifests FSCI.

    2] Another important control concept emerged with the development of power-assisted steering for ships. Here, too, the essence of the problem was that muscle power alone was not enough . . . The turning of all such rudders required considerable manual effort especially in high seas, and in vessels of any size block-and-tackle systems had to be used to “gear down” a lot of turns on the helm (ie. the steering wheel) to a small displacement of the rudder itself . . . . Servomechanisms (or “servos”, or “slave systems”) are important because they allow a small control system to control pieces of far heavier machinery.

    This is of course most directly relevant to the challenges in the Ac 27 case, which also allows us to distinguish local direction control and global decision-making, planning, management and emergency response on the path to the desired port, i.e. control in the small and in the large, for a mechanical displacement requiring considerable power.

    This is also relevant to animal navigation and to proposed autonomous robots that exhibit AI.

    3] a comparator cannot actually do its job at all without some sort of memory capability, and things get worse in servo-assisted systems due to the physical separation of servo and controller . . . motor activity – efference – does not just induce movement, but also affects what is picked up by the senses. As soon as you start moving, proprioceptors will tell you about limb position and balance, cutaneous receptors will tell you about changes in touch, pain, temperature, and pressure, homeostatic systems will signal requests for blood pressure and blood glucose maintenance, and special senses (eyes and ears) will detect the changing visual and auditory shape of the world. The senses, in short, are involved every bit as much in motor activity as are the motor pathways.

    He then speaks to the practical solutions:

    What efference copy systems do, therefore, is subtract what you expect your senses to tell you next from what they actually tell you next. This gives zero if things are going to plan, but a non-zero error signal if they are not. This comparison is achieved by momentarily storing an image of the main motor output – the efference copy – and by then monitoring what is subsequently received back from the senses – the reafference. The principal benefit, of course, comes when the two flows totally cancel each other out, because this leaves the higher controller free to get on with more important things. Every now and then, however, the system encounters some sort of external obstacle, or “perturbation”. This causes the reafference not to match the efference copy, and this, in turn, causes the higher controller to be interrupted with requests for corrective action. And because the sensors in an efference copy system thereby become capable of confirming for themselves that the effectors are working to plan, this is a highly efficient way of reducing unnecessary network traffic.

    This, BTW, has a lot to say to the significance of athletic visualisation of perfect performance as an aid to such performance! (And to how once we have got the knack through supervised or exploratory instruction and practice, we can then have “muscle memory” of how to do a skilled task “effortlessly.”)

    Perhaps, not so BTW: this speaks straight into the issue of training of AI type systems!
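    To make the efference-copy idea concrete, here is a small illustrative sketch (Python; mine, not Smith’s, and the forward model is deliberately trivial): the predicted sensory consequence of the motor command is subtracted from the reafference, and only a non-zero difference, i.e. a perturbation, interrupts the higher controller.

    def efference_copy(motor_command, reafference, forward_model):
        expected = forward_model(motor_command)   # stored image of the command's expected sensory effect
        error = reafference - expected            # subtract expectation from what the senses report back
        if abs(error) < 1e-9:
            return None                           # on plan: higher controller left undisturbed
        return error                              # perturbation: request corrective action

    # Illustrative use, with a deliberately trivial forward model:
    forward_model = lambda cmd: 2.0 * cmd         # "this much drive should feel like that"
    print(efference_copy(0.5, 1.0, forward_model))   # None  (reafference matches the copy)
    print(efference_copy(0.5, 1.3, forward_model))   # ~0.3  (an obstacle was met)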

    That brings us to:

    4] An even more advanced way to make use of past experience is to develop some form of predictive control system, that is to say, a system where the efference at every level of the control hierarchy is prepared well ahead of actual need (up to seconds ahead in many cases, but hours or even days ahead in the case of more “strategic” predictions). Indeed, there is considerable insight to be gained (even as non-technologists) by considering the problems faced by robotics engineers. Maravall, Mazo, Palencia, Perez, and Torres (1990), for example, are among the many research teams working in this area, and have obtained good results with robots capable of constantly making guesses at what is coming next.

    Thus, we see what-if planning and foresight and anticipation of contingencies, etc., thence all the issues of governance to plan and guide the ship to the best advantage of its company. Unfortunately, unsound counsels prevailed and led to disaster. (And at a yet higher level, these freedoms of action and error were used by God in the account, to advance his own long-term strategy of moving Paul from Jerusalem to Rome!)

    That brings up the significance of issues tied to DS’ fig 2:

    5] This is a more powerful version of the control system shown in Figure 1. The three original modules are still there (albeit the internal logic is no longer shown), only now they have to support multiple muscle groups (top right), which, properly coordinated, allows the organism to develop significantly more complex behaviours such as swimming, running, flying, etc. There are also several important new modules (and important new memory resources to go with them). (1) a higher order controller (far left) replaces the external manual source of command information. This means that there is no longer any high-side system boundary, making the new layout self-controlling. That is to say, it is now capable of willed behaviour, or “praxis”

    That is, once there is an intelligent controller to drive the servo loops etc, there is now a capacity for autonomy: self-controlled behaviour.

    Thus now we have a conceptual apparatus to address the issue of the brain as a controller and the mind as the intelligent director that in effect feeds creative, decisional, configurational information to the lower level i/o processor, the controller.

    The evo mat contention is that such higher order direction is emergent from the lower order controller, and that it is in effect the product of a process of random variation and natural selection ultimately going back to the proposed events in a hypothesised pre-biotic soup aeons ago.

    The design argument on this is that the relevant entities, starting with the basic biochemistry of life, exhibit multiple tiers of FSCI which is beyond the probabilistically credible reach of random walk based config space initial searches [arrival of the fit — not survival of the fittEST — issue] that would be required to get us to the islands of functionality in the beyond merely astronomical sea of possible configurations. The vast majority of these are non-functional in the relevant senses.

    So, may we now proceed?

    GEM of TKI

  143. 143
    Q says:

    KF mentions in 142, “(That is beginning to suggest to me that the ID case is sufficiently strong on the merits that the objections are rooted in distractions, miscommunications and side-issues — inadvertent or in some cases at say NCSE or ACLU levels, probably agenda-driven.)”
    Now please don’t take this the wrong way, but your posts are invariably really long and cover a lot of ground. I’ve tried to discuss specifics of some of the posts with you, but because of the quantity of topics in your posts, it becomes difficult to stay focussed. (Maybe my deficiency, in that I can’t tell the forest from the trees.)

    But, it is incorrect to assume that it suggests any strength to your arguments – that strength comes directly from how well the arguments stand up against analysis, not whether the analysis stops.

    I would prefer it if we could discuss specific issues, even if they are side-issues exploring variants of larger topics being raised. (Note that I’m not addressing any strength or weakness of any positions, but instead am addressing the process in which those points are discussed.)

  144. 144
    kairosfocus says:

    Q:

    In re: it is incorrect to assume that it suggests any strength to your arguments – that strength comes directly from how well the arguments stand up against analysis, not whether the analysis stops.

    You just substantiated my point!

    Onlookers, observe the typical evo mat [and of course fellow-traveller] assertion/ assumption/ inference that the design theory conclusion is “unsubstantiated” or “weak” or a fallacy etc. Then, as the above thread documents, the attack is to the red herring leading out to the strawman that is duly knocked over. Depending on the case, it is then left on the ground or subjected to mayhem or even soaked in oil and ignited to cloud the atmosphere and poison it. Then, when the flames are doused and the atmosphere is given a chance to clear — whoops, there goes another red herring leading out to yet another strawman. In short, the meta-level analysis reveals a telling rhetorical pattern frequently resorted to by those who have institutional power but not solidity on the merits, in defence of an agenda. And of course, coming down from the level of a NCSE or an ACLU to a Q, that then leads to multiple layers of red herrings, a crowd of oil-soaked strawmen a-burning and a very clouded and poisoned atmosphere.

    So, back to the beginning, i.e. Cicero:

    Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 – 90.]

    Onlookers, do you see any credible reason to conclude that on the gamut of the observed universe, chance + necessity can spontaneously create FSCI-bearing configurations? Why then is there always an objection to the FACTS:

    [i] that the only directly known cause of FSCI is agency, and

    [ii] that on grounds of how hard it is to get by random walk from arbitrary initial points to isolated islands of function in vast config spaces specified by FSCI, we will reliably — on good statistical thermodynamics principles as already discussed and exemplified [and BTW, thought experiments ARE, from Galileo on, used as part of the warranting process of modern science, pace Q’s objections] — starve for probabilistic resource exhaustion on the gamut of the cosmos before we can reach to the shores of bare functionality so that much-beloved hill-climbing optimisation algorithms [such as NS in various forms] can move on towards “the fittest.”

    In short, chance + necessity alone do not credibly account for FSCI relative to principles of scientific induction. But, agency is KNOWN to routinely do so.
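    For concreteness, a rough back-of-envelope sketch (Python; using the commonly cited Dembski-style figures, which are order-of-magnitude estimates rather than measurements) of the probabilistic resources being appealed to, set against the lower 500-bit threshold mentioned in this thread:

    from math import log10

    atoms   = 1e80    # rough count of atoms in the observed cosmos (commonly cited estimate)
    rate    = 1e45    # upper bound on state changes per second (~1/Planck time)
    seconds = 1e25    # generous upper bound on time available

    trials  = atoms * rate * seconds        # elementary events available in the whole cosmos
    configs = 2 ** 500                      # distinct states of a 500-bit string

    print(f"available events ~ 10^{log10(trials):.0f}")    # ~ 10^150
    print(f"500-bit configs  ~ 10^{log10(configs):.1f}")   # ~ 10^150.5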

    So, inference to agency on cases where we just happened not to be there to directly observe is reliably warranted and routinely used all over science. Indeed, it is selective hyperskepticism that suddenly objects when evo mat assertions are under challenge by the direct implications of key cases in point.

    Let’s cut to the chase, in points:

    1 –> Q, this thread is about the nature of agency, in light of the significance of agency. [For brevity, cf. BarryA’s OP. That’s another complaint — refusal to look at references or quote-mining of references, e.g. the missing of my point xi above.]

    2 –> Incidental to that, the issue of the legitimacy of inference to probabilistic resource exhaustion by random searches on a config space came up, leading to my example of the scaled-down Fred Hoyle tornado that makes a jumbo as it passes through a junkyard in Seattle. [Onlookers, observe the tiptoe-away once the relevance of xi was highlighted on the strawman objection on protein-protein interaction being “spontaneous” — also Dave Scott’s point that if components are so made that they self-cluster for working, that is a sign of MORE design, not less.]

    3 –> Back on point, I have highlighted that it is arguable that the brain serves as the i/o processor for the mind acting as intelligent director, using Derek Smith’s exploration of cybernetics issues as a context.

    4 –> In particular, I have pointed to the significance of the architecture of servo control loops in the human body considered as a bio-robot controlled by the brain, the information storage transmission and algorithmic processing requirements and their implications for the presence of multi-layered FSCI. [This is already well beyond the credible reach of chance on the gamut of the cosmos, cf. Trevors and Abel (in a peer-reviewed article), also here in a second peer-reviewed ID-supportive article, on the implications of sequence complexity.]

    5 –> Further to this, I have also pointed to the significance of Smith’s introduction of the intelligent director for the controller and of associated required memory, imaginative projection of scenarios for action, decision-making etc.

    6 –> Namely, an autonomous, sophisticated servo-system sufficient to manage the functions of a bio-robot is highly information-rich, and that with information that is imaginative, speculative, exploratory and creative, though anchored to real-world experience.

    7 –> In short, it is an intelligent agent.

    8 –> Furthermore, it draws on the “imaginative info-sphere” of likely to be effective dynamical configuration and change models, to guide i/o control and actuation, which act into the world to effect desired purposes and achieve goals.

    9 –> These imaginative infosphere models are creative and reasoned based on the logic of the dynamics of functional islands in configuration spaces, not at all the fruit of random walks from arbitrary initial positions. Precisely, they are the work of intelligent agents.

    10 –> So, we see again that when we observe say the FSCI of this comment, we infer to actuation under intelligent agent control acting through fingers and PC technology to end up as messages sent and posted at the UD blog, not mere lucky noise. Mind over matter in short.

    11 –> Thence, when we observe the storage and transmission and use of such FSCI as system-functional information, we are well-warranted to infer to agency as the responsible force. That holds for DNA in OOL and OOBPLBD, and it holds for the underlying physics of a fine-tuned, life-facilitating cosmos.

    So, now, Q: just where and why is it that — on inference to best explanation across live option alternatives relative to factual adequacy, logical and dynamical coherence and explanatory elegance/power — you find the above chain of reasoning defective?

    [Worldview-level question-begging and/or selective hyperskepticism are not allowed. Also, you should reckon with Trevors and Abel’s two peer-reviewed papers — even if only the abstracts and whatever dipping you need to do to make sense of the summaries — in your answer.]

    GEM of TKI

  145. 145
    kairosfocus says:

    PS: FYI Q, it also takes up far more to responsibly address an objection on the substance implicated, than it does to assert the objection and use it as a dismissal to fend off an issue rather than actually consider the matter carefully and fairly on the balance of the merits. [That’s how the classic fallacy of the closed mind works, insofar as it is “rational.” Emotional prejudices we can of course dismiss as irrelevant to this discussion, surely.]

    In sum: Dismissals do not have to be accurate, they do not need to be balanced or balancing, they do not need to adduce facts and make arguments by explanation or by implication from accepted points, nor do they need to look at comparative difficulties.

    By sharp contrast, serious and responsible arguments do.

    So, let us return to the merits.

  146. 146
    Q says:

    OK, on the merits:

    First, it is a given that unless analogies have 100% correlation to the topic being investigated, they will provide less than 100% of an explanation of that analogous topic.

    So, when you ask just where and why is it that — on inference to best explanation across live option alternatives relative to factual adequacy, logical and dynamical coherence and explanatory elegance/power — you find the above chain of reasoning defective?, my first point is that your analogy is less than 100% representative of all possible such systems. For extreme systems with obviously complex interactions, sure, let’s not argue.

    But there is a huge spectrum of systems that can “store”, “transmit”, and exhibit “functionality”. Some will have very few parts. Some of these will not be so obviously complex that one could automatically infer that they exhibit FCSI. Some will need to be actually evaluated to see on which side of each step of the explanatory filter they lie. Again, that is not arguable – a spectrum of possibilities does exist, and we presently have limited knowledge.

    So, my argument with your chain of reasoning, no matter how elegant, and no matter how many references you link to, is that your conclusion is too broad for your premises. (BTW, I was unable to link to the Trevors and Abel links you provided. But, this link http://www.iscid.org/boards/ub.....00674.html about “Self-organization vs. self-ordering events in life-origin models” does. This thread isn’t about life-origin models, is it? BarryA didn’t bring that up, neither did I. So, I did not argue with your xi above. BarryA was discussing qualia, and mind being separable from brain.)

    To fisk your argument (ellipses are used for brevity, and not to hide parts of a quote available directly above): You start with the analogy of brain as machine (… the brain serves as the i/o processor for the mind acting as intelligent director…; … architecture of servo control loops in the human body considered as a bio-robot controlled by the brain …; … the intelligent director for the controller and of associated required memory…) Nothing really to dispute here, as it is setting up the analogy.

    You arrive at a conclusion about the analogy (In short, it is an intelligent agent.) Sure. That is why the analogy was selected.

    Then, you extend the analogy to a new scenario (… Thence, when we observe the storage and transmission and use of such FSCI as system-functional information, we are well-warranted to infer to agency as the responsible force.) This is where you apply an inference to the general case of FCSI. Note that it does not correlate 100% to the original analogy.

    Finally, you arrive at a wholly new conclusion using the process (That holds for DNA in OOL and OOBPLBD, and it holds for the underlying physics of a fine-tuned, life-facilitating cosmos.) The original system was not about DNA, and didn’t really mention fine-tuning. Even if the conclusion is right, your premises don’t lead to that conclusion with the 100% confidence inherent in how it is written.

    As I mentioned, not all systems are as complex as your original scenario. Some are nowhere near as obviously complex. DaveScot was correct in mentioning that some systems can assemble from materials exposed to random events (like a tornado in a junk yard). This is even more likely (many orders of magnitude more likely) if these materials are able to exert force over a distance – like with charge. Note that this is not an argument for life-origins – it is about your claims regarding the likelihood of some basic storage and transmission systems arising without intelligent agency.

    These non-obvious, and relatively simple systems may or may not fall on the FCSI side of an explanatory filter. An experiment would be needed to demonstrate how well your analogy can be extrapolated to some of these non-obvious cases. As a side note, even Galileo backed up his thought experiments with actual experiments. That’s one of the main differences between him and Plato.

  147. 147
    StephenB says:

    Q: You don’t seem to appreciate what Barry A has accomplished with this post. In effect he has approached the mind/brain problem from both a scientific and a philosophical vantage point. In other words, he is using the reasoned conclusions of each discipline to verify the reasoned conclusions arrived at by the other. In my judgment, kairosfocus has adequately addressed the scientific difficulties involved. So much so, that I think it is time to get to the bottom line, because I don’t think you are going to accept the scientific evidence for the reality of the mind. I notice, for example, that you have neither commented on nor even read “The Spiritual Brain.” So I think it is time to change direction and face the philosophical point.

    Here is Barry A’s proposition: {1} Free will requires the presence of a non-material mind independent of the brain. {2} A non-material mind independent of the brain indicates free will. This is an extremely revealing comment. In philosophy, it is known as a bi-conditional proposition, which means, If A/then B. Also, If B/then A. Usually, that pattern does not hold in logic, but it does hold here. Barry A is right but you seem to shrug off all of the implications involved. Inasmuch as you disavow the existence of the mind, it is time to make the corresponding assertion about volition—go ahead and reject free will and complete the cycle. Take the final step and concede that all of our attempts to persuade each other are futile. We are nature’s plaything, and the laws of nature operating through our “brain” dictate our every move.

    Given your perception of reality, why do you bother to raise objections at all? If your world view is true, then Kairosfocus, myself and everyone else on this blog do what we do only because fate requires it of us. We are, for want of a better term, determined to think and act as we do. Since we have no volitional powers, why do you appeal to them? Why raise objections in an attempt to influence when it has already been established that only non-material minds can influence or be influenced? Why propose a change of direction when only intelligent agencies have the power to do that? Since brains are subject to physical laws of cause and effect, they cannot rise above them and, therefore, cannot affect them. Brains cannot influence brains. Why then, do you ask any of us to change our minds when, in your judgment, there are no minds to change?

  148. 148
    ari-freedom says:

    StephenB wrote
    Take the final step and concede that all of our attempts to persuade each other are futile.

    how can he concede to do anything if he’s just “following orders”? There’s no answer to the “why” you’re asking; he just does

  149. 149
    kairosfocus says:

    Q:

    First, sorry on the T-A links; I mistakenly assumed that they would still be available. Doubtless they are now behind the US$32-a-peek paywalls that are ever more common!

    And, the issue of crystallisation or vortex or convection cell formation in your link in 146 – VERY BAD ANALOGIES, BTW! – is relevant to the overall question, as it speaks to the formation of order under chance and necessity. But as has been known since the days of the OOL researchers leading up to Thaxton et al in the early 1980’s, such “order” has little or nothing to do with the information-rich “organised complexity” that is functional and specific, in the cases of interest to us. Cf. the discussion on the origin of the FSCI concept here in my always linked, App 3.

    You, unfortunately, therefore continue to show the force of my point on the rhetorical pattern at work. On select points:

    1] Analogies are weak arguments . . .

    [Q, 146]: unless analogies have 100% correlation to the topic being investigated, they will provide less than 100% of an explanation of that analogous topic . . . . But there is a huge spectrum of systems that can “store”, “transmit”, and exhibit “functionality”. Some will have very few parts. Some of these will not be so obviously complex that one could automatically infer that they exhibit FCSI. Some will need to be actually evaluated to see on which side of each step of the explanatory filter they lie. Again, that is not arguable – a spectrum of possibilities does exist, and we presently have limited knowledge.

    So, my argument with your chain of reasoning, no matter how elegant, and no matter how many references you link to, is that your conclusion is too broad for your premises.

    First, this is nicely vague.

    But, FYI, I am NOT arguing by “analogy,” but have pointed, right from the beginning [starting with the 50 BC cite of Cicero, which may be seen at the HEAD of my always linked and is cited above], to the key instance of FSCI. Namely, digital — discrete-state — information storage with sufficient string length that chance configurations are maximally unlikely to arrive at the islands of functionality in the vastness of the configuration space relative to reasonably available probabilistic resources. In short, if something stores or transfers information that requires the equivalent of significantly more than 500 – 1,000 2-state storage elements [the upper threshold being to allow precisely for LARGE islands of functionality], then for excellent reason we routinely and reliably infer to agency, not lucky noise, as its credible source.

    Thus, when your block of text excerpted just above gets around to saying that some things store and transmit information that is functional, while being simple, you have again gone after a strawman.

    I am aware of such cases, and have no interest in them; I am perfectly willing to allow inferences that would improperly assign a simple design to chance. And that is exactly what the explanatory filter deliberately does: is the matter traceable to law-like necessity? If so, then it is not designed.

    Is it contingent but sufficiently lacking in complexity that it could be accounted for by chance on the gamut of the cosmos as a whole acting as a vast lottery running at the rate of one trial every 10^-45 seconds, with one marked atom, and you could plunge your hand in anytime, anywhere and have a reasonable chance to pluck out that marked atom? If so, then, chance.

    Only on cases significantly beyond the reach of chance on the gamut of the cosmos are we interested in inferring to design as the best explanation. And indeed, we routinely do just that in cases of the digital strings in, say, this blog thread.

    In short, “simple” cases of functionality are irrelevant to the cases of interest that DO exhaust the available probabilistic resources of the observed cosmos: [a] OOL based on DNA of credible minimal chain length 300,000 – 500,000 bases [600,000 – 1,000,000 bits, translating to 2-state digital equivalents], [b] OOBPLBD that may require the innovation of 100,000,000 or more such elements dozens of times to account for the Cambrian life revolution, [c] the informational requirements to get the fine-tuning of our observed, life-habitable cosmos. [On this last, proposing vast multiples of the cosmos with randomly jiggled parameters etc. is precisely an admission of just how much information needs to be accounted for — it is a crude attempt to exhaust the config space requirements by assuming a vast invisible cluster of unobserved sub-cosmi.]
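    For readers who want the filter’s logic spelled out step by step, here is a bare-bones sketch (Python; my own paraphrase of the verbal description above, with the 500 – 1,000 bit window as the threshold range), not a definitive formulation:

    def explanatory_filter(traceable_to_law, functionally_specified, info_bits,
                           threshold_bits=1000):
        # Step 1: law-like necessity accounts for the outcome -> not designed.
        if traceable_to_law:
            return "necessity: not designed"
        # Step 2: contingent, but within the reach of chance on the gamut of the cosmos.
        if not functionally_specified or info_bits < threshold_bits:
            return "chance: design not inferred"
        # Step 3: contingent, functionally specified, and beyond the threshold.
        return "design: inferred as best (provisional) explanation"

    print(explanatory_filter(False, True, 120))        # chance: design not inferred
    print(explanatory_filter(False, True, 600_000))    # design: inferred as best (provisional) explanation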

    2] my argument with your chain of reasoning, no matter how elegant, and no matter how many references you link to, is that your conclusion is too broad for your premises.

    Naked, dismissive assertion based on a strawman.

    Again, my premises are – as just yet again summarised and as may be read at length in the always linked — in the first instance relative to cases of sufficiently complex, functionally specific digital strings. [And recall, digital does not mean just 2-state.]

    My initial observation is that, on grounds tied to basic principles of thermodynamics that underlie the phenomenon of diffusion for instance, and on direct empirical observation, in all cases where we directly know the origin of FSCI, it is a product of agency. So, on induction through empirically anchored inference to best – and of course provisional [i.e. falsifiable] – explanation, FSCI is a reliable marker of agent action.

    Indeed, as the Derek Smith link and excerpts I made in 142 show, this is sensible as agents are able to envision a suitable pattern of configurations and control actuators to move towards the islands of probable functionality in the config space. Thence, they are able to refine the functionality towards better and better results through hill-climbing, improving-of-performance behaviour: practice makes perfect.

    But in so doing what they have first done is to use background knowledge and/or imagination to isolate likely sites of islands of desirable functionality in the config space, i.e. they have cut out most of the search required by a random-walk starting from an arbitrary location. [And as my thought experiment, Hoyle’s similar one on a grander scale and of course Robert Shapiro’s remarks in Sci Am as excerpted in 116 above all show, this is not by any means an irrelevant consideration!]

    Let us then look at the points in, say, no. 135, the key issue and contentions I to IV:

    My key claim is simple: the core issue is where the software and reference inputs in the brain acting as controller, come from. And of course both of these are informational issues — surprise . . . .

    I, non-robot, contend I: that such a “software” and purposive self-determination require information storage capacity plainly far beyond UPB, so that chance is an inadequate explanation of the complex and highly specified information that is functional in intelligent life, and hard or soft-wired into our neural network architecture brains — very different from the Von Neumann archi of the classic digital computer [That is, we are dealing with FSCI, for which the best, empirically anchored explanation is agent action.] . . . .

    I contend II: that we are contingent and live in a contingent cosmos, on the evidence in hand.
    That points to a further cosmogenetic agent of vast power and intelligence as the relevant necessary being to adequately explain the empirical data of the cosmos and us as contingent agents within it.
    I further contend, III: that this view is coherent, factually [empirically] adequate and explanatorily powerful and elegant; though unpalatable to the modernist mentality.
    Finally, I contend IV: that on a reasonable — historically well warranted and philosophically non-question begging, classical — “definition” of science, such an approach is scientific and potentially fruitful of inquiry.
    For we may freely explore the dynamics and programming of the i/o processor and its effects on the biotech robot, within the moral limitations of dealing with minds/souls that are ends in themselves.

    [. . .]

  150. 150
    kairosfocus says:

    3] This thread isn’t about life-origin models is it? BarryA didn’t bring that up, neither did I. So, I did not argue with your xi above. BarryA was discussing qualia, and mind being separable from brain.) . . .
    Classic evasion.
    I came in at the request of AIG, in response to his 93, where he was inter alia discussing the issue of CSI – the superset to which FSCI belongs, and the issue he makes over what intelligence is. I did so by in part highlighting why we infer to intelligent agency as a causal force in seeing FSCI, based on relevant statistical thermodynamics principles-based considerations. This can be seen in my remarks at 99, where I first address the issue of definition as an epistemic, investigatory process.

    Then I conclude by excerpting the definition process used for intelligence in my always linked, which raises precisely the issues that I later addressed. In 103, I cited the Cicero case to show for how long a sufficiently complex and functional digital string has been held to be a marker of agency at work. In short my observations are consistent with longstanding intuitive understanding. It is in that context and in the context of the typical selective hyperskepticism used in dismissal that I pointed to the significance of DNA as an illustrative example, and then introduced the microjets case of getting to DNA [and proteins etc] in a model of the pre-jet [cf prebiotic] soup.

    Note in this context, onlookers, that in 104, Q is addressing very similar matters, on the probability question. To do so, he raises the issue of the explanatory filter. (Indeed, in 111, repeating the dismissive mantra on weak arguments, he then went on to misrepresent my remarks on the EF, as if I were saying – just the opposite to what I do say, repeatedly and even insistently – that the EF produces a conclusion as by deductive argument rather than by provisional, empirically anchored inference to best explanation.)

    It is in that general context of discussion that I laid out my summary argument in 109 and gave the thought experiment in 110, which in point xi explicitly identifies that it is speaking, in effect, to the pre-jet soup as a model of the [statistical thermodynamics principles based] dynamics of the pre-biotic soup, to bring out the issues on origin of discrete-state, functionally specified information bearing strings. [DNA string elements are 4-state, and protein string elements are overwhelmingly 20-state.]

    As onlookers can easily see by scrolling back up, a subsequent talking at cross-purposes occurred when Dave Scott spoke to the microjets case on the assumption that I was speaking to protein-protein functional interactions [which, once proteins are duly assembled and put into proximity is “spontaneous”], and my continued assumption that we were both talking to the context of the pre-jet soup, as an analogy of the prebiotic soup – considered as a test case on the credibility of chance + necessity alone forming FSCI-bearing information strings [thence addressing the inference to intelligent agency as the most credible source of such FSCI] – led to Q’s earlier dismissal attempt that my argument was not cogent.

    Subsequently, the matter of cross-purpose was clarified through pointing out point xi, but instead of addressing the issue on its merits, we are back to vague “weak argument” dismissals and evasions. Just as I pointed out in 142.

    BTW, it should be noted that by 113 I did take up the issue in the thread in the main:

    on the main issue in the blog thread, I observe that, per AmH dict as a witness, the word empirical means:
    a. Relying on or derived from observation or experiment: empirical results that supported the hypothesis. b. Verifiable or provable by means of observation or experiment: empirical laws.
    It seems to me that our first person experience of ourselves as agents with reasonably reliable minds that manifest intelligence [e.g. through producing functional information], and our consistent observation of others as agents, fits in under this rubric. So, I think there is excellent reason to hold that any claimed account of intelligence that ignores or cannot credibly ground this fact and its origins on its premises is a non-starter.

    4] You arrive at a conclusion about the analogy (In short, it is an intelligent agent.) Sure. That is why the analogy was selected.
    At last we get to a specific “analogy.”

    If you had paused to read Derek Smith’s argument, you would have seen that he was specifically speaking of the human body as a complex servosystem. Robots are the general class of the relevant servosystems [e.g consider the control of your arm], from the perspective of control systems engineering and cybernetics.

    In that context, he pointed to the existence of two levels of controllers in an autonomous system, the first being the actual i/o control processor that drives effectors/actuators and draws in feedback information on the actual vs planned track of motion. The second is the intelligent director, as I described it, which images the relevant desired path ahead of time and sets up the track so that the control loop proper can then compare actual vs intended track, thence responding to the difference between planned and actual track, reducing information processing dramatically.

    Cf here points 3 – 5 in 142, noting how DS uses “higher order controller” in a context that explicitly addresses freedom of conception of desired path, decision and action, i.e. creative self-direction, actually using the concept “it is now capable of willed behaviour, or ‘praxis’.” [This also raises the implication of coming close to self-consciousness and qualia as raised by BarryA, though it does not directly address it. Certainly the autonomous servosystems in view are sensitive to their current and anticipated environments, select goals and paths to them, then act with management of actual vs intended performance. But that still leaves our own known self-awareness — sense of I-ness and associated phenomena such as qualia – as an open question as to what it is and whence it comes; though it is a massive empirical fact.]
    ANALOGY, as a dismissal term on my use of “intelligent agent,” is plainly inapt once we are dealing with predictive control systems in which the efferent pattern may be creatively imaged and decided on well ahead of time. For, we are now dealing with intelligent, creative, decisional and complex, functional actions that generate FSCI that guides action ahead of time, and then triggers a servo-based regulatory process that keeps complex performance close to track.
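    To picture that two-tier arrangement, here is a compact illustrative sketch (Python; my own toy example, not Smith’s): a higher-order director first images a desired trajectory, and a lower-order servo loop merely tracks it, so the creative, decisional information enters at the director, not within the loop.

    def director_plan(goal, steps):
        # Higher-order controller (the "intelligent director"): images the desired
        # trajectory ahead of time. The decisional information enters here.
        return [goal * (t + 1) / steps for t in range(steps)]

    def servo_track(plan, k_p=0.6):
        # Lower-order i/o controller: only closes the gap between plan and output.
        c = 0.0
        for r in plan:
            c += k_p * (r - c)        # comparator + proportional correction
        return c

    trajectory = director_plan(goal=10.0, steps=50)
    print(servo_track(trajectory))    # the output ends up close to the imaged goal of 10.0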

    Of course, this invites the point that instinct can supply some of the higher order programming, but that immediately raises the question of where such FSCI-based programs came from, other than by their empirically known best explanation [as again discussed above]: agency.
    Also, so soon as autonomous creativity and its qualitatively different adaptability are to be reckoned with, intelligent agency is directly implicated in the servo-system. And, such is at least possibly the target of AI research.

    But the bottom-line is clear: the brain can, plainly, credibly be viewed as in material part an i/o processor with a major focus on execution and regulation rather than creativity. It acts on information but is not credibly sourcing that information through lucky noise or empirically known regularities tracing to mechanical necessity. And from our own observation of agent action, we know that agents do produce such FSCI as a matter of routine behaviour – whether or not such sits easily with evo mat paradigms.
    But at this cross-paradigm scientific research programme level evo mat cannot properly claim to be a privileged position, nor may it self-servingly redefine science to suit its wishes.

    [. . .]

  151. 151
    kairosfocus says:

    5] you arrive at a wholly new conclusion using the process (That holds for DNA in OOL and OOBPLBD, and it holds for the underlying physics of a fine-tuned, life-facilitating cosmos.) The original system was not about DNA, and didn’t really mention fine-tuning. Even if the conclusion is right, your premises don’t lead to that conclusion with the 100% confidence inherent in how it is written.
    Now, what is it that I actually said “holds for” DNA in OOL and OOBPLBD and OO POLFFTC? Let the record of post 144 speak:

    11 –> Thence, when we observe the storage and transmission and use of such FSCI as system-functional information, we are well-warranted to infer to agency as the responsible force. That holds for DNA in OOL and OOBPLBD, and it holds for the underlying physics of a fine-tuned, life-facilitating cosmos.
    So, now, Q: just where and why is it that — on inference to best explanation across live option alternatives relative to factual adequacy, logical and dynamical coherence and explanatory elegance/power — you find the above chain of reasoning defective?

    But, in my actual argument, just what are we well-warranted [cf above for why] to infer to, and relative to just what observations?

    a –> OBSERVATION: the storage and transmission and use of such FSCI as system-functional information
    b –> CONTEXT: the empirically known reliable origin of FSCI, thus anchoring the SCIENTIFIC INFERENCE that FSCI is a reliable sign of agency acting in intelligent design.
    c –> INFERENCE: On IBE in the context just summarised, agency is the best explanation of the observed FSCI.
    d –> Is this properly subject to the OBJECTION excerpted: “your premises don’t lead that conclusion having the 100% confidence inherent in how it is written”?
    e –> ANS: Plainly not – we are dealing with empirically anchored inferences on IBE, as does all of science. In short this is a strawman objection that flies in the face of what I have explicitly and repeatedly directly stated – and this has been going on all the way back to 109 above, Q! (One is tempted to infer to deliberate misrepresentation [and at NCSE-ACLU level, aka Dr Barbara Forrest, whom I on good reason hold to be plainly dishonest and committing what would in any other jurisdiction but the USA with its very poor libel laws be seen in court as failure to carry out plain duties of care resulting in improper damage to reputation, careers etc — that is credibly the case], but the more likely problem at this level is confusion and “seeing” what is “expected” of ID thinkers, but what is actually a distortion of what we have had to say.)

    In short, yet again we see a strawman argument. And the convoluted, factually inapt and incoherent rhetoric in the immediately following gives the game away . . .

    6] not all systems are as complex as your original scenario. Some are nowhere near as obviously complex. DaveScot was correct in mentioning that some systems can assemble from materials exposed to random events (like a tornado in a junk yard). This is even more likely (many orders of magnitude more likely) if these materials are able to exert force over a distance – like with charge. Note that this is not an argument for life-origins – it is about your claims regarding the likelihood of some basic storage and transmission systems arising without intelligent agency.

    Now, on points:

    f –> The EF explicitly excludes the cases of functional discrete state information that are insufficiently complex to be beyond the credible reach of chance on the gamut of the cosmos. To advert to this at this stage as if that is a cogent objection is a strawman.

    g –> Dave Scott explicitly observed in 124 that: “my point was that properly designed proteins self-assemble into larger complex structures. If the component parts of jet aircraft were of the same nature as proteins then they too would self-assemble. That said, to create parts that self-assemble takes MORE design, not less, more teleology, not less.”
    h –> Onlookers, kindly note that I first cited point xi at 110 above, which makes it plain that this last, as just bolded, is the precise context of my remarks. It is clearer to say that proteins have a key-lock fitting mechanism based on the coding of the amino acid chain, which makes them fold into precise shapes; then, when the appropriate proteins are clustered closely enough, they slide together and “click” to perform life-functions. Also, in clarifying the cross-purpose remarks based on not observing that point, I have excerpted point xi at least twice since and referred to it many times. In particular, the proteins have to be made first, which is where the FSCI enters the picture.

    i –> And BTW amino acids actually “prefer” to react with non-amino acids. That is part of why there is the elaborate, FSCI-reeking code-reading and chaining algorithmic mechanism in the cell to manufacture proteins as I excerpted at 116 from Wiki.
    j –> In short, yet another strawman. For, we are NOT dealing with the emergence of “basic” information-storing and -using systems – those the EF deliberately sorts out before going to an inference on FSCI to agency as its credible source, relative to what we do know on the origin of FSCI in directly observed cases – but highly sophisticated ones that are based on codes and algorithms that themselves manifest FSCI far beyond the reach of chance + necessity on the gamut of the cosmos. [THAT is why the pre-jet soup is so relevant to the postulated pre-biotic soup, which is alleged to plausibly show how proteins and DNA etc could self-assemble by chance and necessity in plausible pre-life environments. The attempts manifestly fail, and fail repeatedly, because they do not have the dynamic power to account for the origin of FSCI.]

    [. . .]

  152. 152
    kairosfocus says:

    7] These non-obvious, and relatively simple systems may or may not fall on the FCSI side of an explanatory filter. An experiment would be needed to demonstrate how well your analogy can be extrapolated to some of these non-obvious cases.
    WHAT such “simple systems” are there that manifest sufficient complexity of function to be relevant while being safely on the chance side of the EF threshold? How do these systems relate to the systems that are explicitly in view, which ARE on the beyond-the-reach-of-chance side of the extended UPB?

    Oh . . I get it: this is “climbing Mt Improbable” step by step from simple to complex again.

    Basic problem: the threshold of relevant functionality for the storage element mainly in view, DNA [RNA is in effect an extension of DNA, and the five key bases just implied, A, C, G, T, U, are hard to get to in any plausible pre-biotic soup], is MANY ORDERS OF MAGNITUDE beyond the UPB. DNA is of typical scale at least 300k – 500k elements, to code for dozens to hundreds of proteins of typical length 300 monomers. 4^300k ~ 9.94 *10^180,617 cells in the config space, so islands of functionality of any reasonable scope are utterly lost in the config space of mostly non-functional cells. You have to get to the shores of life function to climb up to more and more effective function!
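
    For onlookers who want to check that arithmetic, here is a minimal sketch in Python (a choice of convenience for illustration, not anything used in the thread), working on a log scale:

        import math

        bases_per_position = 4      # A, C, G, T
        chain_length = 300_000      # lower end of the DNA scale cited above

        # log10 of the number of possible configurations, i.e. 4^300,000
        log10_configs = chain_length * math.log10(bases_per_position)
        print(f"4^{chain_length} ~ 10^{log10_configs:.0f}")
        # prints roughly 10^180618, i.e. about 9.94 * 10^180,617 as stated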

    And the experiments in question relevant to the microjets case are as simple as EXPT A:

    how long will you need to wait to get a drop of ink dripped into a glass of water to re-assemble by chance once diffused?

    [Ans: reliably, longer than the lifespan of the cosmos. Indeed, as Q knows [notice how above he never ever denied having at least a minor in physics or at least enough College-level physical science to understand what is going on] this is very close to the basis for the highly successful, enormously empirically well-supported, statistical thermodynamic version of the second law of thermodynamics.]

    Similarly, EXPT B:

    Collect the letters for a suitably long statement in English, say the last paragraph just above from EXPT A, which has in it by my count some 320 26-state letters. This is comparable in configurational complexity to a typical protein molecule.
    Now, take the letters and put them in a box, like Cicero, but don’t simply shake and drop them on the ground: take one out at a time, 320 times in succession, replacing the letter and shaking again after jotting down each result. How long on average will you have to wait for the 320 letters to give, not even the same as the above, but ANY coherent sentence of at least 150 characters’ length? [Information theorists will immediately tell us: longer than the observed universe credibly has existed or will exist. Indeed, you may well see up to 6-letter words, but once you go to 7-letter words and beyond, the performance will sharply drop off, as really long words are not so common in the space of possible configurations: 26^7 ~ 8.03*10^9, whilst the entire vocab of English when I was a HS student was routinely said to be 800,000 words.]
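
    A quick numerical check on the figures in EXPT B, as a minimal sketch (Python is used here purely for convenience; it is not part of the experiment as described):

        import math

        # number of distinct 7-letter strings over a 26-letter alphabet
        print(26 ** 7)                    # 8031810176, i.e. ~8.03 * 10^9

        # even granting the full ~800,000-word vocabulary, only a tiny
        # fraction of 7-letter strings could possibly be English words
        print(800_000 / 26 ** 7)          # ~1e-4, and genuine 7-letter words are far fewer

        # configuration space for a 150-character, 26-state string
        print(f"26^150 ~ 10^{150 * math.log10(26):.0f}")   # ~10^212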

    Remember, in the relevant pre-biotic soups we need to get to dozens of proteins, the DNA to template them, and the algorithms and code and machinery to express all of the same – all by chance to get to the first level of functionality, thence chance + competition to get to enhanced functionality.
    Speculative models based on just-so stories that glide over such challenges are metaphysics, not science.

    So we can safely take “non-obvious” to mean: not observed but here are some nice just-so stories.

    8] As a side note, even Galileo backed up his thought experiments with actual experiments. That’s one of the main differences between him and Plato.

    Strawman again.

    As onlookers can confirm by looking above at, say, 116, then Q at 117, then my further remarks at 118, I showed that many of Galileo’s most scientifically persuasive and telling cases were precisely where he took thought experiments that were not actually performed, or where he – as with the U-trough that led to the principle of inertia – idealised the real world and went beyond where experiment can go.

    Since this is already a multi-parter, here is 118:

    a –> Consider [Galileo’s] U-troughs and metal balls rolling down then “trying” to get back up to their original level as he made the tracks smoother and smoother.

    b –> He then argued that in a perfectly smooth track, the balls would rise back to their original level. (Have you, Q, ever seen a perfectly smooth and actually friction-free trough? [Or even a friction-free air track or air table?])

    c –> He then made the next in-thought extension: flatten out the rising arm, so that the ball is on a smooth in effect infinitely long track and never gets a chance to rise back to its original level. Thus, Galileo arrives at and in so doing warrants in effect Newton’s First Law of Motion [i.e., in our terms, of MOMENTUM], the law of inertia – BY EMPIRICALLY ANCHORED THOUGHT EXPERIMENT. (Actually, if memory serves, he mistakenly thought that the ball would go in a circle — going a bit far with the fact that the Earth has been known since 300 BC to be a sphere.)

    d –> This brings us to a slippery phrase that, as one who knows about scientific inference to best explanation [IBE], you MUST know is utterly inappropriate to such a context for science: proof of new knowledge. [a cite from Q at 117] Scientific knowledge of consequence is provisional, and empirically testable and reliable, not “proved.” AND THE SLIPPING IN OF SUCH A LOADED CONCEPT TO PREJUDICE THE CASE IN A SITUATION WHERE YOU DON’T WANT TO GO WITH THE IMPLICATIONS OF IBE, IS SELECTIVE HYPERSKEPTICISM.

    Q, one could be tempted to suggest that the rhetorical stratagem is to wait till enough time and commentary has passed to slip back in a refuted point as if it stands.

    But the better explanation is that you have either forgotten or simply never noticed the point in the first place.

    So, yet another sadly irrelevant, misunderstanding based strawman.

    Onlookers, at length the rhetorical pattern is painfully clear: exactly what I spoke to in 142.

    GEM of TKI

  153. 153
    jerry says:

    I have two comments having just read StephenB’s last post.

    Wow, what a great post that does not necessarily need any reference to what has preceded. I have not followed this thread because of the time and interest in the topic itself. However, I often think that it would be great if someone would take the time to summarize the arguments made on these long involved threads. I know it is not going to happen because we all have time restrictions. StephenB’s post is short and well worth preserving for future discussions so the lot of us don’t spend 100+ comments rehashing the same material.

    My second comment is that the audience for our comments is not just those of us here who actually write comments, because that is a very small group. My experience is that few are moved very much to change their stripes based on discussions here. Instead they go to the wall, often with inanity and lack of logic, to defend their beliefs. Can anyone name a Darwinist who has modified their views based on the comments expressed here about ID, or even conceded that ID has a point? They inevitably leave quickly or get banned because of their intransigence or behavior. The threads are for those who don’t comment, but read the various arguments that are made and are open to a mind change. They are probably a small number of the viewers, since I believe that most of the non-commenting viewers are members of the choir, just reinforcing their belief system but unwilling to interject themselves into a debate.

  154. 154
    kairosfocus says:

    OOPS:

    Badly phrased point:

    Is it contingent but sufficiently lacking in complexity that it could be accounted for by chance on the gamut of the cosmos as a whole acting as a vast lottery running at the rate of one trial every 10^-45 seconds, with one marked atom, and you could plunge your hand in anytime, anywhere and have a reasonable chance to pluck out that marked atom? If so, then, chance.

    I meant that if the odds of winning this just-identified lottery are worse than or comparable to the odds of the configs in question coming about by chance, then chance is deemed — very generously IMHBCO — a reasonable explanation under the terms of the EF.

    Only if the odds of getting to functionality in the config space are much worse than that will the filter infer to agency as the most credible explanation.
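
    To put rough numbers on the probabilistic resources behind that lottery picture, here is a back-of-envelope sketch. The round figures (about 10^80 atoms in the observed cosmos, about 10^17 seconds of cosmic history) are commonly cited estimates supplied here for illustration, not figures taken from the thread:

        import math

        atoms = 1e80                 # rough count of atoms in the observed cosmos
        seconds = 1e17               # rough age of the cosmos in seconds
        trials_per_second = 1e45     # one trial every 10^-45 s, as in the lottery above

        total_trials = atoms * seconds * trials_per_second
        print(f"available trials ~ 10^{math.log10(total_trials):.0f}")   # ~10^142

        # if the target configurations are rarer than about 1 in 10^150,
        # even that many trials gives a negligible expected number of hits
        print(1e-150 * total_trials)   # ~1e-8 expected successes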

    StephenB, also the point is that Q’s mind guiding his body’s servosystems [to type up his posts for instance] may be just lucky noise generating the intelligent director templates to guide his i/o processor to drive his hands etc to type out the messages. So it is possibly chance and necessity only at work, what with quantum indeterminacy to guarantee that random behaviour is conveniently accessible to the neuronal networks, to feed into the deterministic dynamics!

    Or, plainly speaking, we are right back at the incoherence of evolutionary materialism [as I said at 106 in the epistemology thread that led to this one]:

    [evolutionary] materialism . . . argues that

    [a] the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature. Therefore, [b] all phenomena in the universe, without residue, are determined by the working of purposeless laws acting on material objects, under the direct or indirect control of chance.

    But [c] human thought, clearly a phenomenon in the universe, must now fit into this picture. Thus, [d] what we subjectively experience as “thoughts” and “conclusions” can only be understood materialistically as unintended by-products of the natural forces which cause and control the electro-chemical events going on in neural networks in our brains. (These forces are viewed as ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance and psycho-social conditioning, within the framework of human culture.)

    Therefore, [e] if materialism is true, the “thoughts” we have and the “conclusions” we reach, without residue, are produced and controlled by forces that are irrelevant to purpose, truth, or validity. Of course, the conclusions of such arguments may still happen to be true, by lucky coincidence — but we have no rational grounds for relying on the “reasoning” that has led us to feel that we have “proved” them. And, if our materialist friends then say: “But, we can always apply scientific tests, through observation, experiment and measurement,” then we must note that to demonstrate that such tests provide empirical support to their theories requires the use of the very process of reasoning which they have discredited!

    Thus, [f] evolutionary materialism reduces reason itself to the status of illusion. [g] But, immediately, that includes “Materialism.” For instance, Marxists commonly deride opponents for their “bourgeois class conditioning” — but what of the effect of their own class origins? Freudians frequently dismiss qualms about their loosening of moral restraints by alluding to the impact of strict potty training on their “up-tight” critics — but doesn’t this cut both ways? And, should we not simply ask a Behaviourist whether s/he is simply another operantly conditioned rat trapped in the cosmic maze?

    In the end, [h] materialism is based on self-defeating logic, and only survives because people often fail (or, sometimes, refuse) to think through just what their beliefs really mean.

    As a further consequence, [i] materialism can have no basis, other than arbitrary or whimsical choice and balances of power in the community [that is, might makes “right”], for determining what is to be accepted as True or False, Good or Evil. So, [j] Morality, Truth, Meaning, and, at length, Man, are dead . . .

    Have fun getting out of this vicious spiral, Q.

    GEM of TKI

  155. 155
    kairosfocus says:

    Jerry:

    The posts and threads also serve as a tutorial cum forum that exposes people to the arguments and issues on both sides of the debates.

    When we can see that one side is on the merits and one is on the strawmen, that is telling us something. (That is why, after first arguing the ID case tentatively in another blog, I came to see that the argument is seriously compelling. That I saw seriously educated, deeply informed, articulate and technically competent people routinely getting hysterical and abusive in defence of their evo mat views was even more telling.)

    GEM of TKI

    PS: have a look at Rom 1:19 – 25 and 28 – 32. We have been down this sort of road before as a culture, and it would be wise to see where, on the record of history, it is likely to lead.

  156. 156
    Q says:

    StephenB, in 147, mentions “You don’t seem to appreciate what Barry A has accomplished with this post. In effect he has approached the mind/brain problem from both a scientific and a philosophical vantage point.”

    Actually, I do appreciate BarryA’s post. My approach to the analysis is a bit different, and I arrive at somewhat different conclusions. Not firm conclusions like your suggestion that I “disavow the existence of the mind”. But, instead, the conclusions I reach require that we admit some level of ignorance (not only incredulity), i.e. that our explanations are innately limited.

    This is the main reason I object to KF’s posts, and less with BarryA’s posts. I read BarryA’s conclusion as consistent within his given scenario. KF’s are not. He breaks the rules of good science with his extrapolations.

    My main interest in the pursuit of ID is not to explore the obvious problems in which there is an obvious solution. Sure, computers are so simple as to not exhibit what is called qualia. Sure, human brains are so complex that they do. I want to see how those claims hold up when extended into the not-so-obvious domain. Such a domain does exist, because things occur in a continuum – one of my givens in approaching these problems. By extrapolation, we can expect to see some brains that are less complex, and some computing machines that are more complex, and at some point the lines of obviousness will be blurred. That is where things will be really interesting for ID.

    At that stage, experiments will be needed. Even some refinement of terms, possibly. But, while simple thought experiments may provide extrapolated clues, they will not be able to provide a sufficiently confident answer.

    This is not a strawman argument, as KF is insisting. It is essential to basic tenets of ID – the filters aren’t cast in stone, they are built around probabilities, and we don’t at this time know on which side of the filter every situation resides. Obvious situations, sure. But not all.

    This is also not to state that I have concluded that brain and mind are or are not separable. In obvious cases, there are good arguments.


    KF, continuing to defend his extrapolations, in 152 wrote: “I showed that many of Galileo’s most scientifically persuasive and telling cases were precisely where he took thought experiments that were not actually performed, or where he – as with the U-trough that led to the principle of inertia – idealised the real world and went beyond where experiment can go.”
    But the experiments were performed. They were performed in many different ways. When they yielded results that differed from the prediction, the experiments were refined. The results of the experiments, although not demonstrating the ideal model Galileo may have described, have led to refinements to asymptotically reach the ideal model. And, the explanation of the ideal model was refined so that even you now agree that it is unattainable.

    KF asks “Have you, Q, ever seen a perfectly smooth and actually friction-free trough? [Or even a friction-free air track or air table?]” No. And neither have you, I expect. Which is exactly the limitation of extrapolations I am discussing. It ties right back to the epistemology thread. In Rumsfeld form, we know what we know and we don’t know what we don’t know.
    —–
    But, as others have mentioned, this seems to be quite a repetitive argument. I’m not conceding to KF, but I think I’ve made my point well enough that observers can see that KF’s point and mine are not totally contradictory. His arguments are quite fine for the black and white cases. I’m suggesting a more refined approach for the gray areas.

  157. 157
    StephenB says:

    —–Q “This is also not to state that I have concluded that brain and mind are or are not separable. In obvious cases, there are good arguments.”

    Why is it that those who would deny the existence of the mind express the point indirectly by saying that they have not yet reached a decision on the matter? That you cannot make the affirmation is telling enough. That you express no convictions about free-will agency is even more telling. Do you not appreciate the connection between your reluctance to acknowledge the non-material realm and your skepticism about ID?

    ID cannot prove the existence of minds; it assumes the existence of minds. The brain is nothing but a physical organ capable only of sense impressions, extended in the world of physical realities. A design inference, or any concept for that matter, cannot make its appearance there because it has no weight or mass. Only a non-material mind can receive a non-material thought, and only those who believe in both can accept any such thing as a design inference.

    It seems that you have ruled out the concept of intelligent agency a priori, and all of your objections are extensions of that prior commitment. No amount of scientific evidence or argument from analogy can overcome your presupposition that the explanatory filter simply may not cover all the bases, or that its component parts cannot be successfully isolated from one another, or that they cannot be analyzed in a sequential manner. Like many in your camp you are looking for that fourth alternative—some explanation that does not involve either law, chance, or agency. As W. C. Fields once put it, you are “looking for loopholes.”

  158. 158
    Daniel King says:

    StephenB:

    ID cannot prove the existence of minds; it assumes the existence of minds.

    Why is it necessary to posit a “mind”?

    Isn’t this “begging the question”?

    Isn’t the aim of ID to identify “design” in nature (independent of human agency)?

    Isn’t ID agnostic about what brings or brought about that design?

  159. 159
    DaveScot says:

    stephenb

    A concept has no mass or energy, but can you explain to me how it may exist without mass or energy to encode it? How is a thought stored if not in patterns of matter and energy? How can a thought exist in a perfect void? I’m not claiming it can’t. I’m claiming that there is no known way that thought can be independent of matter and energy. That said, I acknowledge that physics is incomplete at the extremes of large and small. We have no quantum theory of gravity, and we have no idea what comprises what’s believed to be the bulk of the universe, as revealed by observation of the motions of large objects such as galaxy clusters and at the far fringes of the observable universe. This unknown stuff is called dark energy and is thought to homogeneously permeate the universe, revealing itself as an anti-gravitational force cumulatively over great distances. If this dark energy permeates the universe and affects normal matter and energy through gravity, and we don’t have a quantum theory of gravity, who’s to say that this force isn’t quite homogeneous at the smallest scale and isn’t enough to tip the scales of quantum uncertainty in favor of one result or another? This could very well be the basis of what is usually thought of as the supernatural. If we had a more complete physics, what’s supernatural under today’s physics might be quite natural in light of tomorrow’s better understanding. That’s par for the course for science – turning the supernatural into the natural.

  160. 160
    Q says:

    StephenB asks, in 157, “Do you not appreciate the connection between your reluctance to acknowledge the non-material realm and your skepticism about ID?”

    Although that is a loaded question, I say “Yes, I do”. It is requisite to have some level of skepticism about sciences, even ID. DaveScot and Daniel above give good reasons that the skepticism, within the framework of a science using the scientific method, is appropriate.

    StephenB says “Like many in your camp you are looking for that fourth alternative—some explanation that does not involve either law, chance, or agency.”
    Actually, I’m much more interested in the demonstrable boundaries between each. Only if the pursuit of the boundaries leads toward a fourth explanation should one be considered, IMO.

  161. 161
    StephenB says:

    —-Daniel King: “Isn’t ID agnostic about what brings or brought about that design?”

    Sure. I refer not to the entity that brings about the design but to the person who perceives it. I submit that from a philosophical perspective you can’t perceive a design with a brain; you need a non-material mind. I don’t think science can comment on the subject—at least not yet.

  162. 162
    StephenB says:

    —–DaveScot writes, “A concept has no mass or energy but can you explain to me how it may exist without mass or energy to encode it? How is a thought stored if not in patterns of matter and energy? How can a thought exist in a perfect void? I’m not claiming it can’t. I’m claiming that there is no known way that thought can be independent of matter and energy.”

    From the scientific perspective, I don’t think we can comment on it, at least not yet. However, it seems that, philosophically, we must posit a non-material mind that has the capacity to resist and eventually overrule the brain’s impulses and influence the physical world of cause and effect. It seems to me that the non-material mind produces and receives non-material thoughts, while the material brain produces and receives electrical impulses and other physical properties. In other words, the mind and the brain each produces something in its own image and likeness; each interacts with and influences the other; each plays a role in acquiring knowledge. A non-material mind senses an object outside of itself and receives a non-material image that corresponds to a non-material form present in the object.

    Naturally, the mind is dependent on the brain in some fashion, but it can also exert its own influence on the brain and all other parts of the body. Obviously, I am assuming something like a spiritual soul, consisting of an intellect and will. We can’t accept the simplistic formula of materialism, because we have to reconcile causation with responsibility and free will. I am also assuming that both the mind and the brain play a role in storing information and retaining memories. We have two choices: either we accept the contradictions and absurdities of materialism (monism) and determinism, or we embrace the paradox of mind/body dualism. I don’t think there is a third choice. So, yes, I am claiming that a thought MUST be independent of matter and energy in order not to be their slave.

  163. 163
    magnan says:

    It keeps going on, very interesting but ignoring the actual evidence. As I posted in #63 (and backed up with a lot of data in #107), “Although this philosophical/metaphysical debate is interesting, it is basically dry and empty, since it blandly ignores a mountain of empirical evidence for a “spiritual” or nonphysical component to man’s consciousness.”

  164. 164
    kairosfocus says:

    H’mm:

    Comment in mod — why I know not.

    GEM of TKI

  165. 165
    StephenB says:

    magnan: It seems to me we are in total agreement here. I don’t understand the objection.

  166. 166
    kairosfocus says:

    Folks:

    I suspect a word list issue, so I will try this last time:

    ______________

    I am getting just a little excited over emerging sci-tech and worldview level possibilities in this thread . . .

    As I look at the responses, once the “brain as i/o control processor” version of the Derek Smith cybernetics model of autonomous, self-directing, efference copy/predictive model/reafference intelligent servo-control systems [cf his Fig 2] was put on the table as a way to look at the issue, several things have jumped out at me:

    I: DESIGNER’S ITCH: I – as often happens — have a major case of “designer’s itch” as I see the way that the DS model correlates very fruitfully with what I know from many relevant fields in control — and as far away as athletic visualisation for peak performance, muscular memory and even education on the classic taxonomy of goals for the psychomotor domain — tremendous possibilities. [Contrast the sadly defensive reactions of Q, who cannot even acknowledge that my whole phil of science is based on warrant through provisional inference to empirically anchored inference to best explanation. Nor can he even acknowledge that Galileo used thought experiments that went beyond the empirical in scientific explanation and warrant – i.e Q is indulging not only in strawmen but repeated selective hyperskepticism thereby. Both, as I have explicitly pointed out repeatedly. Then, even sadder, consider that he is a sample of the mindset of today’s Evo-Mat crippled HS science teacher! Any way, let us move on to serious stuff . . .]

    II: FROM CONCEPTUAL MODEL TO DESIGN: I can see how – based on my design and analysis background and my interest in Mechatronics as a breakout, integrative, synergistic design paradigm [I once designed a B.X level Engineering degree programme around mechatronics . . .] — to feed in detailed architectures and dynamics as well as modelling to actually design, BUILD and test one of R Daneel’s early ancestors. (BTW, the first diagram at Wiki is worth the click.) It is for instance telling me that the first, easiest place to do something like this in a mobile system is probably an aerial vehicle, as the environment in the air is relatively obstacle-free and easy to monitor for potential obstacles; though of course, nap-of-the-earth stuff for, say, surveillance and for agricultural purposes is also implicated. (And I have a lot of potentially interesting economic uses for autonomous unmanned aerial vehicles. [Unfortunately, this potential also speaks to possibilities for cruise missile technology, and explains for instance the performance characteristics of, say, the Tomahawks of 1991 and 2003 etc. Thence, sadly, the homebrew version too. The world’s defence systems environment has got a lot more complex, evidently . . . but we as citizens need to be aware of that, and that it in turn implies a different stance on defence policy and praxis, given, e.g., what can be brewed up in the equivalent of a home-brew beer kit. On the good side, so can biofuels, if ever we get algae fuel going! (Let’s not be too blue . . .)])

    So much for the “ID-is-a-science-stopper” NCSE etc mantra!

    Sci-tech development issues . . .

    III: SCI-TECH STARTERS, NOT STOPPERS: In short, the DS-type model is a science and technology starter, not a science stopper! That is, including the ID challenge version – [a] can we actually BUILD a self-conscious, AI-based robot using the Intelligent Director-i/o processor model? [We can only try . . . i.e. here we can go do some real-world experiments – though let us note that we ourselves can arguably be seen as examples of the DS class of sophisticated servo systems.] Or, [b] if not, can we at least build an autonomous one that will exhibit environmentally effective, goal-directed behaviour, cost-effectively? And if so, [c] where will that take technology — and science — and phil — too?

    IV: MODELS AND REALITIES (PRESENT AND PROSPECTIVE): Thence, too, the point that a model world is potentially empirically descriptive and can be sufficiently predictive to become the basis for real-world creative action, once a system architecture is logically and dynamically valid, and compatible with known or foreseeable materials and sub-system technologies. In short, the classical sci-tech agenda is: describe, explain, predict, control — or at least, influence.

    V: DESCRIBE- EXPLAIN- PREDICT- CONTROL: The DS model passes this test with flying colours: (i) it describes the planning-executing functions of a certain class of known autonomous entities [us humans individually and as Ac 27 summarises, in the community of people having to deal with a potentially hostile environment using socio-technological systems and governance mechanisms . . . H’mm: a democratically governed collective Intelligent Director as systems architecture – debate the options and try the best on balance across votes . . .?], and (ii) it is potentially fruitful of innovating future tech and associated science.

    That leads to emerging phil considerations . . .

    VI: MINDS, BRAINS, AND INFORMATION INTERFACES: In that context, it is obvious that the key interface between mind [as intelligent director] and brain [as i/o control processor] is INFORMATION. [a] Once an efferent copy is there on the hardware, it can then drive the algorithms for effecting and for path-differential monitoring, feedback and adaptations to contingencies. In turn, [b] such an efference copy/predictive model is based on learning and generalisation from experience – suggesting [c] neural network type architectures for at least a part of the more sophisticated levels, and also that [d] the i/o processor, across time, provides key sensor data that helps construct a world-model to guide the Director.

    VII: PHIL/WORLDVIEW IMPLICATIONS: The Derek Smith Intelligent Director-I/O Processor-Servosystems cybernetics model is of course compatible with a materially expressed director, but also points a way to what we think we experience: thoughts that are self-willed and act into the cause-effect chains of the material world but are independently intelligent — not determined/driven and wholly reducible to/“explained” without residue by some blend of chance and necessity acting across aeons from hydrogen to humans. In short, it is independent of the ontological debate over monism/dualism, once the role of information is acknowledged. We can use the information-level expression of this to get on with very interesting sci-tech stuff. But also, it plainly puts the dualistic view that there is a sufficiently self-determining, actively creative and intelligent mind that interacts with the body back on the table as a seriously “discuss-able” – and remember that we don’t need to commit to the reality of an idea to discuss it fruitfully on a modelling what-if basis and to embed such in prospective technologies — conceptual-analytical option in a sci-tech society. That potency, of course, is why it excited the sort of dismissive remarks and strawman attacks seen above.
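
    To make point VI a little more concrete, here is a toy sketch of an efference copy/predictive model loop (an illustration only, in Python, with made-up numbers and helper names; it is not Smith's own model): the director issues a command, a retained copy of that command feeds a forward model that predicts the expected sensory return, and the mismatch between prediction and actual reafference isolates the externally caused disturbance, which the loop then compensates for.

        import random

        def plant(position, command, disturbance):
            # the controlled limb: moves part-way toward the command, plus disturbance
            return position + 0.5 * (command - position) + disturbance

        position, target, correction = 0.0, 10.0, 0.0
        for step in range(20):
            command = target + correction            # willed command from the director
            efference_copy = command                 # retained copy of the outgoing command
            predicted = position + 0.5 * (efference_copy - position)   # forward model
            position = plant(position, command, random.uniform(-0.3, 0.3))
            reafference = position                   # what the sensors actually report
            # prediction minus reafference isolates the externally caused part
            correction += 0.8 * (predicted - reafference)
        print(f"final position ~ {position:.2f} (target {target})")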

    So, can we look at how we can develop interesting intelligent design oriented sci-tech, while we seriously look also at the worldview-level issues?

    GEM of TKI

  167. 167
    BarryA says:

    This morning I woke up and was lying in bed thinking about stuff before I got up. I was thinking about a particular relationship I have with a wonderful person. At the moment there are some problems, and a part of me (the baser part) was saying “time to distance yourself or terminate the relationship altogether.” Another part of me said, “that thought is unworthy; reject it.”

    When I teach, I frequently tell my students that with respect to almost every hard decision there is a conflict between what you “feel” you want to do and what you “know” is the right thing to do. When that conflict arises, as it must at some point for all of us, always go with what you “know.” Feelings ebb and flow. One might have an almost overwhelming impulse today that is gone a week from now. Ethical knowledge is the only stable and reliable foundation upon which to base our decisions.

    Our culture says, “follow your heart.” Our culture has it exactly wrong. Your heart is fickle, and it will lead you astray. To me the essence of love is always choosing the other no matter whether I “feel” like it at any given time or not. So many marriages fail for lack of understanding of this basic concept. My feelings tell me I don’t like this person I’m married to right now; they also tell me that will never change. I must get a divorce. Nonsense. If I choose to stay and work on the marriage, more likely than not, a week or a month from now my feelings will follow. (Obviously there are limits to this, e.g., a physically abusive relationship).

    So this morning I said, “teacher, teach thyself.” You know beyond the slightest doubt that this person is wonderful. Go with what you know, and not what you happen to feel at this moment, and that is what I chose to do. In other words, I chose to love.

    What does my internal debate have to do with this post? Everything. The fact that I can have an internal debate at all demonstrates that I have a mind that is separate from my brain. My brain sends me impulses such as “I don’t feel good about what’s going on in this relationship just now. End it.” My mind says, “I reject that impulse. It is unworthy.”

    The materialist will say it is just my Freudian id competing with my superego. To which I say, bunk. Freud’s entire project was an attempt to explain (or explain away) in materialist terms that which is obvious to everyone with eyes to see: We have a dualistic nature. Our mind (or spirit if you like) often wars with our brain (or body). This has been known since ancient times (see Romans chapter 7). Freud is thin gruel indeed in relation to this rich tradition of spirit/body duality, and in the end Freud simply makes no sense. How can matter oppose itself within the same cranium? The very thought is absurd.

  168. 168
    Q says:

    KF, in 166, tossed out the following parenthetical: [Contrast the sadly defensive reactions of Q, who cannot even acknowledge that my whole phil of science is based on warrant through provisional inference to empirically anchored inference to best explanation. Nor can he even acknowledge that Galileo used thought experiments that went beyond the empirical in scientific explanation and warrant – i.e Q is indulging not only in strawmen but repeated selective hyperskepticism thereby. Both, as I have explicitly pointed out repeatedly. Then, even sadder, consider that he is a sample of the mindset of today’s Evo-Mat crippled HS science teacher! Any way, let us move on to serious stuff . . .]

    Talk about defensive reactions! That was totally a tangent to the point you were really making, just to poke a stick into something you don’t like – like criticism.

    Besides, your defensiveness has led you to misrepresent my point. Try again, sometime later.

  169. 169
    Q says:

    KF in 168, points out “In that context, it is obvious that the key interface between mind [as intelligent director] and brain [as i/o control processor] is INFORMATION.”
    Could you add a bit more to that? In the computer engineering model, information is passed across the interface between modules. The interface is something else, like in a computer it can be a section of the stack that can receive and deliver the data/information. I was expecting that your description would be leading to “the key interface between mind [as intelligent director] and brain [as i/o control] transfers information.”

    Could you explain the extension to the model you are suggesting so that that which is being transferred becomes the medium for transfer? I.e. how does the information simultaneously serve as the interface? When you mention “an efferent copy” of the information, are you suggesting that information has the innate capability to flow from brain to mind? Or, is a separate process/interface needed for the flow between brain and mind to occur?

  170. 170
    kairosfocus says:

    Okay . . .

    First, Patrick et al, thanks for the help on 164 and 166. Ever mysterious are the ways of lady Akismet!

    BarryA, your remarks in 167 on the EXPERIENCE of being an agent with a mind of his own are ever so apt, if one would but listen.

    As to Q at 168 -9:

    1 –> First, kindly go READ Derek Smith as a 101 on the subject, then come back to us on his points of substance.

    2 –> In particular, attend to Fig 2 and its context, which more than adequately answer the questions you have. There you will see the role of memory storage, information transmission and the creation of efferent copies and creative prediction of the intended servosystem path by

    “a higher order controller (far left [NB This is what I have termed the Intelligent Director –i.e the intended AI]) [which] replaces the external manual source of command information. This means that there is no longer any high-side system boundary, making the new layout self-controlling. That is to say, it is now capable of willed behaviour, or “praxis.” “

    3 –> As anyone who reads my always linked Section A and notices my general diagram of a communication system will recognise, information in the relevant system- functional sense does not flow automatically: it is created, encoded, transmitted, received, decoded and used. [All of which in our observation require intelligent action to set up.]

    4 –> As to the puerile attempt at turnabout rhetoric in 168, I will waste no more time on such save to briefly point out that onlookers can see for themselves just who is playing rhetorical games with vague dismissals and who is seriously taking up the challenge on the AI issue, and sourcing, applying and putting up a serious model that provides a platform for technological and scientific development; then raising implications and issues on the worldviews core for that development.

    5 –> Maybe some plain speaking will help: Q if you don’t understand block diagram algebra and/or signal flow graphs, register-transfer algebra, and associated issues in the complex frequency domain view of system dynamics [one-sided [0 to + infinity] Laplace Transforms and the application of the s-variable to transfer function analysis is a start, with Z transforms a help (you can simplistically view the Z as a unit time delay element and revert to difference equations)], as well as “the assembly language instruction’s view” [and register level view in general] of relevant technological systems, you are out of your league here; apart from taking time to listen and learn.

    6 –> On the other hand, if you DO understand these things but insist on making the sort of objections above, you are being frankly mischievous, and not in the nice sense. [That is, are you simply playing the troll?]
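
    Two small sketches may help onlookers with points 3 and 5 above; both are in Python purely for convenience, and neither is drawn from the thread itself. First, on point 3: every stage of the communication chain is an explicitly configured step; nothing flows automatically.

        # every stage below had to be set up; none of it happens by itself
        message = "the trough experiment"           # created
        encoded = message.encode("utf-8")            # encoded under an agreed convention
        received = bytes(encoded)                    # transmitted and received (stand-in channel)
        decoded = received.decode("utf-8")           # decoded under the same convention
        assert decoded == message                    # used: works only because sender and
                                                     # receiver share the coding convention

    And on point 5, a taste of the transfer-function view: a first-order lag G(s) = 1/(s + 1) and its step response. This sketch assumes the scipy library, a tooling choice made here for illustration only.

        import numpy as np
        from scipy import signal

        # first-order lag, written as numerator/denominator in powers of s
        G = signal.TransferFunction([1], [1, 1])
        t, y = signal.step(G, T=np.linspace(0, 5, 200))
        print(f"step response after 5 time constants ~ {y[-1]:.3f}")   # approaches 1.0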

    ++++++++++

    Gentlemen, let’s look at how we can map out the exploration ahead: Target R Daneel and co — or at least his first intelligently designed evolutionary ancestors.

    And, while we are at it, let us use the issue of the Intelligent Director to reframe the issue of the mind and intelligent agency in terms that can become practical and at the same time open up a way for us to better understand — or should that be, “appreciate” — the enduring and profound mystery of mind and brain.

    For instance, is quantum-theory indeterminacy the wedge that allows mind to insert information into brain? Or, what?

    Certainly, once such info is in brain (or i/o control processor more generally), we can see from the Smith model how efference copies and predictors can be used to drive the servosystems of action in the real world, starting with things like speech and typing. Sensor suites, suitably processed and managed on a differential basis relative to expected/predicted, would then allow for intervention on management by exception.

    BTW, this also fits in nicely with the Weber-Fechner law on how sensor response is proportionate to fractional changes in the sensory inputs, i.e. the body’s nervous system response is logarithmic. [12 decades worth of compression, i.e. our senses can carry signals in a ratio of 10^12:1 as dynamic range.]

    For vision I recall the update period is about 1/8 second, so that 16 frames per second starts to look like motion. The eye and the ear are of course classic sensor arrays, and that leads to discrete Fourier transforms as a useful tool for analysis. [About 3 – 6 kHz is good enough as an analogue band for recognition of speech. Video in colour classically takes about 6 – 8 MHz. Modern digital schemes are a lot better than that, allowing 1 sample/sec to take up better than (i.e. less than) a Hz of bandwidth. On control, Astrom-Wittenmark’s rule of thumb is that it is nice to have about 6 samples per significant fastest rise time in a digitally controlled system, not the 2 * max freq for classic communication systems.]
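
    A couple of quick numerical illustrations of the last two points, as a sketch only (the 10^12 dynamic range and 3 kHz figures are taken from the remarks above; the rise time below is a made-up example value):

        import math

        # 12 decades of dynamic range expressed in decibels
        print(10 * math.log10(1e12))          # 120.0 dB

        # Weber-Fechner: perceived magnitude grows with the log of intensity,
        # so equal intensity *ratios* give equal perceptual steps
        for intensity in (10, 100, 1000):
            print(intensity, math.log10(intensity))

        # sampling rules of thumb: ~2 x max frequency for communications,
        # ~6 samples per fastest significant rise time for digital control
        speech_band_hz = 3_000
        print("comms sampling >=", 2 * speech_band_hz, "samples/s")
        rise_time_s = 0.01                    # example value, not from the thread
        print("control sampling ~", 6 / rise_time_s, "samples/s")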

    And if this all looks like I am thinking of reverse engineering the human body considered as a bio-tech robot: Of course! (We know what works so let’s start from there!)

    [Notice, I here distinguish between the FACT of mind which is a matter of empirical observation and experience, and coherent and factually adequate theories of mind — which we do not really have. Yet.]

    GEM of TKI

    PS: AIG, where are you? I’d love to hear from you on the issues now on the table, as an AI active practitioner. (After all, I am here just beginning to get my feet wet . . .)

  171. 171
    kairosfocus says:

    PPS: Found a nice intro on transfer functions and block diagram analysis here.

  172. 172
    Q says:

    KF, in 170, suggests [That is, are you simply playing the troll?]
    No. But I’ve been thinking about our different approaches to the issues. And, I have serious interests in the scientific aspects of ID.

    Your approach, I am thinking, is explicitly dualistic. In contrast, I’m not excluding dualism, but have narrowed my analysis of the issues, for a specific reason. That is, most of the claims and issues of ID in which I am interested are materialistic. They are about the real world. ID has claims that can be substantiated through the scientific method, at least the material claims.

    This is where I’ve objected to many of your claims. Even dualism has some overlaps with materialism – we aren’t arguing mutually exclusive domains, as your rebuttals of “materialism” seem to assume. In the context of this thread, both dualism and materialism address the material issues of the brain. Related to that, as you have asserted, and to which I agree, the philosophies of dualism and of materialism can be analyzed with a purely logical methodology, just as you indicated with regard to the thought experiments. But, and this is important, to be internally consistent, these logic-based arguments only retain their validity in their logic-based domain – namely within their philosophy.

    Instead, if the conclusions from the philosophy’s methodology are used to describe material, observable events, then the line between philosophy and methodology has been crossed. When extended to the material world, logic is the method for providing predictions about observations. Those predictions then need to be validated through the scientific method, via actual observations, in order to be considered “fact”.

    I object to the insistence that logic – the tool of philosophy – is sufficient to make absolute claims about the material world. That is the basic premise I find faulty. It is not about “materialism”. It is about proper application of the tools of pure philosophy vs applied science.

    I can be done with this aspect of the discussion, if you are willing.

    BTW, I read your reference to Derek’s explanation about cybernetics. It is consistent with my experience in the field. And yes, I am very acquainted with the issues you mention – block transfer diagrams, frequency domains, feedback loops, Laplace, Fourier, etc. and many more. My questions to you in 169 were not based on ignorance (so please don’t assume such) – they are serious questions about your claims. They are not about mischief – they are about arriving at a different conclusion about computer-based persuasion. See posts 1 and 169 for my argument – perhaps you could address the differences between my position and yours about the material issues concerning the brain.

    I appreciate your reply in 170 about the flow of information, that “information in the relevant system-functional sense does not flow automatically”. If so – and without insisting I surf the web to get an answer to something you posted in this thread – does your claim in 166 that information is the interface still hold?
    —-
    As a side note, when you mention to AIG “I’d love to hear from you on the issues now on the table, as an AI active practitioner”, he’s not necessarily the only one experienced in that field who is involved in this discussion. I just don’t feel it necessary to argue based on my credentials – except when I think it appropriate – so I haven’t shared them with you.

    As an aside, you mention “For vision I recall the update period is about 1/8 second so that 16 frames per second starts to look like motion.” This issue is very much related to my experience. The update period is also related to the intensity of the light. Bright images are seen to flicker more at 16 fps, and dim images less so. There are even observations that it is related to the spatial frequency of the contents of the image.

  173. 173
    Q says:

    edit: above “They are not about mischief – they are about arriving at a different conclusion about computer-based persuasion. See posts 1 and 169 for my argument ” should be “They are not about mischief – they are about arriving at a different conclusion about computer-based issues. See posts 1 (in the persuade thread) and 169 above for my arguments.” Late night gaffe.

  174. 174
    DaveScot says:

    Q

    re the flicker/frequency issue

    Remember all the hoopla about subliminal advertising? Single frames with a message of some sort inserted into a 30fps television show aren’t consciously noticed but are noticed by the subconscious.

  175. 175
    kairosfocus says:

    Q:

    I: Perhaps it has not dawned on you that I am/we are now looking — at least as a thought experiment exercise — at the core design concept phase of what would be, if played out, a project to BUILD an AI robot.

    (BTW, on the technical side-note: the flicker effect is what leads to the use of interlaced scanning in NTSC and PAL/SECAM TV systems — interlaced [odd and even line] half-screens are presented at 60 or 50 Hz, which is above the flicker fusion threshold, at least for most of us; there are people who find 50 Hz TV unwatchable. Film-based movies, shown in theatres at 24 fps, are simply double-pulsed — each frame is shown twice, then the next frame is shown. This is also why such movies run slightly faster on 50 Hz TV systems than on the silver screen — the 24 fps film is simply run at 25 fps to match the TV rate. The way colours get rendered using the tongue-of-colour model is even more interesting on how information is being processed in our sensors and processing elements. So is the stuff on sound, using an array of tuned hairs tied to nerve cell chains to do a real-time Fourier transform on the sound as it comes in as time-domain vibrations. BTW, one type of old fashioned analogue frequency meter in effect had a chain of tuning forks that did the same.)
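
    (A quick side calculation on that speed-up point, assuming the 24-to-25 fps case on 50 Hz systems; the arithmetic is added here purely for illustration:

        film_fps, tv_fps = 24, 25
        speedup = tv_fps / film_fps
        print(f"{(speedup - 1) * 100:.1f}% faster")                       # ~4.2% faster
        print(f"a 120-minute film runs in ~{120 / speedup:.1f} minutes")  # ~115.2 minutes
    )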

    II: In that context, I have used Smith’s systems architecture model of an i/o controller processor tied to what I have termed an intelligent director that provides higher level, autonomously generated [i.e. self-determined, planned and intelligent] trajectory information to the servo controller for “efference copy and reafference”-based control.

    III: I have then drawn out the point that for the mind-body issues in linked phil, we are looking at the claim that INFORMATION is the bridge between the different elements and major sub-sections of the enlarged control loop. Thence, that this would speak to the idea that mind [of whatever ontology] and physical body interact through the common entity, information. This last, of course, is embodied in signals and used to derive the control action, but is not confined to any one specific material representation and its natural regularities (save that they must make room for contingency so that diverse signal states may be configured to convey meaning in messages and signals). For instance, the information on the sheet of paper or the computer screen is at a different level from the chemistry of paper and ink or the physics of LCD-based devices.

    We may then look at what this says to the issues of mind and body, including on monist and dualist accounts.

    IV: On the relevant monist account, evo mat, we are back at say Crick’s incoherence, as a classic expression of the materialist’s dilemma:

    The Astonishing Hypothesis is that “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules . . . . … Free Will is located in or near the anterior cingulate sulcus. … Other areas in the front of the brain may also be involved. What is needed is more experiments on animals, …

    In short, this looks very much like confusing the processor architecture and operations of a servo-system controller with the provision of the intelligent directions that give the human body its marching orders. And, even before bringing on board the specific Smith archi, I had made that point, more than once.

    More broadly, and as I for instance excerpted at 154, evo mat runs into the problem that if mind is the product of natural regularities plus lucky noise, it has no credibility as being able to reason seriously about the serious matters that would be required, for instance, to arrive at evo mat, a philosophical position. In particular, it fatally undermines the grounds for Mathematics and science.

    V: The dualist account starts from the empirical experience and observation of intelligent mind in action.

    We are not — pace the behaviourists [cf the rise of Cognitive psychology in recent decades] — to be reduced to mere stimulus-response arcs. Or, citing Neisser from the just linked:

    …the term “cognition” refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images [i.e. we are, inter alia, back to the Smith templates imaginatively constructed based on intuition and knowledge] and hallucinations… Given such a sweeping definition, it is apparent that cognition is involved in everything a human being might possibly do; that every psychological phenomenon is a cognitive phenomenon. But although cognitive psychology is concerned with all human activity rather than some fraction of it, the concern is from a particular point of view. Other viewpoints are equally legitimate and necessary. Dynamic psychology, which begins with motives rather than with sensory input, is a case in point. Instead of asking how a man’s actions and experiences result from what he saw, remembered, or believed, the dynamic psychologist asks how they follow from the subject’s goals, needs, or instincts.

    1 –> We know [provisionally of course, as is true of all empirically based knowledge] that FSCI is a reliably known artifact of such mind in action, since in all cases where such FSCI is encountered in a directly observed process of causation, it comes from intelligent agents.

    2 –> That means that we have every right to infer to agents on inference to best explanation, in further cases where we do not directly observe: the principle of uniformity of cause-effect patterns that is at the core of science.

    3 –> It is of course reliably (though provisionally) long since known — cf Plato, the Laws, Book X — on vast experience that cause-effect chains reduce to one or more of chance, necessity and agency, e.g. the “celebrated” case of the tumbling die that shows all three in independent action. [This entails that we cannot assume or assert that agency simply reduces to chance and/or necessity.]

    4 –> Onlookers, of course, on the strength of much interaction in recent days, Q wishes to posit a fourth unknown factor and/or to posit that we cannot in relevant cases reliably distinguish the effects of the three factors. The first (which he evidently denied on being pressed) boils down to an IOU backed by nothing. The second is patently false once we see the point that FSCI is a reliable sign of agency.

    5 -> For, not only do we observe that agents routinely produce FSCI, but we have access to the principles of statistical thermodynamics. And those tell us that once we deal with vast configuration spaces, we cannot credibly get to islands of functionality from arbitrary initial points, on the grounds of exhaustion of the probabilistic resources of the observed universe. So to infer to agency is to accept the best currently available, empirically anchored explanation.

    6 –> Indeed, as I point out in the always linked Section A, that is just what we routinely do on encountering say a web page: it is functionally specified and complex beyond the UPB, so we infer to agency not lucky noise as its most credible explanation.

    7 –> To suddenly turn around and reject this when we turn to cases like DNA — a complex and functional digital data string [i.e OOL and OO BPLBD], and to reject it on seeing the organised, fine tuned complexity of the physics of the life facilitating cosmos, IMHBCO typically reflects selective hyperskepticism and worldview level question begging, too often backed up by political correctness and abuse of power.

    8 –> For, we have no good logical or physical grounds for excluding the possibility of agency at the relevant points, which inter alia means that — especially as regards models of cosmogenesis — we should be open to the possibility of an intelligent and necessary being as the cause of the observed cosmos. [The ad hoc patchwork of multiverse alternative models is equally a set of metaphysical – not empirically anchored scientific — models. Cf my discussion in my always linked, section D.]

    ______________

    So, having had to again summarise what I HAVE been saying all along, in the teeth of repeated distortions, I trust that from now on, we can proceed to discuss on the merits, not the strawmen.

    GEM of TKI

  176. 176
    DaveScot says:

    kf

    TV and movie screens are not the same. A TV picture would be flicker free at 30 Hz if it wasn’t for the way it is painted on the screen – called a raster scan. The TV picture is painted by an electron beam whose point hits a phosphor coating and makes it glow (or not) at that point. The beam starts at the upper left corner, sweeps horizontally across the screen to the upper right corner, shuts off, moves down one line, returns to the left side, and paints another line. It does it fast enough that the phosphor lit up at the beginning is still glowing when it gets to the end.

    However, there is still the problem that the picture doesn’t arrive all at once. On a movie screen all parts of any single frame arrive on all parts of the screen at the same instant.

    NTSC video is 30 frames per second. Interlacing takes each frame and presents it by first painting all the odd lines and then all the even lines of any single frame. In this fashion it minimizes the effect of the delay from when the first part of the image is painted to when the last part gets painted.

    Many modern TVs don’t interlace anymore, as raster scanning is required by cathode ray tubes but not by LCD and other types of flat panel or projection TVs. They are called progressive scan, and they work by storing the transmitted interlaced raster scan in memory until the completion of a frame, then they display it all at once at 30 frames per second. This in effect duplicates the way a movie projector works. There is no flicker at 30 fps when complete images are displayed all at once.
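    To make that difference concrete, here is a toy sketch (in Python, an editorial illustration only; the line count is made up) of how two interlaced fields versus one progressive frame present the same picture lines:

        # Toy sketch: interlaced fields vs. a progressive frame (illustrative only).
        LINES = 8  # stand-in for the visible scan lines of a frame

        def interlaced_fields(frame):
            """Split one frame into two fields: alternate lines from the top."""
            yield [frame[i] for i in range(0, LINES, 2)]   # field 1: lines 0, 2, 4, ...
            yield [frame[i] for i in range(1, LINES, 2)]   # field 2: lines 1, 3, 5, ...

        def progressive_frame(fields):
            """Store both fields, re-interleave them, and show the frame all at once."""
            frame = [None] * LINES
            frame[0::2], frame[1::2] = fields
            return frame

        frame = [f"line {i}" for i in range(LINES)]
        f1, f2 = interlaced_fields(frame)
        print("field 1 (first 1/60 s):", f1)
        print("field 2 (next 1/60 s): ", f2)
        print("progressive (all at once, 30 times/s):", progressive_frame([f1, f2]))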

    I happen to know this stuff because I was responsible for, among other things, the repair to the component level of television cameras and display monitors in the military back in the 1970’s and attended classes on TV theory. While attending college, after I got out of the military, I supplemented my GI Bill college money by fixing TVs as well as calibrating and repairing high end NTSC video editing equipment used in television content production.

  177. 177
    kairosfocus says:

    hi Dave:

    Good to hear from a fellow sci-tech head. I am thinking we can explore the AI side a bit as this has very interesting implications on what it means to infer to design.

    On NTSC — “never twice the same colour!” [and in management, NTSS is too frequent with too many managers who think themselves clever: “never twice the same story . . .”] — wiki has a reasonable summary here.

    It reads in part:

    The National Television System Committee was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts which arose between companies over the introduction of a nationwide analog television system in the United States. In March 1941, the committee issued a technical standard for black-and-white television which built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA). Technical advancements of the vestigial sideband technique allowed for the opportunity to increase the image resolution broadcast to consumer televisions. The NTSC compromised between RCA’s desire to keep a 441–scan line standard (which was already being used by RCA’s NBC TV network) and Philco’s desire to increase the number of scan lines to between 605 and 800. The committee compromised and selected a 525-line transmission standard. Other technical standards in the final recommendation were a frame rate (image rate) of 30 frames per second consisting of two interlaced fields per frame.

    In January 1950 the Committee was reconstituted to standardize color television. In December 1953, it unanimously approved what is now called simply the NTSC color television standard (later defined as RS-170a). The updated standard retained full backwards compatibility (“compatible color”) with older black-and-white television sets. Color information was added to the black-and-white image by adding a color subcarrier of 4.5 × 455/572 MHz (approximately 3.58 MHz) to the video signal. In order to minimize interference between the chrominance signal and FM sound carrier, the addition of the color subcarrier also required a slight reduction of the frame rate from 30 frames per second to 30/1.001 (very close to 29.97) frames per second, and changing the line frequency from 15,750Hz to 15,734.26Hz.

    The two fields are one frame; and that is where I got the 60 from.
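    Just to check the quoted figures, here is a quick editorial back-of-envelope sketch in Python, using only the numbers given in the excerpt above:

        # Back-of-envelope check of the NTSC colour figures quoted above.
        subcarrier = 4.5e6 * 455 / 572     # colour subcarrier, Hz
        frame_rate = 30 / 1.001            # colour frame rate, frames per second
        line_rate = frame_rate * 525       # 525 lines per frame

        print(f"Subcarrier: {subcarrier / 1e6:.6f} MHz")   # ~3.579545 MHz
        print(f"Frames:     {frame_rate:.3f} per second")  # ~29.970
        print(f"Lines:      {line_rate:.2f} Hz")           # ~15734.27 Hz
        print(f"Fields:     {2 * frame_rate:.2f} Hz")      # ~59.94 -- the "60" in question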

    I will have to check back on the range for flicker fusion, but as I recall it is 30 – 45 Hz or so.

    The Euro standards — keyed to their mains freq of 50 Hz — are just over the range. The US one is also more or less keyed to its mains freq, and that gets you comfortably past the range. (Back in the 50s it was cheaper to go with mains than to use crystal-controlled oscillators. That, BTW, gave the Russians interesting headaches on synching across their whole country, as there was no one unified grid locked to a common freq by nonlinear frequency-pulling effects!)

    Movies, as I recall, did use the double hit of light per frame to achieve the same flicker fusion.

    A key point in all the above is how much complexity goes on behind the scenes of any information processing system so that its complexity is only partly captured by issues on encoding of data. That is of course telling on just how conservative the Dembski type UPB is.

    GEM of TKI

  178. 178
    Q says:

    KF, in 175, mentions: “I have then drawn out the point that for the mind-body issues in linked phil, we are looking at the question that INFORMATION is the bridge between the different elements and major sub-sections of the enlarged control loop.”

    I agree with the model you are presenting, and I agree it is a useful exploration into the duality of mind and brain. I also agree that extrapolations of the model you are suggesting can lead to expectations about brain, and by inference, to expectations about the mind. It can also provide some insight into how they interact.

    Even so, I must disagree with your claim that “INFORMATION is the bridge”. I much prefer your earlier claim that the interface is the bridge, but not that information is the same as the interface. I am insisting that with the model being used, information is that which flows, the brain is one module that sends and receives, the mind is another, and the interface is the medium which carries the information between the modules. Perhaps I’m using “information” differently from you, but I think not, as you already mentioned that information can’t flow on its own.

    The purpose for maintaining such a clear conceptual difference between mind, brain, interface, and information is that at least some of those can be theorized and studied in the material domain. Brain can be watched. Information flows into, through, and from the brain. This says that some properties of information can be inferred by observing the properties and behaviors of the brain. This should at least lead to an understanding of the brain-side connection between mind and brain. That is, at least part of the mind-brain interface interacts with the physical world, so at least part of the mind-brain interface can be understood with the scientific process by applying the scientific methodology.

    That part of the mind-brain interaction which cannot be studied with the scientific process can readily be understood to be on the non-physical side of the mind-brain duality.

    I’m not trying to “posit a fourth unknown factor”, as has been said multiple times. I’m trying to understand the boundaries of each of the factors, including the physically observable factors involving the brain and mind.

    In this topic, however, I am insisting that the proper model, as being described, has four logically separable constructs – brain, mind, information, and interface. (However, if it is argued that the interface should be discussed as being of two parts – half that is part brain and half that is part mind – that could make sense too. But, that still keeps information as a separate construct from the interface.)


    DaveScot: Even the new flat panel displays don’t display the image all at once. Not all pixels flip at the same time. Instead, they are also scanned out, with a row and column counter. Flat panels still don’t match the properties of a film projector, in which one shutter controls the entire image.

    The main difference between CRTs and flat panels, as pertains to this discussion, is that the light coming from a flat panel’s pixels doesn’t decay once a pixel is illuminated. But the phosphor of a CRT immediately starts fading once the electron beam passes over it. So, by the time the raster gets to the lower corner, the opposite upper corner may have faded to near black – and the after-image effect that KF mentions is why that fading isn’t perceived on CRTs. This is why CRTs used as computer displays, which are progressive, typically need to run at the much higher 72 Hz refresh rate to avoid the perception of flicker. http://en.wikipedia.org/wiki/F....._frequency

    Also, even though film is played at 24 fps and video at 30 fps (60 half-frames per second, as KF mentions), film-recorded movies do not play back noticeably faster when played on TV. Instead, fields are periodically duplicated to keep the same average full-frame rate. Some variant of a 3-2-3-2 encoding format is used: http://www.videoccasions-nw.com/voframes.html
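    A toy sketch of the idea (in Python; an editorial illustration, with made-up frame labels): the usual 3:2 (“3-2-3-2”) pulldown holds alternate film frames for three fields and then two, so the film plays at its normal speed on a 60-field-per-second signal.

        # Toy 3:2 pulldown: map 24 fps film frames onto 60 video fields per second.
        def pulldown_32(film_frames):
            fields = []
            for i, frame in enumerate(film_frames):
                hold = 3 if i % 2 == 0 else 2    # alternate 3-field and 2-field holds
                fields.extend([frame] * hold)
            return fields

        film = [f"F{i}" for i in range(4)]       # four film frames
        print(pulldown_32(film))                 # ten fields: F0 F0 F0 F1 F1 F2 F2 F2 F3 F3
        # 4 film frames -> 10 fields, so 24 fps x (10/4) = 60 fields/s on average,
        # i.e. the film does not play back noticeably faster on TV.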

  179. 179
    DaveScot says:

    kf

    My bad. There are a few different kinds of artifacts and I was confusing flicker fusion with smooth motion. Smooth motion is deemed acceptable at a sample rate of 24 fps. Evidently flicker is caused by the lit and unlit time periods. I was wondering why a shutter on a film projector that flashed exactly the same frame on/off at two or three times the frame rate would improve anything. It certainly won’t improve smooth motion, as the only way to do that is to increase the sample rate, and you’d have to do that with the camera, not the projector. What it does is reduce eyestrain by making the lit/unlit frequency faster (see the arithmetic sketch below). I stand corrected!

    On LCDs there’s no good reason to use a slow raster scan in updating the display except to cheapen and simplify the video processor side (you can use a video processor made to work with analog CRTs) – not to accommodate the data rate of the LCD. The serial interface on modern flat panels is so bloody fast that you can effectively get every pixel on an NTSC-resolution panel changing state simultaneously. The limitation is the light shutter speed, not how quickly you can change all the input voltages on all the transistors that drive all the shutters.

    Not all phosphors on CRTs are created equal. There are slow phosphors and fast phosphors. You need a fast phosphor to display motion without blurring, so TVs use fast phosphors, but the tradeoff is that you get fusion-flicker at lower frame rates. The original IBM PC came with a slow green-phosphor monitor and operated at 50 Hz vertical refresh without a hint of fusion-flicker or eyestrain, but any motion left a fading trail behind the moving object like the trail of a meteorite crossing the sky. Early LCDs suffered from the same problem, as the shutter speed was slower than the vertical refresh rate at some 30 milliseconds. Nowadays the shutter speed is down to 2 ms, so a frame rate of up to 500 Hz could be accommodated if there were any practical reason to use such a high frame rate.
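    The numbers above work out like this (a small editorial arithmetic sketch in Python; the 24 fps, 2-ms and blade figures are the ones mentioned in this exchange):

        # Arithmetic sketch of the rates discussed above (illustrative only).
        film_fps = 24                    # sample rate for acceptably smooth motion
        for blades in (2, 3):            # projector shutter flashes each frame 2 or 3 times
            print(f"{blades}-blade shutter: {film_fps * blades} flashes per second")  # 48, 72

        response_ms = 2                  # modern LCD "shutter" (response) time
        print(f"Max frame rate at {response_ms} ms response: {1000 // response_ms} Hz")  # 500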

    By the way, I have a patent involving the use of a display technology you probably never heard of called a cold cathode electron beam flat panel. You can see the patent here.

  180. 180
    kairosfocus says:

    Okay:

    This thread is starting to interact with the one over on computer persuasion. I ask that you all look over at my remarks here at no 46 in that thread.

    1] neural nets . . .

    You will see why I am looking at neural networks as an architecture for an intelligent director and also why I insist that information is the bridge. BTW, observe that neural networks are not locked down to being implemented biologically/chemically — routinely, they are now done in software, i.e. “infospace.”

    2] Comm interfaces and info flows cannot be separated . . .

    And, the issue of the relevant interfaces is to pass information in compatible formats through ports of one sort or another — a biggie headache I assure you, cf my always linked Fig 1 and cf also the elaboration of the code-decode blocks, here in the ISO OSI “layer-cake” [that’s what I always called it to my students, for the obvious reason] reference model!

    3] Q, 178: Brain can be watched. Information flows into, through, and from the brain. This says that some properties of information can be inferred by observing the properties and behaviors of the brain. This should at least lead to an understanding of the brain-side connection between mind and brain. That is, at least part of the mind-brain interface interacts with the physical world, so at least part of the mind-brain interface can be understood with the scientific process by applying the scientific methodology.

    Yes.

    4] I am insisting that the proper model, as being described, has four logically separable constructs – brain, mind, information, and interface. (However, if it is argued that the interface should be discussed as being of two parts – half that is part brain and half that is part mind – that could make sense too. But, that still keeps information as a separate construct from the interface.)

    We can discuss these separately, but we can’t design, develop or put them together separately. [And that holds whatever the mind proper is and however it is made up. I just have excellent reason to infer that it is real and that it is not determined by the chemistry etc involved in brain function. Further, that its reality and credibility are necessary conditions for the praxis of science.]

    The purpose of an interface is to facilitate info flow, and it is the requisites of that info flow which make up the key to designing and understanding the interface. [For instance, I once used a 6402 UART to transmit voice, based on understanding the characteristics of the voice as an information-bearing signal and recognising how much bandwidth and signal processing were really needed. The answer is that for voice quality you can get away with a surprisingly narrow bandwidth and bit rate, even with fairly unsophisticated coding. Now, with adaptive differential pulse-code modulation (ADPCM) schemes, you can do even more . . .]
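    [To put rough numbers on that: a back-of-envelope sketch in Python, using standard telephony ballpark figures rather than the ones from the old project mentioned above.]

        # Back-of-envelope bit rates for telephone-quality voice (illustrative only).
        sample_rate = 8000                  # Hz; enough for the ~300-3400 Hz voice band
        bits_pcm = 8                        # companded PCM, bits per sample
        bits_adpcm = 4                      # typical ADPCM, which codes differences instead

        print(f"PCM:   {sample_rate * bits_pcm // 1000} kbit/s")    # 64 kbit/s
        print(f"ADPCM: {sample_rate * bits_adpcm // 1000} kbit/s")  # 32 kbit/s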

    Speaking of which . . .

    5] Dave, 179: Smooth motion is deemed acceptable at a sample rate of 24fps. Evidently flicker is caused by the lit and unlit time periods. I was wondering why a shutter on a film projector that flashed exactly the same frame on/off two or three times the frame rate would improve anything. It certainly won’t improve smooth motion as the only way to do that is increase the sample rate and you’d have to do that with the camera not the projector. What it does is reduces eyestrain by making the lit/unlit frequency faster.

    And, this is also a function of the light level of the screen — you can get away with a lower rate in a relatively dark room and with a relatively dim screen (which is why there is a fairly broad band for flicker fusion). [BTW, there are people who are sensitive to the remaining flicker in especially Euro-style TVs at 50 Hz. We are dealing with populations here . . .]

    6] cold cathode electron beam flat panel

    I seem to vaguely recall reading of such a tech, 15 – 20 years ago was it? Nice stuff — hope you made some good money off it!

    Back in the 50’s — from vaguely remembered readings 25 years back — there was an attempt to make an early avionics HUD using see-through flat-panel CRTs with the electron beam fired in from the side.

    Again, this underscores how much information is embedded in the interface’s design itself.

    GEM of TKI

  181. 181
    StephenB says:

    Jerry: Thanks for @153

  182. 182
    jerry says:

    StephenB,

    You’re welcome but I have to thank you for the great post.

  183. 183
    Q says:

    KF, in 180, mentions about brain, mind, interface and information “We can discuss these separately, but we can’t design, develop or put them together separately.”
    Depends upon how one parses the problem. Information can be implemented without interfacing to mind – at least the properties of information as it resides in the brain. Interface can be examined without information flowing across it – at least the physical brain-side of the interface, such as using cadavers with no information flow. Just like any well-structured computer problem, the modules can be examined in isolation from the others.

  184. 184
    kairosfocus says:

    Q:

    Re: Information can be implemented without interfacing to mind . . . Interface can be examined without information flowing across it

    We are speaking to a particular context and thus also the DS architecture.

    It is moreover the case that interfaces are designed in the context of the requisite information flows, thus the information in question.

    Here we are interested in the predictive paths set up imaginatively, volitionally and creatively for the effecting servo-system. These serve as templates for action. What is significant is that, relative to the control part, the templates are in effect givens, and the controller can syntactically track and compare actual with projected, then generate control vectors to correct deviations.

    The setting up of the track and of the programs to control the controller is SEMANTIC. That is, for instance, a bit-based processor can simply sample outputs at given times, then compare to expected, generate error signals that drive actuators, and monitor onward performance. But what the signals mean is not a necessary part of that — it is in the semantics. What the controller is doing is register-based arithmetic, logic and shift operations on bit strings; it does not itself address what the strings mean.

    That is the job of the programmer.
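    A minimal sketch of that purely syntactic loop (in Python; an editorial illustration in which the projected path, the gain and the plant response are all made up):

        # Minimal syntactic control loop: sample, compare with the projected value,
        # emit a correction. The loop manipulates numbers; it does not know what
        # they mean -- the template and the gain are givens supplied from outside.
        projected = [0.0, 1.0, 2.0, 3.0, 4.0]    # template path handed to the controller
        gain = 0.5                               # proportional gain (arbitrary)

        actual = 0.0
        for expected in projected:
            error = expected - actual            # compare actual with projected
            drive = gain * error                 # control vector to correct the deviation
            actual += drive                      # crude stand-in for the plant's response
            print(f"expected={expected:.1f}  actual={actual:.2f}  drive={drive:.2f}")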

    As people we plan then act and we monitor deviations and respond to them to get back on track. We do so intelligently – based on meanings and what makes sense, as a rule.

    GEM of TKI

  185. 185
    Q says:

    Pardon my ignorance, KF, but I miss your point in 184. I’m discussing how the computer model you present can be used as a tool to examine the brain/mind duality. I.e., in the context of this thread, to observe the boundary between “seeing red” and “subjectively experiencing red.”

    I’m arguing that the model you present must be correlated to the physical brain, the physical interface, and the physical information – through observation of the brain, its interaction with information, and its interaction with the interface to the mind. The model you present must not simply be treated as a direct representation of the brain. For this examination, brain, information, interface, and mind can be treated and observed as separable constructs.

  186. 186
    kairosfocus says:

    Q:

    In my context, I am making no initial distinction on whether what we experience as the mind is material or immaterial at the first instance. I am simply pointing to the DS architecture, that allows us to differentiate controller from intelligent director and to assign the locus of creative tasks – getting beyond the Crick-style confusion.

    Then, let us bring to bear the relevant issues on what an intelligent director would be like and how it is set up, by going back to a point in my always linked, section A and a remark by good old materialism-leaning prof Wiki on Instincts [and along the way, DV, we will make reference again to Ac 27 on governance by competing agents in a situation that exhibits tracking in the short term and navigation in the long term relative to an intended path]:

    [GEM of TKI:] let us identify what intelligence is. This is fairly easy: for, we are familiar with it from the characteristic behaviour exhibited by certain known intelligent agents — ourselves. Specifically, as we know from experience and reflection, such agents take actions and devise and implement strategies that creatively address and solve problems they encounter; a functional pattern that does not depend at all on the identity of the particular agents. In short, intelligence is as intelligence does. So, if we see evident active, intentional, creative, innovative and adaptive [as opposed to merely fixed instinctual] problem-solving behaviour similar to that of known intelligent agents, we are justified in attaching the label: intelligence. [Note how this definition by functional description is not artificially confined to HUMAN intelligent agents: it would apply to computers, robots, the alleged alien residents of Area 51, Vulcans, Klingons or Kzinti, or demons or gods, or God.] But also, in so solving their problems, intelligent agents may leave behind empirically evident signs of their activity; and — as say archaeologists and detectives know — functionally specific, complex information [FSCI] that would otherwise be improbable, is one of these signs.

    [“prof” Wiki, 1:] Instinct is the inherent disposition of a living organism toward a particular behavior. Instincts are unlearned, inherited fixed action patterns of responses or reactions to certain kinds of stimuli. Innate emotions, which can be expressed in more flexible ways and learned patterns of responses, not instincts, form a basis for majority of responses to external stimuli in evolutionary higher species, while in case of highest evolved species both of them are overridden by actions based on cognitive processes with more or less intelligence and creativity or even trans-intellectual intuition. Examples of instinctual fixed action patterns can be observed in the behavior of animals, which perform various activities (sometimes complex) that are not based upon prior experience and do not depend on emotion or learning, such as reproduction, and feeding among insects. Other examples include animal fighting, animal courtship behavior, internal escape functions, and building of nests.

    Instinctual actions – in contrast to actions based on learning which is served by memory and which provides individually stored successful reactions built upon experience – have no learning curve, they are hard-wired and ready to use without learning, but do depend on maturational processes to appear.

    [PW, 2:] Intelligence is an umbrella term used to describe a property of the mind that encompasses many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn. There are several ways to define intelligence. In some cases, intelligence may include traits such as creativity, personality, character, knowledge, or wisdom.

    [PW, 3:] Creativity (or “creativeness”) is a mental process involving the generation of new ideas or concepts, or new associations between existing ideas or concepts.

    From a scientific point of view, the products of creative thought (sometimes referred to as divergent thought) are usually considered to have both originality and appropriateness. An alternative, more everyday conception of creativity is that it is simply the act of making something new.

    [PW, 4:] Intuition is apparent ability to acquire knowledge without a clear inference or reasoning process.

    It is “the immediate apprehension of an object by the mind without the intervention of any reasoning process” [Oxford English Dictionary].

    Intuition, by definition, has no objective validity. However it is extremely widespread as an apparent phenomenon. For this reason, it has been the subject of study in Psychology, as well as a topic of interest in the supernatural. . . . In psychology, intuition can encompass the ability to know valid solutions to problems and decision making. For example, the recognition primed decision (RPD) model was described by Gary Klein in order to explain how people can make relatively fast decisions without having to compare options. Klein found that under time pressure, high stakes, and changing parameters, experts used their base of experience to identify similar situations and intuitively choose feasible solutions. Thus, the RPD model is a blend of intuition and analysis. The intuition is the pattern-matching process that quickly suggests feasible courses of action. The analysis is the mental simulation, a conscious and deliberate review of the courses of action

    These — together with the DS architecture of a complex servo-system with a controller based on input-output comparison to projected track, and with the projected track being creatively supplied by what I have called an intelligent director – will form a context for the further remarks. [Then, we can deal with subjectivity, consciousness and qualia etc as markers that point to the nature of the relevant director we possess.]

    1] Directors and neural network characteristics and programming.

    In a sense this reworks what was dealt with under a similarish post on a parallel thread, but with adjustments to this thread.

    For, we know what agency is, DIRECTLY IN THE FIRST PERSON, so we experience that intuition, creativity and intelligence are features of agents who routinely act effectively into the world. This is what has to be reasonably explained.

    Thence, we can look at the DS framework and the relevance of an intelligent director – or of a collective of such directors [per Ac 27] — supervising and guiding the i/o processor controlling the servosystems: robot of the future, body in the present, or ship in the past of October 59 AD makes little difference.

    2] Neural networks as a model . ..

    Wiki on neural networks:

    in unsupervised learning [in a neural network] we are given some data x, and a cost function to be minimized which can be any function of x and the network’s output, f. The cost function is determined by the task formulation. [ note this — someone sets the task, sets the goal and sets up the system, i.e the ANN does not ultimately question its final-level purpose.] Most applications fall within the domain of estimation problems such as statistical modeling, compression, filtering, blind source separation and clustering . . . .

    In reinforcement learning, data x is usually not given, but generated by an agent’s interactions with the environment. At each point in time t, the agent performs an action yt and the environment generates an observation xt and an instantaneous cost ct, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimises some measure of a long-term cost, i.e. the expected cumulative cost. [Note the preset purpose.] The environment’s dynamics and the long-term cost for each policy are usually unknown, but can be estimated. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.

    Notice how the learning control system has to be set up to have a creative, imaginative, intelligent and even intuitive supervisory view of the world and its dynamics and conditions, so that it can explore and address, then model, potential costs and benefits of policies, then go for the goals it has, and of course monitor and adjust as it tracks across time, learning from experience.
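    A toy sketch of that point (in Python; an editorial illustration in which the data, the cost function and a crude random hill-climbing search stand in for a real ANN training setup): even the “learning” only tunes parameters against a goal someone else has already specified.

        # Toy "learning" loop: random hill-climbing against an externally supplied
        # cost function. The task (fit y = 2x + 1), the cost and the search procedure
        # are all givens set up before the loop ever runs.
        import random

        data = [(x, 2.0 * x + 1.0) for x in range(10)]

        def cost(w, b):
            """Cost fixed by the task formulation; the system never questions it."""
            return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

        w, b = random.random(), random.random()
        for _ in range(5000):
            w2, b2 = w + random.uniform(-0.1, 0.1), b + random.uniform(-0.1, 0.1)
            if cost(w2, b2) < cost(w, b):         # keep any change that lowers the cost
                w, b = w2, b2

        print(f"learned w = {w:.2f}, b = {b:.2f}")   # tends toward 2 and 1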

    That brings up the governance issue as competing policies vie for adoption. Thence, Acts 27 and issues of democratic governance and wisdom.
    But this is a bit afield. We want to go now for what agents are like in our own case.

    3] Mind-brain issues — simplified

    As BarryA observed in the OP:

    computer hardware is nothing but an electro-mechanical device for operating computer software. Computer software in turn is nothing but a series of “if then” propositions. These “if then” propositions may be massively complex, but software never rises above an utterly determined “if then” level . . . . the $64,000 question is this: Is the human brain merely an organic computer that in principle operates the same way as my PC?” In other words, does the Turing Machine also describe the human brain ? If the brain is just an organic computer, even though human behavior may at some level be unpredictable, it is nevertheless determined, and free will does not exist. If, on the other hand, it is not, if there is a “mind” that is separate from though connected to, the brain, then free will does exist . . . .

    “Qualia” are the subjective responses a person has to objective experience. Qualia are not the experiences themselves but the way we respond to the experiences . . . . Consider a computer equiped with a light gathering device and a spectrograph. When light of wavelength X enters the light gathering device, the spectrograph gives a reading that the light is red. When this happens the computer is programmed to activate a printer that prints a piece of paper with the following statement on it “I am seeing red.”

    I place the computer on my back porch just before sunset, and in a little while the printer is activated and prints a piece of paper that says “I am seeing red.”

    Now I go outside and watch the same sunset. The reds in the sunset I associate with warmth, by which I mean my subjective reaction to the redness of the reds in the sunset is “warmth.” . . . .

    Conclusion: The computer registered “red” when red light was present. My brain registered “red” when red light was present. Therefore, the computer and my brain are alike in this respect. However, and here’s the important thing, the computer’s experience of the sunset can be reduced to the functions of its light gathering device and hardware/software. But my experience of the sunset cannot be reduced to the functions of my eye and brain. Therefore, I conclude I have a mind which cannot be reduced to the electro-chemical reactions that occur in my brain.

    Actually, I am astonished that we have to go down to so many details to see the obvious. There are many current views that (like Crick’s) would reduce mind to brain. But from our experience of mind – which is necessarily relied upon to think even materialistic thoughts – we do experience free will and intelligent creativity, intuition, etc.

    Even in the case of learning artificial neural networks, they have to be set up in ways that fairly reek of organised complexity, pointing onward to agency and intelligence. And, free thinking and acting are conditions of such intelligence. Further to this, we experience ourselves as such intelligent agents.

    Thus, plainly, any view that contradicts the facts of intelligent agency as we experience it is false-to-fact, and falsified. [IMHBCO, it is the institutional power of lingering evo mat that makes this hard to do, not the logic.]

    And, those facts plainly contradict the notion that mind is an emergence from the properties of matter as we understand them through scientific study. So, on the evidence in hand, mind is more than matter but is capable of interacting with it in interesting ways. Most notably, it is capable of providing the creative, imaginative, intuitive input that can then guide the servosystems involved.

    4] And if mind has been created . . .

    Then prima facie, mind can be created.

    So, R Daneel is in principle possible. The issue is: how!

    So, let’s roll up our sleeves and sharpen our pencils – the adventure of design science has only just begun . . .

    GEM of TKI

  187. 187
    Q says:

    I think I see the difference in our approaches, and that it may result in different outcomes. Your approach, if I read it correctly, is to start with the Intelligent Director. Specifically, you start with “(L)et us bring to bear the relevant issues on what an intelligent director would be like and how it is set up”

    That is, the Intelligent Designer is one of your premises.

    I’m taking a different approach. Start with the measurables – brain, information flow, brain-side of the mind/brain interface (or the corollaries in the DS model) – and by a process of elimination, conclude with what the properties of mind (or of the Intelligent Designer) are. My approach makes no initial assumptions about the properties of each element of the process, except for the assumption that a particular model is being followed, as your logical argument suggests. However, by observing the processes and boundaries of each element, we would conclude with the understanding that “that which remains only indirectly observed is left as mind (or as the Intelligent Director)”.

    If the model were correct, would you expect both approaches to result in the same understanding of the Intelligent Designer, or of mind?

  188. 188
    kairosfocus says:

    Q:

    I just now spent a significant amount of time on a response to Prof Olofsson, harking back to the Padian thread. So pardon my being a bit summary, especially as I see us looping back over old ground.

    1] 187: the Intelligent Designer is one of your premises.

    This loops back to the objection of the Kantians, and is in serious error, as I long since pointed out and linked.

    In the current context, I am looking at a MODEL, by DS, in which what I have called an Intelligent Director is a part, the part that passes creative projected paths for a servo to the controller, whose job is to then try to keep the system on track from moment to moment thus executing the path desired. It is the possession of such an intelligent director capable of making such decisions and creative projections that makes the model in Fig 2 in the predictive form, self-directing and capable of praxis.

    Notice the difference in context and terms, please. On inference to design, what the explanatory filter approach does is to refuse to rule out, by begging the question, the possibility of a designer at OOL, OO BPLBD and OO FT LFC. Then, the strongly empirically and theoretically supported point that FSCI etc. are reliable signs of agency is seen as pointing to agency on a basis of inference to best inductively anchored explanation.

    The price tag for rejecting this is selective hyperskepticism, as I have pointed out.

    2] Start with the measurables – brain, information flow, brain-side of the mind/brain interface (or the correlaries in the DS model) – and by a process of elimination, conclude with what are the properties of mind

    First, one measures in a context that already implicitly addresses explanatory alternatives, i.e the models are there all the time.

    Second, on our own case we start from our life-experience as agents. We know from the inside what it is to be intelligent, creative etc, and that is the context in which mind as a term has been developed. We are therefore reasoning by family resemblance to known cases in point. Whatever mind is, we have an example in point that is empirical.

    We now set out to see what “stuff” mind is made of – what it is, is different from that it is; the latter being the premise of all our intellectual activity. In that pursuit, we see that certain artificial entities are a possible model: supervised servosystems in which an intelligent director creatively sets the path and the expected observations along it to guide the controller to keep on track.

    We can compare two cases, one that would be hard to do in realistic cases [we have already done simpler cases by making model-referenced adaptive controllers and their descendants] but is technologically feasible, and one [Ac 27] where we see a classical account of a ship voyage under direction of a steersman in the face of decisions by the ship’s company and the environment.

    The result is to show that information is a key intermediary between intelligent direction and control.

    Further, by comparison with our experience we know that creative synthesis of such paths is based on understanding of configurational possibilities and dynamics, thus on going right to islands of functionality instead of being lost in vast config spaces and trying to find function through random walks that soon run out of probabilistic resources. On this, the neural network model is useful, but notice that the types that speak to such self-directed learning are also involved in serious FSCI to set them up.

    So we see a crucial difference between mind and chance + necessity only in design.

    3] My approach makes no initial assumptions about the properties of each element of the process, except for the assumption that a particular model is being followed, as your logical argument suggests. However, by observing the processes and boundaries of each element, we would conclude with the understanding that “that which remains only indirectly observed is left as mind (or as the Intelligent Director)”.

    A look at the just above will show that it reiterates that there is no assumption before the fact on the ontological nature of mind, only that we recognise that mind is – we ourselves experience it.

    More to the point we also infer that since mind in our case is contingent, creation of mind is possible so the trick is to identify how to do it. Thence R Daneel here we come!

    By observing the characteristics of the DS model and the relevant neural networks, and comparing with the observed and experienced behaviour of human minds, we can then make comparisons on boundaries and ask questions on the origins and ontology of the relevant components.

    GEM of TKI

  189. 189

    […] *By “subjectivity,” Hart means a person’s subjective experience of phenomena as distinct from the phenomena themselves.  The discussion of subjectivity is often tied to the concept of “qualia.”  See, e.g., here. […]

  190. 190
    Axel says:

    ‘If the brain is just an organic computer, even though human behavior may at some level be unpredictable, it is nevertheless determined, and free will does not exist. If, on the other hand, it is not, if there is a “mind” that is separate from though connected to, the brain, then free will does exist.’

    Well, that means that the materialists are themselves the simplest and most obvious confirmation that humans possess free will, perversely exercised though it is.

    Given the ubiquitous, unchallenged (in any intelligent sense) evidence for ID in nature, is there any computer with a capacity for rational inference that would, that COULD, reject the evidence?!

    Is there any similarly equipped computer that could arrive at the conclusion that something could turn itself into everything?

    You could go on for a while showing how they invite the derision of future generations for the way in which they choose to monkey with reason in the most obviously perverse ways, couldn’t you?

  191. 191
    Axel says:

    1 + 1 = 2 seemed a very apt ‘gizmeter’ – forget what it’s called now – in relation to the point of my post, above.

  192. 192
    Axel says:

    As with so many other precepts of Christian faith, eventually it will be established that ‘person-hood’ is fundamental to intelligence, as is implicit in the definition of the soul in the Roman Catholic catechism, as the memory, will and understanding. Nota bene, ‘the will’.

    Moreover it confirms what Christians know about God, himself, namely, that, if anything, he is not less personal than us, as some great, impassive monolith, but more personal than we can even imagine: the persons of the Most Holy Trinity being the very ‘fons et origo’ of our individual person-hood (in his image), evidently, with implications for the nature of our intelligence. As indeed, some of those NDE’s indicate, with experiences of omniscience in the Holy spirit, (as members of the true vine, the mystical body of Christ).

  193. 193

    […] *By “subjectivity,” Hart means a person’s subjective experience of phenomena as distinct from the phenomena themselves.  The discussion of subjectivity is often tied to the concept of “qualia.”  See, e.g., here. […]
