Uncommon Descent Serving The Intelligent Design Community

It’s all about information, Professor Feser


Over at his blog, Professor Edward Feser has been writing a multi-part critique of Professor Alex Rosenberg’s bestselling book, The Atheist’s Guide to Reality: Enjoying Life without Illusions. Rosenberg is an unabashed defender of scientism, an all-out reductionist who doesn’t believe in a “self”, doesn’t believe we have thoughts that are genuinely about anything, and doesn’t believe in free will or morality. Instead, he advocates what he calls “nice nihilism.” In the last line of his book, Rosenberg advises his readers to “Take a Prozac or your favorite serotonin reuptake inhibitor, and keep taking them till they kick in.”

Edward Feser has done an excellent job of demolishing Rosenberg’s arguments, and if readers want to peruse his posts from start to finish, they can read them all here:

Part One
Part Two
Part Three
Part Four
Part Five
Part Six

Professor Rosenberg’s argument that Darwinism is incompatible with God

In his latest installment, Professor Feser takes aim at an argument put forward by Rosenberg, that Darwinism is incompatible with the idea that God is omniscient. In his reply to Rosenberg, Feser also takes a swipe at Intelligent Design, about which I’ll have more to say below. In the meantime, let’s have a look at Rosenberg’s argument against theistic evolution.

Rosenberg argues as follows: Darwinian processes, being non-teleological, do not aim at the generation of any particular kind of species, including the human species. What’s more, these processes contain a built-in element of irreducible randomness: variation. Mutations are random, and no one could have known in advance that evolution would go the way it did. Therefore if God had used such processes as a means of creating us, He could not have known that they would be successful, and therefore He would not be omniscient.

In his response, Feser criticizes Professor Rosenberg’s argument on several grounds, arguing that:

(i) belief in the God of classical theism does not logically entail that the emergence of the human race was an event planned by Him (i.e. God might have intentionally made the cosmos, but we might have been an accident);

(ii) God may have intended that the universe should contain rational beings (who possess the ability to reason by virtue of their having immortal souls) without intending that these beings should be human beings, with the kind of body that Homo sapiens possesses – hence our bodies may be the result of an accidental process;

(iii) if you believe in the multiverse (which Feser doesn’t but Rosenberg does), it is perfectly consistent to hold that while the evolution of Homo sapiens may have been improbable in any particular universe, nevertheless it would have been inevitable within some universe; and

(iv) in any case, the probabilistic nature of Darwinian processes does not rule out divine intervention.

Professor Feser’s big beef with Rosenberg’s argument: Divine causality is of a different order from that of natural causes

But Professor Feser’s chief objection to Rosenberg’s anti-theistic argument is that it ignores the distinction between Divine and creaturely causality. At this point, Feser takes pains to distinguish his intellectual position from that of the Intelligent Design movement. He remarks: “What Aristotelian-Thomistic critics of ID fundamentally object to is ID’s overly anthropomorphic conception of God and its implicit confusion of primary and secondary causality.” (I should point out in passing that Intelligent Design is a scientific program, and as such, it makes no claim to identify the Designer. Nevertheless, many Intelligent Design proponents would be happy to refer to this Designer as God.)

God, argues Feser, is like the author of a book. Intelligent natural agents (e.g. human beings) are the characters in the story, while sub-intelligent agents correspond to the everyday processes described within the story. The key point here is that God is outside the book that He creates and maintains in existence (i.e. the cosmos), while we are inside it. God’s causality is therefore of an entirely different order from that of creatures. To say that God intervened in the history of life in order to guarantee that Homo sapiens would emerge (as Rosenberg seems to think that believers in God-guided evolution are bound to believe) is tantamount to treating God like one of the characters in His own story. In Feser’s words, it “is like saying that the author of a novel has to ‘intervene’ in the story at key points, keeping events from going the way they otherwise would in order to make sure that they turn out the way he needs them to for the story to work.” In reality, authors don’t need to intervene in their stories to obtain the outcomes they want, and neither do we need to suppose that God intervened in the history of life on Earth so as to guarantee the emergence of human beings.

Feser then argues that things in the world derive their being and causal power from God, just as the characters in a story only exist and alter the course of events within the story because the author of the story wrote it in a way that allows them to do so. For this reason, Feser has no philosophical problem with the notion of Darwinian processes being sufficient to generate life, or biological species such as Homo sapiens. Causal agents possess whatever powers God wants them to have, and their (secondary) causality is genuine, and perfectly compatible with the (primary) causality of God, their Creator. Just as “it would be absurd to suggest that in a science fiction novel in which such-and-such a species evolves, it is not really Darwinian processes that generate the species, but rather the author of the story who does so and merely made it seem as if Darwinian processes had done it,” so too, “it is absurd to suggest that if God creates a world in which human beings come about by natural selection, He would have to intervene in order to make the Darwinian processes come out the way He wants them to, in which case they would not be truly Darwinian.”

The problem isn’t one of insufficient causal power in Nature; it’s all about information

When I read this passage, I thought, “Aha! Now I see why Professor Feser thinks Intelligent Design proponents have got the wrong end of the stick. Now I see why he thinks we are committed to belief in a tinkering Deity who has to intervene in the natural order in order to change it.” For Feser inadvertently revealed two very interesting things in his thought-provoking post.

The first thing that Professor Feser inadvertently revealed was that he thinks that the difficulty that Intelligent Design proponents have with Darwinian evolution has to do with power – in particular, the causal powers of natural agents. As an Aristotelian-Thomist, Feser sees no difficulty in principle with God granting natural agents whatever causal powers He wishes, so long as they are not powers that only a Creator could possess. Why could not God therefore give mud the power to evolve into microbes, and thence into biological species such as Homo sapiens?

But the problem that Intelligent Design advocates have with this scenario has nothing to do with the powers of causal agents. Rather, it’s all about information: complex specified information, to be precise. By definition, any pattern in Nature that is highly improbable (from a naturalistic perspective) but is nevertheless capable of being described in a few words, instantiates complex specified information (CSI). So the philosophical question we need to address here is not: could God give mud the power to evolve into microbes and thence into the body of a man, but rather: could God give mud the complex specified information required for it to evolve into microbes and thence into the body of a man?
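To make the two-pronged criterion just described concrete, here is a minimal sketch in Python. The numeric probability bound and the word-count proxy for "capable of being described in a few words" are illustrative assumptions of mine, not part of any published definition of CSI:

```python
def is_csi(outcome_probability, description,
           max_description_words=20, probability_bound=1e-150):
    """Toy check of the two-part CSI criterion sketched above:
    the pattern must be highly improbable (from a naturalistic
    perspective) AND admit a short verbal description.
    Both numeric bounds are illustrative assumptions only."""
    complex_enough = outcome_probability < probability_bound
    specified = len(description.split()) <= max_description_words
    return complex_enough and specified

# A 500-bit coincidence with a short description would qualify:
print(is_csi(2.0 ** -500, "all five hundred coins landed heads"))  # True
# The same improbability with no concise description would not:
print(is_csi(2.0 ** -500, " ".join(["word"] * 30)))                # False
# A probable outcome never qualifies, however short its description:
print(is_csi(0.5, "heads"))                                        # False
```

The point of the sketch is only that both conditions must hold at once: high improbability alone, or brevity of description alone, does not yield CSI.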

The answer to this question, as Edward Feser should be aware from having read Professor Michael Behe’s book, The Edge of Evolution (Free Press, 2007, pp. 238-239), is that Intelligent Design theory is perfectly compatible with such front-loading scenarios. Indeed, Behe argues that God might have fine-tuned the initial conditions of the universe at the Big Bang, in such a way that life’s subsequent evolution – and presumably that of human beings – was inevitable, without the need for any subsequent acts of God.

A second possibility is that God added complex specified information to the universe at some point (or points) subsequent to the Big Bang – e.g. at the dawn of life, or the Cambrian explosion – thereby guaranteeing the results He intended.

A third possibility is that the universe contains hidden laws, as yet unknown to science, which are very detailed, highly elaborate and specific, unlike the simple laws of physics that we know. On this scenario, complex specified information belongs to the very warp and woof of the universe: it’s a built-in feature, requiring no initial fine-tuning.

Personally, my own inclination is to plump for the second scenario, and say that we live in a cosmos which is made to be manipulated: it’s an inherently incomplete, open system, and the “gaps” are a vital part of Nature, just as the holes are a vital feature of Swiss cheese. I see no reason to believe in the existence of hidden, information-rich laws of the cosmos, especially when all the laws we know are low in information content; moreover, as Dr. Stephen Meyer has pointed out in his book, Signature in the Cell, all the scientific evidence we have points against the idea of “biochemical predestination”: simple chemicals do not naturally arrange themselves into complex information-bearing molecules such as DNA. I also think that front-loading the universe at the Big Bang would have required such an incredibly exquisite amount of fine-tuning on God’s part that it would have been much simpler for Him to “inject” complex specified information into the cosmos at a later date, when it was required. (When I say “at a later date”, I mean “later” from our time-bound perspective, of course, as the God of classical theism is timeless.) However, this is just my opinion. I could be wrong.

Complex specified information has to come from somewhere

One thing I’m quite sure of, though: not even God could make a universe that lacked both finely-tuned initial conditions and information-rich laws, yet was still capable of generating life without any need for a special act of God (or what Intelligent Design critics derogatorily refer to as “Divine intervention”, “manipulation” or “tinkering”). The reason why this couldn’t happen is that complex specified information doesn’t come from nowhere. It needs a source. And this brings me to the second point that Professor Feser inadvertently revealed in his post: he seems to think that information can just appear in the cosmos wherever God wants it to appear, without God having to perform any specific act that generates it.

This is where the book metaphor leads Feser astray, I believe. The author of a book doesn’t have to specify exactly how the events in his/her story unfold. All stories written by human authors are under-specified, in terms of both the states of affairs they describe – e.g. what’s the color of the house at 6 Privet Drive, next door to Harry Potter’s house? – and in terms of the processes occurring within the story – e.g. how exactly do magic wands do their work in Harry Potter? What law is involved? J. K. Rowling doesn’t tell us these things, and I don’t think most of her readers care, anyway.

But here’s the thing: God can’t afford to be vague about such matters. He’s not just writing a story; He’s making a world. Everything that He brings about in this world, He has to specify in some way: what happens, and how does it happen?

One way in which God could bring about a result He desires is by specifying the initial conditions in sufficient detail, such that the result is guaranteed to arise, given the ordinary course of events.

A second way for God to bring about a result He wants is for Him to specify the exact processes generating the result, in such detail that its subsequent production is bound to occur. (On this scenario, God brings about His desired effect through the operation of deterministic laws.)

A third way for God to produce a desired effect is for Him to make use of processes that do not infallibly yield a set result – i.e. probabilistic occurrences, which take place in accordance with indeterministic laws, and which involve a certain element of what we call randomness. In this case, God would not only have to specify the probabilistic processes He intends to make use of, but also specify the particular outcome He desires these processes to generate. (This could be accomplished by God without Him having to bias the probabilities of the processes in any way: all that is needed is top-down causation, which leaves the micro-level probabilistic processes intact but imposes an additional macro-level constraint on the outcome. For a description of how this would work, see my recent post, Is free will dead?)

Finally, God may refuse to specify any natural process or set of initial conditions that could help to generate the result He desires, and instead, simply specify the precise spatio-temporal point in the history of the cosmos at which the result will occur. That’s what we call an act of God, and in such a case, the result is said to be brought about purely by God’s will, which acts as an immediate efficient cause generating the effect.

But whatever the way in which God chooses to bring about the result He desires, He must make a choice. He cannot simply specify the effect He desires, without specifying its cause – whether it be His Will acting immediately on Nature to bring about a desired effect, or some natural process and/or set of conditions operating in a manner that tends to generate the effect. Whatever God does, God has to do somehow.

But couldn’t God make evolution occur as a result of a probabilistic process?

Let’s go back to the third way available to God for generating a desired result: namely, working through probabilistic processes. What does Intelligent Design theory have to say about this Divine modus operandi? Basically, what it says is that it is impossible for God to remain hidden, if He chooses this way of acting, and if the desired effect is both improbable (in the normal course of events) and capable of being described very briefly – in other words, rich in complex specified information. For even if the micro-level probabilities are in no way affected by His agency, the macro-level effect constitutes a pattern in Nature which we can recognize as the work of an intelligent agent, since it is rich in CSI.

Professor Feser, working from his authorial metaphor for God, seems to have overlooked this point. The human author of a story can simply write: “Y occurred, as a freakish but statistically possible result of process X.” Here, the author simply specifies the result he/she intends (effect Y) and the process responsible (probabilistic process X, which, as luck would have it, produced Y). Because the effect in the story (Y) is both the result of a natural process (X) occurring in the story, and the result (on a higher level) of the author’s will, it appears that nothing more needs to be said. Feser seems to think that the same holds true for effects brought about by God, working through probabilistic processes: they are both the work of Nature and the work of God. Hence, he believes, nothing prevents God from producing life by a Darwinistic process, if He so chooses.

Not so fast, say Intelligent Design proponents. Probabilistic processes have no inherent tendency to generate outcomes that can be concisely described in language. If an outcome that can be described in a very concise manner is generated by a probabilistic process, and if the likelihood of the outcome is sufficiently low, then it is simply wrong to put this down to the work of Nature. The real work here is done by God, the Intelligent Agent Who specified the outcome in question. It’s fundamentally wrong to give any credit to the natural probabilistic process for the result obtained, in a case like this: for even if God works through such a process, the process itself has no tendency to aim for concisely describable outcomes. God-guided evolution is therefore by definition non-Darwinian. Contrary to Feser, it is not absurd for Intelligent Design proponents to argue that when “such-and-such a species evolves, it is not really Darwinian processes that generate the species,” since Darwinian processes are inherently incapable of generating large amounts of complex specified information, and when we trace the evolution of any species back far enough, we will find that large amounts of complex specified information had to be generated.

Putting it another way: not even God could make an unintelligent natural process with a built-in tendency to home in on outcomes having a short verbal description. Such a feat is logically impossible, because it would be tantamount to making an unintelligent process capable of making linguistic choices – which is absurd, because language is a hallmark of intelligent agents. Not even God can accomplish that which is logically impossible.
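The claim in the last two paragraphs can be illustrated numerically with a toy simulation of my own devising (nothing here is drawn from the post itself). A blind sampling process is given many tries at a 64-bit configuration space; the "specified" target is the pair of outcomes with the shortest descriptions ("all ones", "all zeros"), and the process never lands on it:

```python
import random

random.seed(0)  # fixed seed, so the illustration is reproducible

TRIALS = 100_000
BITS = 64

# The target zone: the two 64-bit outcomes with the shortest verbal
# descriptions ("all ones", "all zeros") -- 2 outcomes out of 2**64.
hits = 0
for _ in range(TRIALS):
    s = random.getrandbits(BITS)
    if s == 0 or s == 2 ** BITS - 1:
        hits += 1

print(hits)             # 0 -- the blind process never finds the target
print(2 / 2 ** BITS)    # the per-trial chance of doing so (about 1.1e-19)
```

The simulation shows nothing about what an intelligent agent can do, of course; it only illustrates that an unguided sampling process has no tendency whatsoever to favour concisely describable outcomes over any others.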

I hope Professor Feser now recognizes what the real point at issue is between Darwinism and Intelligent Design theory. I hope he also realizes that Intelligent Design is not committed to an anthropomorphic Deity, or to any particular Divine modus operandi. ID proponents are well aware of the distinction between primary and secondary causality; we just don’t think it’s very useful in addressing the problem of where the complex specified information in Nature came from. The problem here is not one of finding a primary (or secondary) cause that can generate the information, but rather one of finding an intelligent agent that can do so. Lastly, ID proponents do not think of God as a “tinkerer who cleverly intervenes in a natural order that could in principle have carried on without him,” for the simple reason that Intelligent Design is a scientific program concerned with the detection of patterns in Nature that are the result of intelligent agency, and not a metaphysical program concerned with the being of Nature as such. Metaphysical arguments that Nature depends for its being on God are all well and good, but they’re not scientific arguments as such. For this reason, these metaphysical arguments fall outside the province of Intelligent Design, although they are highly regarded by some ID proponents.

Is Variation Random?

Finally, I’d like to challenge the claim made by Professor Rosenberg and other Darwinists that biological variation is random. Stephen Talbott has skilfully dismantled this claim in a highly original article in The New Atlantis, entitled, Evolution and the Illusion of Randomness. Talbott takes aim at the oft-heard claim, popularized by Richard Dawkins and Daniel Dennett, that Nature operates with no purpose in mind, and that evolution is the outcome of random variation, culled by the non-random but mindless mechanism of natural selection. Talbott’s scientific arguments against Dawkins and Dennett are devastating, and he makes a convincing scientific case that mutation is anything but random in real life; that the genomes of organisms respond to environmental changes in a highly co-ordinated and purposeful fashion; and that even the most minimal definition of random variation – i.e. the commonly held view that the chance that a specific mutation will occur is not affected by how useful that mutation would be – crumbles upon inspection, as the whole concept of “usefulness” or “fitness” turns out to be irretrievably obscure. At the end of his article, Talbott summarizes his case:

Here, then, is what the advocates of evolutionary mindlessness and meaninglessness would have us overlook. We must overlook, first of all, the fact that organisms are masterful participants in, and revisers of, their own genomes, taking a leading position in the most intricate, subtle, and intentional genomic “dance” one could possibly imagine. And then we must overlook the way the organism responds intelligently, and in accord with its own purposes, to whatever it encounters in its environment, including the environment of its own body, and including what we may prefer to view as “accidents.” Then, too, we are asked to ignore not only the living, reproducing creatures whose intensely directed lives provide the only basis we have ever known for the dynamic processes of evolution, but also all the meaning of the larger environment in which these creatures participate — an environment compounded of all the infinitely complex ecological interactions that play out in significant balances, imbalances, competition, cooperation, symbioses, and all the rest, yielding the marvelously varied and interwoven living communities we find in savannah and rainforest, desert and meadow, stream and ocean, mountain and valley. And then, finally, we must be sure to pay no heed to the fact that the fitness, against which we have assumed our notion of randomness could be defined, is one of the most obscure, ill-formed concepts in all of science.

Overlooking all this, we are supposed to see — somewhere — blind, mindless, random, purposeless automatisms at the ultimate explanatory root of all genetic variation leading to evolutionary change….

This “something random” … is the central miracle in a gospel of meaninglessness, a “Randomness of the gaps,” demanding an extraordinarily blind faith. At the very least, we have a right to ask, “Can you be a little more explicit here?” A faith that fills the ever-shrinking gaps in our knowledge of the organism with a potent meaninglessness capable of transforming everything else into an illusion is a faith that could benefit from some minimal grounding. Otherwise, we can hardly avoid suspecting that the importance of randomness in the minds of the faithful is due to its being the only presumed scrap of a weapon in a compulsive struggle to deny all the obvious meaning of our lives.

My response to Rosenberg

I would like to briefly respond to Professor Rosenberg’s argument that belief in God is incompatible with Darwinism. He is right about one thing: not even God can use randomness to bring about highly specific results, without “injecting” the complex specified information that guarantees the production of the result in question. If you’re a thoroughgoing Darwinist who believes that evolutionary variation is inherently random and that Nature is a closed system, then there’s no way for God to do His work. However, on an empirical level, I see no reason to believe that evolutionary variation is inherently random: Talbott’s article, from which I quoted above, cites evidence that the effects of environmental change on an organism’s genome are highly co-ordinated by the organism itself. What’s more, recent scientific evidence that even the multiverse must have had a beginning, and that even the multiverse must have been exquisitely fine-tuned, points very strongly to the fact that Nature is not a closed system. (See my article, Vilenkin’s verdict: “All the evidence we have says that the universe had a beginning”, which also contains links to my recent posts on cosmological fine-tuning.) And of course, Professor Feser has done an excellent job of expounding the metaphysical arguments showing that Nature is not self-sufficient, but requires a Cause.

Comments
Petrushka:
Despite claims to the contrary, the only process known for designing proteins and for discovering their folds is cumulative selection. In chemistry or in simulations, it’s cut and try.
The one BIG problem with that is that cumulative selection has never constructed a protein from scratch, i.e. in the absence of any other protein(s).
Joe
February 4, 2012, 05:52 AM PDT
You have, a priori, dismissed CSI for my beach because you happen to know (as I do) what caused the pattern.
CSI doesn't even apply in the beach scenario.
But you cannot insert a “cause of pattern” term into the CSI calculation because the whole point of the CSI calculation is to determine whether the cause of the pattern is Chance, Necessity or Design.
Specified complexity, not CSI. You don't use CSI for a beach, Lizzie.
Joe
February 4, 2012, 05:49 AM PDT
Here is your error, kf:
The size-based sorting along the length of the beach is a case of mechanical ordering, and is of low contingency, so low information storing potential. That is one aspect.
You have, a priori, dismissed CSI for my beach because you happen to know (as I do) what caused the pattern. But you cannot insert a "cause of pattern" term into the CSI calculation because the whole point of the CSI calculation is to determine whether the cause of the pattern is Chance, Necessity or Design. So you are assuming your consequent.
Elizabeth Liddle
February 4, 2012, 04:14 AM PDT
KF: “The aspect you presumably had in mind was the grading of pebble size from one end to the other. This is accounted for on chance plus necessity.”
Liz: “Well I don’t think it is”
Science: “Lateral grading of beach sediments can be achieved by down drift and/or long shore drifting.” http://www.jstor.org/pss/4298523
It appears the pebble gradients are accountable by chance and necessity. And the chance-necessity mechanism for the origin of the genetic code is___
I wasn't clear. What I meant was that kf cannot dismiss the beach's CSI because it is discounted by a "Chance and Necessity" explanation. Dembski's (and, as far as I can understand it, kf's) formula for calculating CSI does not have a term that says: a pattern only has CSI if it is not caused by Chance or Necessity. That would be entirely circular. Dembski's Fisherian test is that if the CSI of a pattern is beyond a certain value, then we can reject chance or necessity and conclude design. So kf is not being logical if he is arguing that, well, no, the beach doesn't have CSI because it's created by Chance and Necessity. The point of CSI is that it is supposed to tell us whether an object was created by Chance or Necessity, and, if not, by Design. To calculate the CSI by including a known cause in the calculation would render it useless. Not that I accept the Chance vs Necessity dichotomy. That's another fundamental problem with Dembski's argument, but perhaps it's better to leave that for another time, seeing as in his 2006 formulation he dropped the distinction.
Elizabeth Liddle
February 4, 2012, 04:12 AM PDT
P: The question of isolation is to be addressed on the known challenge of isolation of protein fold domains, the known general pattern of functional specificity to achieve a given capacity, cases like the ATP synthase enzyme, which is a nanotech motor, and the like. At gross scale, we can look at cases like the origin of the avian type one-way flow lung from the bellows type lung. All these have long since been pointed out, and ducked or brushed aside rhetorically. Ideological lock-in of Lewontinian a priori evolutionary materialism and/or its fellow travellers seems the most credible explanation. KF
kairosfocus
February 3, 2012, 11:52 PM PDT
Dr Liddle, I just saw you jumping from one aspect to another, i.e. you are back to the size sorting aspect, in a context where it is explicit -- cf. the EF diagrams you have been referred to over and over again over the past nigh on a year -- that the analysis is on a per-aspect basis. The complexity and specificity must address the SAME aspect of the object, process or phenomenon in question, or you are just confusing yourself. The whole point of scientific analysis is that we isolate relevant aspects of phenomena and address them analytically, and from that we build up an overall picture cumulatively. With a pendulum, we lock ourselves to a given length, a given swing arc, a given bob-mass, and see how we get a given period. Then, systematically, we vary, and see the effects, and notice the mechanically necessary aspects that show up as low contingency, once we set up a given set-up. Then we see how we have scatter around the ideal model, attributable to various random chance effects. Then also, we may see that there is a personal-equation aspect, reflecting the experimenter's own particular patterns of behaviour. And, we may have someone cooking results (Galileo did; he should have seen enough to note that the arc length does affect period, which is why in school labs we tell students not to swing more than about 6 degrees). Please, this is not dubious stuff, to be resisted all along. You will end up in inconsistent standards of warrant very fast if you do that. Similarly, you will confuse yourself if you refuse to recognise the need to look per aspect. And, notice how I have used classic paradigm examples, in the implicit context that a lot of science works by paradigm examples and family resemblance, perhaps with elaborations on a case by case basis. The size-based sorting along the length of the beach is a case of mechanical ordering, and is of low contingency, so low information-storing potential. That is one aspect.
I addressed this, and you jumped to another aspect, as though there was not an isolation at work. I then noted that the precise pattern of location of pebbles in a vicinity (or even overall) and their alignment in space and relative to each other in contact is a different aspect. That has a lot of randomness in it, driving complexity and high contingency. This aspect is highly contingent but resists the sort of simple, compressed description that would allow you to give the particular pattern. And, the patterns are non-specific: within a broad range, there is no difference between the effects of configs. E.g., whether a given flat and oval shaped pebble, no. xyzabc, is face up or face down makes but little difference to the functionality of a pile of pebbles shaped by waves. But that ability to be face up/down DOUBLES the number of possibilities for the beach as a whole. Let us say it has eight possible orientations; that gives us three more bits, i.e. we now have FOUR doublings of the number of possibilities for the beach as a whole, and yet whether the pebble is in the horizontal space, in positions 0 to 7, and face up or down, makes little or no difference to the behaviour of the beach as a whole or in the pebble's vicinity. (The whole observed universe put to work could not sort through all the possibilities of Chesil beach's pebbles from birth to heat death. So, the arrangement possibilities like this are complex indeed, but they are not specific, in the senses that (a) the sort of variability in question has no significant effect on any reasonably defined function, and (b) there is no way to compress and tighten the description short of essentially enumerating the locations and orientations of all pebbles. That is, turn the beach into a pseudo-lottery.
If you give any particular set of values from the huge config space, obviously it will be very unlikely that that particular config will turn up, but the thing here is that this is in effect a specification of a one-state zone T, and it will not be simply describable.) The lottery game tactic -- as long since pointed out, and it seems brushed aside -- is a red herring. I suggest you re-read what Orgel and Wicken have to say, long before Dembski came on the scene; they are in the IOSE intro page that you have been so quick to dismiss. Maybe you read, but not with intent to understand. This reminds me of trying to work through the long division algorithm with a young child who is full of objections and distractions. Remember, Dembski's model and metric were designed in light of the issues highlighted by these men and the like. By being informational, it confines us to high-contingency aspects, which are the only ones that can store lots of info, 500+ bits being in view, so W is beyond reasonable search resources. By looking at specificity of outcomes that are simply and independently describable, i.e. that come from a special zone T in W, it locks out lotteries. The issue then becomes, with chance and necessity on one side and intelligence on the other, what best explains being in T from W? The search challenge makes C + N a most implausible explanation, for long since easily understandable reasons. Intelligence is routinely seen as allowing us to achieve this sort of outcome, as close to hand as the text of posts in this thread. Can you at least acknowledge this empirical fact, of how intelligence routinely tosses up cases E in zones T in fields of possibilities W? And how hard it is for a blind chance and necessity alternative to land you in such a zone T? What we have been discussing all along is an issue that pivots on high/low contingency AND on specificity in the sense just again pointed out.
Functionality, especially that pivoting on digital code strings, is simply the easiest to directly see, and enables understanding the other cases that are reducible to it, once we see how we can represent a nodes-arcs topology as a structured set of digital strings. And, yes, GP's dFSCI case is WLOG, once we genuinely are dealing with being in a narrow, separately and simply describable zone T from a W. Of course, all sorts of objections and obfuscations can be tossed out to cloud the issue and distract attention from the pivotal issue on the table. We see that going on all the time. That does not change the simple fact that it is all about complex, specific information, and especially functionally specific complex information and linked organisation on a Wicken wiring diagram. Yes, we can have difficulties, we can have hard cases, we can have puzzles and limitations, but the CSI concept and its derivatives are about something real and important. It is the consistent refusal to acknowledge this that is so telling about what has gone wrong with the institutionally dominant evolutionary materialist paradigm in science. And, in the context that counts, functional objects that are dependent on a narrow set of configs, T, in a W, the log-reduced chi metric expression allows us to focus the issues: Chi_500 = Ip*S - 500, bits beyond . . . Is the matter informational/highly contingent, or is it a matter of a law of necessity? Ip settles that. (BTW, that is in some respects a postponement of the design inference issue, as the cluster of laws of physics seems also to be contingent and functionally specific in their organisation, i.e. the issue then is pushed back to the design of the observed cosmos. But, that is a reasonable division of work.) Is the matter functionally specific? Does it come from a narrow zone T in W, such that wandering outside of T by injecting random noise or some other reasonable and observable test will lead to breakdown of function?
Is it hard to find the next zone of function once we have lost function? S accounts for these questions. If we have Ip exceeding 500 and S being 1, we then have cases where the outcome is best explained on design. There are billions of test cases where we can separately know, and the inference is reliable. So there is an epistemic right to use it on inductive reliability. To counter this, we need credible counter-examples, which of course have not been forthcoming; hence the sort of rhetoric-heavy, selectively hyperskeptical objections we keep on seeing. Please, let us not just go in circles of recycled, long since cogently answered -- but evidently brushed aside -- objections. That would point to closed-minded, ideologically driven a priori commitment (specifically to evolutionary materialism a la Lewontin) in the face of empirical evidence and linked reasonable analysis. GEM of TKI
kairosfocus
February 3, 2012 at 11:41 PM PDT
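The threshold logic of the Chi_500 expression in the comment above can be sketched in a few lines; the function name and example values below are illustrative only, not from any published implementation of the metric.

```python
def chi_500(info_bits: float, specific: bool) -> float:
    """Log-reduced chi metric as stated above: Chi_500 = Ip*S - 500.

    info_bits (Ip): information-carrying capacity of the aspect, in bits.
    specific (S): True if the outcome falls in a narrow, independently
    describable zone T of the configuration space W (S = 1), else S = 0.
    A positive result is read as "bits beyond the threshold".
    """
    s = 1 if specific else 0
    return info_bits * s - 500

# A functionally specific 1000-bit configuration crosses the threshold:
print(chi_500(1000, True))    # 500.0
# The same capacity with no specificity never does, however complex:
print(chi_500(1000, False))   # -500.0
```

On this model, the pebble beach scores S = 0 (complex but unspecific), so Chi_500 stays negative no matter how many bits of contingency the beach carries.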
Scott, You're digging the hole deeper:
If you specify a sort order using a random number generator and match each symbol with a value, then you are assigning that meaning to the symbol.
I've already shown that you don't have to match each symbol with a value in order to do a sort, and that the program doesn't even need to know that there are symbols involved. In fact, there don't even have to be symbols at all. A computer can sort abstract patterns that don't symbolize anything. (And please don't come up with some absurd rationalization like "a pattern always symbolizes itself". A meaningless abstract pattern doesn't symbolize anything; it just is.) I wrote:
‘Flying in front of the fire station’ is assuredly not part of the flag’s meaning, however.
You replied:
Unless, that is, it is determined that flying a flag in front of the fire station does have meaning and it is done to convey that meaning.
No, because if flying the flag in front of the fire station symbolizes something different from what the flag by itself symbolizes, then it is a different symbol.
If you see the flag flying at half mast, does that mean anything to you? It means something to the person who flew it at half mast, and they expect that at least some people will recognize that meaning.
Of course. People use symbols to convey meaning. But a flag flying at half mast is a different symbol from the flag itself. A flag flying at half mast symbolizes a significant death or a tragedy. The flag itself does not symbolize either of these things.
Similarly, the operation of natural laws upon the molecules of one’s brain does not explain why one person writes a novel and another whistles Beethoven in the men’s room.
Why not? You've agreed that the brain operates according to physical law without supernatural intervention:
I agree that all of it operates within natural law and that none of it violates any laws of physics. Otherwise I would have to think that something bizarre and supernatural occurs every time I imagine a shopping list, write it down, and then go to the store and retrieve the physical items corresponding to my abstraction.
If making a shopping list just involves a succession of physical brain states evolving according to the laws of physics, why do you believe that the same thing can't be happening when someone writes a novel or whistles Beethoven?
I admit error all the time, even when I’m right. I’m married.
When you think you're right. I suspect your wife might agree with me on that. :-)
champignon
February 3, 2012 at 11:16 PM PDT
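champignon's point that a sort routine needs only an externally imposed ordering, never the meanings of the items, can be illustrated with a short sketch (the Blob class below is a hypothetical stand-in for any meaningless abstract pattern):

```python
import random

class Blob:
    """An opaque pattern; the sort never interprets its contents."""
    def __init__(self, payload: bytes):
        self.payload = payload

def random_sort_order(blobs, seed=0):
    """Assign an arbitrary rank to each blob via a seeded shuffle,
    mirroring the 'sort order from a random number generator' example."""
    shuffled = list(blobs)
    random.Random(seed).shuffle(shuffled)
    return {id(b): rank for rank, b in enumerate(shuffled)}

blobs = [Blob(bytes([n])) for n in range(5)]
order = random_sort_order(blobs)

# sorted() consults only the rank; it never asks what a Blob "means".
result = sorted(blobs, key=lambda b: order[id(b)])
print([order[id(b)] for b in result])   # [0, 1, 2, 3, 4]
```

Whether the ranks came from a random number generator or from a dictionary of assigned meanings makes no difference to the algorithm, which is the point at issue in the exchange.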
Scott, You posted your response in the wrong place, so I'm reproducing it here:
Champignon,
I can generate a sort order for a set of symbols using a random number generator. According to you, the output of my random number generator is therefore part of the meaning of the symbols. Does that not strike you as absurd?
It’s not absurd. If you specify a sort order using a random number generator and match each symbol with a value, then you are assigning that meaning to the symbol. What part of that is confusing? It does not require that the sort order is the only meaning assigned to the symbol. I’m sorry if I’m not getting this whole ‘backed into a corner’ vibe. What I’m saying makes perfect sense.
‘Flying in front of the fire station’ is assuredly not part of the flag’s meaning, however.
Unless, that is, it is determined that flying a flag in front of the fire station does have meaning and it is done to convey that meaning. If you see the flag flying at half mast, does that mean anything to you? It means something to the person who flew it at half mast, and they expect that at least some people will recognize that meaning. The example you used to demonstrate what does not have meaning is actually a real example of something that literally can and does have meaning. Perhaps you should have thought about that longer. Although I’ve clarified the wording of my initial statement for the benefit of the one person challenged by it, I think that nearly anyone who read it understood it the first time.
And can we finally move on to this point? … Relating symbols to referents is something that brains do. You’ve told me that you agree that brains operate according to physical law. If so, then in what sense does physical law fail to explain the mapping?
I’m happy to move on to your next question, as it implicitly concedes the point you’ve wasted so many posts quibbling over. You may find my answer to this question we’re ‘finally moving on to’ back at 12.1.1.2.30. Put about as simply as I can, physical laws do not explain everything that they permit or enable. A car enables a person to drive from A to B. If you park your car outside a store and it’s across the street when you come out, how will you explain that? Is ‘the car drove there’ an adequate explanation? It’s almost certainly true that the car drove there, and whoever drove it did so the same way anyone drives any car. But an explanation of how the car operates and how one drives it would not sufficiently explain how it got across the street. That is not the explanation you would want. Similarly, the operation of natural laws upon the molecules of one’s brain does not explain why one person writes a novel and another whistles Beethoven in the men’s room. And the operation of natural laws does not explain why some cultures selected the symbols “7” and “VII” to represent the number or quantity seven. They do not explain that there is an abstract concept of “seven” which can be associated with any symbol. If you disagree then
– use natural law to explain why the digits 0-9 represent numbers, or
– use natural law to explain any set of symbols, or
– offer even the vaguest hypothetical process by which a natural law might produce any set of hypothetical symbols.
Keep in mind that in each case the symbols are not just a reflection, an effect of what they represent, but are used both to create and to receive abstract information. You know, just like when someone imagines 52 cards, prints them on paper or represents them within a simulation, and someone else understands their meaning, even if only in the limited sense of knowing in which order to sort them. I admit error all the time, even when I’m right. I’m married.
But I won’t admit error when I’m right just because you ask me to.
champignon
February 3, 2012 at 6:46 PM PDT
KF: "The aspect you presumably had in mind was the grading of pebble size from one end to the other. This is accounted for on chance plus necessity." Liz: "Well I don't think it is" Science: "Lateral grading of beach sediments can be achieved by down drift and/or long shore drifting." http://www.jstor.org/pss/4298523 It appears the pebble gradients are accountable by chance and necessity. And the chance-necessity mechanism for the origin of the genetic code is ___
junkdnaforlife
February 3, 2012 at 6:21 PM PDT
If you wish to assert such connexion in the large, show us empirically, kindly. KF
The question of connectability will be decided by research like that being done by Thornton. The existence of alleles demonstrates that most coding sequences have functional and selectable variants within reach of a single mutation.
Petrushka
February 3, 2012 at 4:13 PM PDT
Thanks for your reply. I'd like to address it, but it might be helpful to have your response to the rest of my post first. If not, I'll try to respond in the morning. Cheers Lizzie
Elizabeth Liddle
February 3, 2012 at 3:50 PM PDT
Hi Elizabeth, Thanks for your post at 1.1.2.2.1. A few quick points in reply: (1) You write:
A “natural biasing factor” is exactly what Darwin’s “natural selection” is. And we know that it happens because we can actually observe it in real time. But you can’t do probability calculations on it. They won’t demonstrate the truth of Darwinian evolution any more than they will demonstrate the truth of ID. It’s just not the right methodology for this question.
I have several problems with this statement. First, we can indeed observe Darwinian evolution in real time, but it doesn't follow from that that given a few billion years, changes of the magnitude of, say, the Cambrian explosion are likely to occur, or even possible. That's an unwarranted extrapolation. We have good grounds for saying that evolution has occurred - e.g. our loss of the gene for producing vitamin C points to our kinship with the apes. But that tells us nothing of whether the process that led to the human body was a Darwinian one or not - or even whether it was a natural one or not. To demonstrate this, you really have to show it's feasible, or reasonably likely to occur, where "reasonably likely" is defined as "having a probability of 10^(-120) or more, over a time span of a few billion years" - which I think is a pretty reasonable request. Normally in science, whenever you're proposing a brand new mechanism for an object you're designing (let's say, a new car engine), you have to demonstrate its feasibility, and I think I've set the bar pretty low with my 10^(-120) probability hurdle. If you can't clear that hurdle, then you might just as well say that fairies were responsible for human evolution, as I argued in my post, The 10^(-120) challenge, or: The fairies at the bottom of the garden . Second, Darwinians are wont to say that the demand for a probability calculation is an unreasonable request, because the events described happened so long ago. But if we're going to establish how they happened, as opposed to whether they happened, then we do need to have a mechanism that's been shown to work on the scale hypothesized. (My legs are good enough to let me do a long-jump of perhaps four meters, but they certainly won't let me jump across the Grand Canyon - even in a billion years.) 
Third, Darwinians often balk at the distance between the two events whose probability pathway they are asked to calculate, and object that so many complicating factors could occur along the way that it's impossible to do any realistic number-crunching. I say that's baloney. Even when you give them all their desired ingredients and ask them to perform a relatively modest calculation for the probability of a structure evolving, they can't do it. Two cases in point: (a) To this day, Darwinians refuse to even attempt to calculate the probability of functional proteins evolving from a primordial soup filled with amino acids. They don't like the scenario of amino acids hooking up to form proteins, because they know perfectly well that it can't realistically happen in the time available, as only an infinitesimal fraction of amino acid sequences are in any way functional. So they suggest a backdoor route: RNA formed, and gave rise to proteins and DNA. Fine; what's the probability of a chain of RNA that's big enough to make a protein, forming from nucleotides? Silence. (b) Darwinians have not yet attempted to calculate the probability of a bacterial flagellum forming from its precursors - which is remarkable, as I understand they've proposed some fairly detailed scenarios. Why the reluctance to calculate? Why the reluctance to even set upper and lower probability bounds for the likelihood of the process occurring over a period of four billion years? I find their coyness very curious. Fourth, you object that we can't do probability calculations with ID either. That depends on what probability you want to calculate. If you are referring to the probability that the Designer would design DNA, or bacterial flagella, or any particular structure, then I agree. But if you are referring to the probability that the Designer could design DNA, or bacterial flagella, if He set His mind to it, then that's easy to calculate.
So long as the structure is one that teams of human scientists can produce in a laboratory, and so long as the Designer is super-human, then the answer is 1. (2) Referring to Dr. Stephen Meyer's objections to biochemical predestination (which were also voiced long before him, by biologists such as Professor Dean Kenyon), you write:
Unless he thinks that every time a cell divides, or any biochemical process takes place, the molecules are actually being pushed around by little intelligent angels or something. Clearly the biochemistry works, even if we don’t know exactly how. And when we find out, then we will be in a better position to understand the origin of DNA.
Well, we know that DNA replicates by natural means today, and we know that cells divide by processes not requiring intelligent guidance to make them work, but that doesn't tell us whether the first DNA double helix (or, for that matter, the first cell) formed by non-foresighted natural processes or not. All the scientific investigations we've done so far suggest that the probability of DNA forming is extremely remote. Please don't take my word for it: have a look at my recent post, The Big Picture: 56 minutes that may change your life , where you can hear chemistry Professor John C. Walton lecture on abiogenesis, or scroll down to view my summary of his lecture. (3) You raise a point of substance when you write:
Dembski has this backwards, by his own methodology. It was he who cast “natural” causes as the null, and insisted on Fisherian, not Bayesian, logic. And under his own chosen statistical method, it is up to him to show that the pattern in question could not be generated under that null. And I have just demonstrated that he cannot do this, and does not even attempt to. He regards “equiprobable” as the null, and natural non-linear stochastic processes (and many others) do not produce equiprobable outcomes. He can’t have it both ways. Either he models the null properly, or he must cast natural processes as H1.
I attempted to address the question of the null hypothesis in my posts, Why there’s no such thing as a CSI Scanner, or: Reasonable and Unreasonable Demands Relating to Complex Specified Information and Of little green men and CSI-lite, back in March/April 2011. (You might want to have a look at those, by the way.) In the first post, I wrote:
During the past couple of days, I've been struggling to formulate a good definition of "chance hypothesis", because for some people, "chance" means "totally random", while for others it means "not directed by an intelligent agent possessing foresight of long-term results", and hence "blind" (even if law-governed), as far as long-term results are concerned. Professor Dembski is quite clear in his essay that he means to include Darwinian processes (which are not totally random, because natural selection implies non-random death) under the umbrella of "chance hypotheses". So here's how I envisage it. A chance hypothesis describes a process which does not require the input of information, either at the beginning of the process or during the process itself, in order to generate its result (in this case, a complex system). On this definition, Darwinian processes would qualify as chance hypotheses, because they claim to be able to grow information, without the need for input from outside - whether by a front-loading or a tinkering Designer of life. CSI has already been calculated for some quite large real-life biological systems. In a post on the recent thread, On the calculation of CSI, I calculated the CSI in a bacterial flagellum, using a naive provisional estimate of the probability P(T|H). The numeric value of the CSI was calculated as being somewhere between 2126 and 3422. Since this is far in excess of 1, the cutoff point for a specification, I argued that the bacterial flagellum was very likely designed. Of course, a critic could fault the naive provisional estimate I used for the probability P(T|H). But my point was that the calculated CSI was so much greater than the minimum value needed to warrant a design inference that it was incumbent on the critic to provide an argument as to why the calculated CSI should be less than or equal to 1.
Let me stress that I claim no expertise whatsoever on bacterial flagella or the origin of life. All I'm saying is: you're welcome to suppose the existence of biasing factors in Nature that favor the production of these structures if you wish. And if you find them, they'll dramatically lower the CSI estimate for these structures in the process. Does that mean that CSI estimates are completely uninformative? Not at all; it just means that they're provisional and falsifiable - which is a good thing, not a bad thing, from a scientific perspective. (4) You also conflate the alleged improbability of producing a compressible structure (which may be overcome by natural biasing) with the improbability of naturally producing a code (which as far as I know has never been achieved naturally) or something that can perform a function (which is highly debatable, depending on how liberally you define "function"). You write:
Therefore, all we can say, if a pattern exhibits CSI is that it was not produced by a process in which all permutations of the pattern elements are equiprobable. And so it does not allow us to distinguish Intelligent Design from stochastic natural processes (or non-stochastic ones, actually – the EF did at least allow us to do that). But it gets worse. We know that non-linear natural processes (stochastic or non-stochastic) can produce vastly complex patterns that have all the appearance of design. All the mathematics of chaos tell us that. And what Darwin proposed was a non-linear natural process.
The fact that Chesil beach, whose order is highly compressible, formed naturally, does not tell us whether the DNA code can do so. Nor does it tell us whether new cell types can form naturally, or new biological structures capable of performing new functions. Sorting by size is not a particularly complicated thing for Nature to do. (5) Regarding Behe's alleged requirement of three simultaneous mutations: I think that's a mis-reading of what he says in The Edge of Evolution, just as "Hoyle's fallacy" is based on a mythical mis-reading of what he wrote. Bye for now.
vjtorley
February 3, 2012 at 3:36 PM PDT
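For comparison with the CSI figures quoted in the comment above, a Chi-style value can be computed in log space (a plain float would underflow at probabilities this small); the inputs here are illustrative numbers, not the flagellum estimates cited.

```python
import math

def chi(log2_p: float, phi_s: float) -> float:
    """Specified complexity in the style of Dembski's
    Chi = -log2(10^120 * phi_S(T) * P(T|H)), with the probability
    supplied as log2 P(T|H) to avoid floating-point underflow."""
    return -(120 * math.log2(10) + math.log2(phi_s) + log2_p)

# Hypothetical pattern: P(T|H) = 2^-2500, phi_S(T) = 10^20.
value = chi(-2500.0, 1e20)
print(round(value, 1))   # ~2034.9 bits, far above the cutoff of 1
```

Whether P(T|H) for a real biological structure can be estimated at all is, of course, exactly the point under dispute in this thread; the arithmetic itself is the easy part.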
Dr Liddle, We keep going in circles, pardon. Recall, the EF looks per aspect. The aspect you presumably had in mind was the grading of pebble size from one end to the other. This is accounted for on chance plus necessity.
Well, I don't think it is.
Now, you are looking at the pebbles piled and thus in a large number of possible configs. For this aspect, the problem is low specificity. Within the broad ambit of a beach profile, particle location and orientation on packing together is not very specific. And the function, if you want to call it that, is not seriously determined by being in a narrow zone in the space of configs. S = 0.
None of that makes sense to me, kairosfocus. There are certainly a very large number of possible configurations of pebbles. Of those, there is a tiny proportion that can be as simply described as "graded linearly in ascending size from west to east". So the zone is very narrowly defined, and the "configuration space" is vast.
On Dembski’s summary, there is low specificity. Complex but not particularly specific, as Orgel spoke about crystal grains in granite. In short, not organised. Randomness, not organisation, dominates the aspect.
No. On Dembski's definition, the specification is extraordinarily precise. The pebbles are so finely graded that a fisherman, landing at night, can tell where he is by the size of the pebbles. In other words, the arrangement is not "random" at all. It is a highly ordered grading.
And, all of this distinction has been on the table since 1973. You will note that the Shannon-type metric is a measure of the info-carrying capacity of a code-system, not a measure of specified complexity. The Chi Metric, Durston’s Fits metric and the modified Chi_500 metric all address specificity, thus changing the Shannon metric.
I am talking about Dembski's measure of specified complexity, CSI. I am not talking about those other things, if they are different.
This is done in various ways but the simplest to see is functional specificity, requiring complex, information-rich organisation. Especially, when expressed in a code, and again especially a string. Which last happens to be the case with DNA. Proteins are manufactured in an automated unit based on those coded instructions. Using the case of Chi_500: Chi_500 = Ip*S – 500, bits beyond the solar system threshold. As pointed out, if there is not a high contingency similar to bit values in a string, then we do not have high contingency. And indeed, this is WLOG, as a networked, composite object can be reduced to a nodes and arcs list, with parts enumerated and orientation coded for. This is routinely done with CAD software. Specificity can be tested by injecting random noise and observing effects. English text with maybe 1 – 3% noise will be annoying, but readable — about one to two errors per average word. Much beyond this and chaos [ think of scanned Gutenberg books, about 95% reliability is needed, and it is annoying at that level]; we have sophisticated processing, but there are limits. Wiring diagrams are a LOT less noise tolerant as a rule. Don’t even THINK about making random changes to high power circuits. Overall, I get the impression that very little of what design thinkers have been saying, or its context, has been understood. I suspect much of that is because things that are unexceptional have been treated with selective hyperskepticism, leading to incoherence on standards of warrant. For instance, if one is in a very special and isolated zone T in a config space W, such that the available resources would strongly lead to the expectation that a chance and necessity based blind walk would be maximally unlikely to land in T, the best explanation for being in T is design. Nothing in that should be too hard to understand for anyone made to do a treasure hunt game with no clues on whether one is warmer or colder, just hit or miss, blindly. 
Actually, too, being reduced to such incoherence on what is demanded to accept a finding is a strong indicator that some rethinking is needed on the part of objectors to the design inference. Please, re-read to understand, rather than to toss out seemingly clever objections that seem rather to pivot on want of thorough understanding. GEM of TKI
Are you actually reading my posts, kf?
Elizabeth Liddle
February 3, 2012 at 3:32 PM PDT
P: Still on strawman. Observe the system that makes the proteins and come back to us on how selection pressure on proteins drives DNA codes, several steps upstream. Intelligence routinely is able to find heuristics that break through seemingly impossible odds against chance-based random walks. And the strong evidence is that the space is NOT connected in the large, though it may be connected in the small; just think about the requisites of folding. As in islands, not continents, of function. If you wish to assert such connexion in the large, show us empirically, kindly. KF
kairosfocus
February 3, 2012 at 3:31 PM PDT
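The "islands of function" search challenge kairosfocus appeals to can be made concrete with a toy blind search; the space and zone sizes below are chosen only to show how quickly the odds collapse as the space grows, and are not drawn from any biological case.

```python
import random

def blind_search(space_bits: int, target: set, tries: int, seed: int = 0) -> int:
    """Sample a configuration space W of size 2**space_bits uniformly at
    random, counting hits on a narrow target zone T."""
    rng = random.Random(seed)
    return sum(rng.getrandbits(space_bits) in target for _ in range(tries))

T = set(range(1024))                    # a zone of 2^10 configurations

# Small space (2^20): T is 1 in 1024 of W, so blind hits are routine.
print(blind_search(20, T, 100_000))     # on the order of 100 hits

# Larger space (2^100): the same T is now 1 in 2^90 of W; no realistic
# number of blind tries will land in it.
print(blind_search(100, T, 100_000))    # 0
```

This only illustrates the asymmetry being claimed: a zone T that a modest blind search finds easily in a small W becomes effectively unreachable once W outruns the search resources. Whether biological function actually sits in such isolated zones is the contested empirical question, not something the sketch settles.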
Dr Liddle, We keep going in circles, pardon. Recall, the EF looks per aspect. The aspect you presumably had in mind was the grading of pebble size from one end to the other. This is accounted for on chance plus necessity. Now, you are looking at the pebbles piled and thus in a large number of possible configs. For this aspect, the problem is low specificity. Within the broad ambit of a beach profile, particle location and orientation on packing together is not very specific. And the function, if you want to call it that, is not seriously determined by being in a narrow zone in the space of configs. S = 0. On Dembski's summary, there is low specificity. Complex but not particularly specific, as Orgel spoke about crystal grains in granite. In short, not organised. Randomness, not organisation, dominates the aspect. And, all of this distinction has been on the table since 1973. You will note that the Shannon-type metric is a measure of the info-carrying capacity of a code-system, not a measure of specified complexity. The Chi Metric, Durston's Fits metric and the modified Chi_500 metric all address specificity, thus changing the Shannon metric. This is done in various ways, but the simplest to see is functional specificity, requiring complex, information-rich organisation. Especially when expressed in a code, and again especially a string. Which last happens to be the case with DNA. Proteins are manufactured in an automated unit based on those coded instructions. Using the case of Chi_500: Chi_500 = Ip*S - 500, bits beyond the solar system threshold. As pointed out, if there is not a high contingency similar to bit values in a string, then we do not have high contingency. And indeed, this is WLOG, as a networked, composite object can be reduced to a nodes and arcs list, with parts enumerated and orientation coded for. This is routinely done with CAD software. Specificity can be tested by injecting random noise and observing effects.
English text with maybe 1 - 3% noise will be annoying, but readable. Much beyond this and chaos [think of scanned Gutenberg books: about 95% reliability is needed, and it is annoying at that level]; we have sophisticated processing, but there are limits. Wiring diagrams are a LOT less noise-tolerant as a rule. Don't even THINK about making random changes to high power circuits. Overall, I get the impression that very little of what design thinkers have been saying, or its context, has been understood. I suspect much of that is because things that are unexceptional have been treated with selective hyperskepticism, leading to incoherence on standards of warrant. For instance, if one is in a very special and isolated zone T in a config space W, such that the available resources would strongly lead to the expectation that a chance and necessity based blind walk would be maximally unlikely to land in T, the best explanation for being in T is design. Nothing in that should be too hard to understand for anyone made to do a treasure hunt game with no clues on whether one is warmer or colder, just hit or miss, blindly. Actually, too, being reduced to such incoherence on what is demanded to accept a finding is a strong indicator that some rethinking is needed on the part of objectors to the design inference. Please, re-read to understand, rather than to toss out seemingly clever objections that seem rather to pivot on want of thorough understanding. GEM of TKI
kairosfocus
February 3, 2012 at 3:23 PM PDT
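The claim above, that functional specificity can be probed by injecting random noise, is easy to try on text. This is a minimal sketch with a deliberately crude corruption model (each character independently replaced with probability `rate`).

```python
import random
import string

def inject_noise(text: str, rate: float, seed: int = 0) -> str:
    """Replace each character with a random lowercase letter with
    probability `rate`, simulating transmission noise."""
    rng = random.Random(seed)
    return "".join(
        rng.choice(string.ascii_lowercase) if rng.random() < rate else ch
        for ch in text
    )

sentence = "functionally specific text tolerates only a little random noise"
print(inject_noise(sentence, 0.02))   # a few percent: still readable
print(inject_noise(sentence, 0.40))   # heavy noise: readability breaks down
```

Wiring diagrams, as the comment notes, would need a much harsher model, since a single changed component can be fatal there; text is unusually noise-tolerant among functionally specific strings.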
In that case my beach has high contingency. Even if we assume that the grading just needs to be to the nearest millimeter, that gives us about 100 "characters" between a 10 mm pebble and a 110 mm pebble. Let's say we have a bag of those pebbles and we draw a line of 500 of them. There are 100^500 possible arrangements of pebble sizes on that line. Of those, 100 arrangements are very simple - all the pebbles are the same size. Of linearly graded arrangements, we have from 2 to 100 pebble sizes in equal gradations, so 99. If we allow for non-linear gradings we have rather more (I'll have to work it out), but the total will clearly come to a tiny fraction of 100^500. And that's for a line of 500 pebbles. I'm talking about a beach of 100 billion pebbles.
And, after many, many months of back-forth on these very points. That suggests to me a presumption on your part that I and others did not know what we are talking about at all in addressing very basic points. It frankly suggests reading to dismiss rather than first to understand.
Well, no. It suggests that I understood exactly what you meant, but rather assumed there might be something more to it than that. It looks as though your CSI is exactly the same as Dembski's and fails for exactly the same reason: it cannot distinguish between a long series of items from an alphabet (high Shannon information, or, in your words, "high contingency") that has been generated by a simple algorithm (and is therefore, by definition, highly compressible), and one generated by intelligence.
Elizabeth Liddle
February 3, 2012 at 12:01 PM PDT
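Dr Liddle's count for the pebble line can be checked in log space (Python could also hold 100^500 as an exact integer, but logarithms keep the figures readable); her figure of 99 linear gradings is taken as given.

```python
import math

SIZES = 100     # about 100 distinct sizes, 10 mm to 110 mm in 1 mm steps
LENGTH = 500    # pebbles in the line

log2_total = LENGTH * math.log2(SIZES)     # bits of contingency
log10_total = LENGTH * math.log10(SIZES)   # 100^500 = 10^1000
simple = SIZES + 99                         # constant lines + linear gradings

print(round(log2_total))                                             # 3322
print(f"simple arrangements: ~1 in 10^{log10_total - math.log10(simple):.0f}")
```

The open question in the exchange is not this arithmetic, which both sides accept, but whether "graded by size" counts as a specification once a known sorting mechanism (wave action) is on the table.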
Dr Liddle: Pardon, but could you kindly read here, the very first ID Foundations post? (I refer you there to see that this is not anything novel; the very reason why we distinguish mechanical necessity leading to lawlike regularity of outcomes is that this is manifested in low contingency of outcomes. When we have high contingency of outcomes for an aspect of an object or process, under similar initial conditions we do not have similar outcomes. Nor is the case of sensitive dependence on initial conditions an objection: it is precisely because we are unable to get the similarity that this yields large differences across time in a dynamic system. That is, it is because of small or even imperceptible differences in initial conditions that are essentially chance driven that we get the much amplified variability of outcomes. Indeed I have argued -- I believe to you -- that this accounts for how a die behaves.) For an object to have information-bearing potential, it has to have high contingency. For instance, a string of ASCII characters, c1-c2-c3 . . . cn, has 128 possibilities per position. Only a very few of these, relatively speaking, will be an informational post in English responsive to this thread. So, when Dr Dembski spoke of something as informational, this long since implied high contingency. Complexity then points to a sufficiently large set of possibilities that it would be maximally unlikely and implausible to hit on E's from separately and "simply" describable -- the hint at K-complexity is intended -- zones T within W, where T is much smaller than W. What I find here is a conceptual gap that is revealing. And, after many, many months of back-forth on these very points. That suggests to me a presumption on your part that I and others did not know what we are talking about at all in addressing very basic points. It frankly suggests reading to dismiss rather than first to understand.
Sorry if that sounds harsh, but this feels a lot like you have wasted a lot of our time. Let's just say that the state space concept and the related phase space concept are basic to a lot of physics and engineering disciplines. In fact, W is a stand-in for Omega, used by Boltzmann, as in S = k*log W. As Section A and Appendix 1 of my always linked note discuss, that then leads to cross-connexions between entropy, information and information-carrying capacity. Indeed, maybe we are looking at one reason why engineering and physics types tend to "see" the point of the design inference relatively easily. Do you now see why I have repeatedly talked about islands of function in large seas of non-functional configurations? The issue then is that the nodes of the EF identify, first, whether there is or is not high contingency, and then what is the best explanation for a given, highly contingent outcome. And, quite some months ago, we had an argument along just these lines, in a context where you misperceived the EF. At that point I highlighted essentially what I now am saying, and it is clear that you rejected it. So, it resurfaces. No, I am not idiosyncratically making up dubious ideas (your latest talking point, which takes on some peculiar colour here); we are here close to the holy grail that makes a lot of the physics of the small work. And, for that matter, this is close to ideas about samples and populations in statistics, INCLUDING the underlying rationale for Fisherian inference testing. Namely, that an at-random sample in a given set of opportunities to sample is most likely going to come from the bulk of a distribution, not special zones in its far skirts. But, if you have not been able to appreciate that the issue of high contingency is central to information-bearing potential or capacity, then you have not grasped the core ideas that lie underneath the design inference. In short, your objections here are directed not at the real matter but at a misimpression of it.
I suggest that you may find it helpful to read the IOSE introduction page. GEM of TKI
kairosfocus
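The arithmetic behind the ASCII example above is easy to check (this sketches only the capacity calculation, not the design inference itself):

```python
import math

# Each ASCII position can take 128 values, so it carries log2(128) = 7 bits
# of information-carrying capacity.
bits_per_char = math.log2(128)

# The 500-bit threshold discussed in the thread is crossed by quite short strings:
chars_for_500_bits = math.ceil(500 / bits_per_char)

print(bits_per_char)        # 7.0
print(chars_for_500_bits)   # 72
print(73 * 7)               # 511 bits in the 73-character example used later in the thread
```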
February 3, 2012 at 11:40 AM PDT
I've discussed this elsewhere, but got no response. The problem with calling DNA a code is that it gets confused with language. Language is composed of words and syntax. A dictionary of words has fewer entries than a dictionary of all the possible sentences and paragraphs. We really don't know if there is anything comparable in the genetic code. The closest thing seems to be protein domains. But they are already rather long. Using your argument, they are too long to have assembled purely by chance. If I make a typo in posting, most people can still read and understand what I say. An editor can usually correct it and recreate the intended sequence. Cells also have editors, but when errors get past them and are passed on to the next generation, there is no way to recreate the intended sequence. Unlike paragraphs and sentences, there is no dictionary of subunits that partially convey the meaning. No way to tell which of hundreds or thousands of base pairs has been altered. It is the lack of subunits that makes both design and evolution difficult. The dictionary of functional units has the same number of entries as the dictionary of minimum-length words. Every possible utterance in the genetic code has its own entry. Unless it is possible to build functional units incrementally and cumulatively. Unless functional space is connected.
Petrushka
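Petrushka's "dictionary of subunits" point can be made concrete at the one layer where the genetic code is well mapped: the codon-to-amino-acid table. A partial sketch (standard code, RNA alphabet; the table below is deliberately incomplete) showing how degeneracy makes some single-base "typos" silent:

```python
# A partial sketch of the standard genetic code: several codons ("words")
# map to the same amino acid, so some single-base typos are synonymous.
codon_table = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # fourfold degenerate
    "GAU": "Asp", "GAC": "Asp",
    "UGG": "Trp",                                            # a unique codon
}

def point_mutations(codon, alphabet="ACGU"):
    """All codons one base change away from the given codon."""
    for i in range(3):
        for b in alphabet:
            if b != codon[i]:
                yield codon[:i] + b + codon[i + 1:]

# A third-position typo in GGU is silent; a change in UGG (in the full code) never is.
silent = [m for m in point_mutations("GGU") if codon_table.get(m) == "Gly"]
print(silent)  # ['GGA', 'GGC', 'GGG']
```

Above the codon level, as Petrushka says, no comparable dictionary of partially meaningful subunits is known.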
February 3, 2012 at 11:35 AM PDT
I've read several recent papers on predicting protein folding and see nothing that indicates that protein design can be simulated or modeled to the degree necessary to do biological design without doing actual chemistry. To the extent that we can model protein folding, it requires annealing algorithms or genetic algorithms, which are evolutionary. Gpuccio suggests intelligent selection. It's still a variety of evolution, and more importantly, it depends on the connectedness of the search space. The connectedness of functional space is the keystone issue. If function can be connected, there is no need to posit intervention. What you and GP have is an updated gaps argument. You have transferred the argument from bones to sequences, but it's the same argument: no intermediate sequences. I will grant that it's an interesting argument. Everyone admits it. Paleontologists have spent centuries addressing fossil gaps. The desire to fill the gaps is an obsession of mainstream biology. It's not being ignored. And it's not being ignored in the arena of molecular biology. But our technology for investigating genomes is about at the level of Leeuwenhoek's microscope. My opinion is worth what you paid for it, but if it's of any interest to you, I accept your challenge at face value and look forward to the future of research. We shall see as time goes by whether connectedness is supported by evidence.
Petrushka
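Why connectedness is the keystone can be shown with a toy sketch (entirely hypothetical fitness functions, with binary strings standing in for sequences): the same cumulative-selection climber succeeds on a connected landscape and stalls on an isolated, all-or-nothing one.

```python
import random

def hill_climb(start, fitness, neighbors, steps=1000, seed=0):
    """Cumulative selection: accept a one-step variant only if it improves fitness."""
    rng = random.Random(seed)
    current = start
    for _ in range(steps):
        candidate = rng.choice(neighbors(current))
        if fitness(candidate) > fitness(current):
            current = candidate
    return current

TARGET = "1111111111"

def neighbors(s):
    """All strings one bit-flip away."""
    return [s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:] for i in range(len(s))]

def connected_fitness(s):
    # Every additional matching bit helps: functional intermediates exist.
    return sum(a == b for a, b in zip(s, TARGET))

def isolated_fitness(s):
    # All-or-nothing: no intermediate is any fitter than the start.
    return 1 if s == TARGET else 0

print(hill_climb("0000000000", connected_fitness, neighbors))  # reaches 1111111111
print(hill_climb("0000000000", isolated_fitness, neighbors))   # never moves
```

The climber is the same in both runs; only the shape of the landscape decides the outcome, which is the empirical question Petrushka points to.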
February 3, 2012 at 11:17 AM PDT
Champignon,
I can generate a sort order for a set of symbols using a random number generator. According to you, the output of my random number generator is therefore part of the meaning of the symbols. Does that not strike you as absurd?
It's not absurd. If you specify a sort order using a random number generator and match each symbol with a value, then you are assigning that meaning to the symbol. What part of that is confusing? It does not require that the sort order is the only meaning assigned to the symbol. I'm sorry if I'm not getting this whole 'backed into a corner' vibe. What I'm saying makes perfect sense.
‘Flying in front of the fire station’ is assuredly not part of the flag’s meaning, however.
Unless, that is, it is determined that flying a flag in front of the fire station does have meaning and it is done to convey that meaning. If you see the flag flying at half mast, does that mean anything to you? It means something to the person who flew it at half mast, and they expect that at least some people will recognize that meaning. The example you used to demonstrate what does not have meaning is actually a real example of something that literally can and does have meaning. Perhaps you should have thought about that longer. Although I've clarified the wording of my initial statement for the benefit of the one person challenged by it, I think that nearly anyone who read it understood it the first time.
And can we finally move on to this point? ... Relating symbols to referents is something that brains do. You’ve told me that you agree that brains operate according to physical law. If so, then in what sense does physical law fail to explain the mapping?
I'm happy to move on to your next question, as it implicitly concedes the point you've wasted so many posts quibbling over. You may find my answer to this question we're 'finally moving on to' back at 12.1.1.2.30. Put about as simply as I can: physical laws do not explain everything that they permit or enable. A car enables a person to drive from A to B. If you park your car outside a store and it's across the street when you come out, how will you explain that? Is 'the car drove there' an adequate explanation? It's almost certainly true that the car drove there, and whoever drove it did so the same way anyone drives any car. But an explanation of how the car operates and how one drives it would not sufficiently explain how it got across the street. That is not the explanation you would want. Similarly, the operation of natural laws upon the molecules of one's brain does not explain why one person writes a novel and another whistles Beethoven in the men's room. And the operation of natural laws does not explain why some cultures selected the symbols "7" and "VII" to represent the number or quantity seven. They do not explain that there is an abstract concept of "seven" which can be associated with any symbol. If you disagree, then: use natural law to explain why the digits 0-9 represent numbers; or use natural law to explain any set of symbols; or offer even the vaguest hypothetical process by which a natural law might produce any set of hypothetical symbols. Keep in mind that in each case the symbols are not just a reflection, an effect, of what they represent, but are used both to create and to receive abstract information. You know, just like when someone imagines 52 cards, prints them on paper or represents them within a simulation, and someone else understands their meaning, even if only in the limited sense of knowing in which order to sort them. I admit error all the time, even when I'm right. I'm married.
But I won't admit error when I'm right just because you ask me to.
ScottAndrews2
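The disputed claim, that a sort order assigned by a random number generator is information attributed to the symbols rather than derived from them, can be stated neutrally in code (a sketch of the example both sides are using, with hypothetical symbols):

```python
import random

# Four symbols with no intrinsic order:
symbols = ["spades", "hearts", "diamonds", "clubs"]

# Assign a collation order with a random number generator. The ranks are
# arbitrary, but once assigned they are information attributed to the
# symbols, not derived from any physical property of them.
rng = random.Random(42)
rank = {s: rng.random() for s in symbols}

sorted_symbols = sorted(symbols, key=rank.get)

# Any sorter that reproduces this order must be supplied the rank table;
# nothing about the strings themselves implies it.
print(sorted_symbols)
```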
February 3, 2012 at 11:06 AM PDT
kf, first of all, I was not critiquing your version of CSI, but Dembski's, and showing that as defined in "Specification: the Pattern that Signifies Intelligence" CSI doesn't work, because it doesn't rule out something like Chesil Beach. But let's take your own argument:
For INFORMATION-carrying capacity to exist, there has to be high contingency, which is then manifested in a way that is locked down to a narrow and specific zone in the field of possibilities, all of that stuff about CASES E IN SPECIFIC ZONES T IN THE WIDER FIELD OF POSSIBILITIES W, as can be seen in NFL by a certain Wm A D. That is crucial.
Well, define what you mean by "high contingency".
In short, until we arrive at high contingency, an entity is not in the zone where informational character is at stake.
Well, I need your operational definition of "high contingency".
Only highly contingent objects storing 500 or more bits of information capacity need apply, and it is in that context that CSI THEN HELPS US DECIDE IF THE OUTCOME OBSERVED IN SUCH A HIGHLY CONTINGENT CASE IS BEST EXPLAINED ON CHANCE OR DESIGN.
So it seems you have added another item to the filter, namely "high contingency". I really do need this definition.
The way that is done is per inference to best explanation, given the balance of clusters of possible outcomes in the space of possible configs. 500 coins in no particular order, chance. 500 coins spelling out the first 73 ASCII characters of this post, design. The pebbles on Chesil beach under the circumstances precisely do not have that high contingency. They are thus not informational.
There are, I estimate, about 100 billion pebbles on Chesil beach. Of the possible ways of arranging those pebbles (let's say to the nearest millimeter, with the pebbles ranging from 10mm diameter to 110mm), a tiny fraction will be graded in size from 10mm at the west end to 110mm at the east. That is a very simply described pattern, and I suggest that the only other pattern that is equally simply described is: graded from 10mm at the east end to 110mm at the west. So we have a highly specified pattern, and a beach with high Shannon information. It fulfills Dembski's requirements for CSI. The pattern is also informative in two senses: one is that "semiotic agents", to use Dembski's term, can orient themselves by it. The second is that the beach itself is constantly renewed, and the sizes of the pebbles already on the beach at least partly determine the sizes of the pebbles that will be deposited there by the next tide. Tell me why it does not have "high contingency". Note that I do not argue that the beach was designed, or even that this post wasn't. I simply dispute that the CSI metric can distinguish between the two on the evidence of the pattern alone. I think it produces both false positives and false negatives. But I await your definition of "high contingency". Cheers Lizzie
Elizabeth Liddle
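The graded-beach pattern can be mimicked in miniature (my own toy model, not a claim about the real beach): the same multiset of pebble sizes becomes far more simply describable, proxied here by compressed length, once a mechanical process has graded it.

```python
import random
import zlib

# The same 10,000 pebbles, sizes 10-110 mm, in two arrangements:
rng = random.Random(1)
shuffled = [rng.randint(10, 110) for _ in range(10_000)]  # tide-tumbled order
graded = sorted(shuffled)                                  # graded along the beach

def description_length(sizes):
    """Compressed length in bytes: a rough proxy for how simply the
    arrangement can be described."""
    return len(zlib.compress(bytes(sizes), 9))

print(description_length(shuffled))  # large: no short description exists
print(description_length(graded))    # small: "graded small to large" says it all
```

A purely mechanical sort produced the compressible, simply specified arrangement, which is the crux of the dispute over whether the pattern alone can signal design.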
February 3, 2012 at 10:44 AM PDT
Pet: we have very little experience with designing proteins [I suspect, across this century that will change], but a lot more in designing automated manufacturing and control tapes for such that use digital codes. Those highly specific, functional and complex codes that drive the primary form of AA sequences that fold and form proteins that work in life, as assembled in the ribosome, point rather strongly to design. Please answer the case in front of us, not the easily set up and knocked over strawman. KF
kairosfocus
February 3, 2012 at 10:22 AM PDT
Elizabeth: Regarding the conversation higher up on the thread, if I understand you correctly, you are saying that you are not a full-time faculty member at the institution with which you are associated. If that is the case, then obviously you do not have the contractual or moral obligations of such a faculty member, and I apologize for accusing you of shirking those obligations. (But if this is the case, you could have saved me embarrassment and yourself further irritating criticism by clarifying this when I first made this accusation a while back. I would have apologized right away.)
I'm actually not going to answer this, Timaeus. If you really want to know more about me I'll leave it to your google fu. I have told you no lies. As far as I am concerned, my credentials are irrelevant to my arguments. They must stand or fall on their merits.
One more pass at the Abbie Smith affair: 1. Of course an error is an error, regardless of what else is in a book. That’s true in the arts as well as in the natural sciences. But if Abbie Smith detected an error in Behe’s account of viruses, that does not make her reply “devastating.” The word “devastating” in these polemical contexts generally means “destroys the author’s thesis.” But one error in one small section of a book does not destroy the author’s thesis. At most her criticism was “devastating” to some remarks Behe made about viruses. But his thesis didn’t depend on what he said about viruses. So maybe what we are disagreeing about is merely the usage of the word “devastating.”
Perhaps we are. But as I understood it, Behe had made an inference about the evolution of a gene on the HIV virus that was falsified by Abbie Smith's evidence. But I should probably have used "falsified" rather than "devastated".
2. You didn’t really answer my question, which was about whether scientists consider it appropriate to declare that the author of a book has been refuted when they have read only someone else’s criticism of the book, and have not checked the book itself to make sure that the critic has in fact fairly represented the argument of the book. Many apparently “devastating” arguments are no longer seen as such once the original writing is consulted; often a straw man rather than the real thing has been refuted. Arts people are trained to always go back to the primary source. In the case of Behe’s *Edge of Evolution*, you did not display this kind of training, so I was wondering if that was just a decision of your own (to trust the critic without checking the original), or whether scientists are habitually that academically careless.
That is a fair comment. As Behe's case is made in a book, not in a subscription journal, I do not have access to the primary source. I agree that it was possible that Smith misrepresented his argument. The same applies to the Chloroquine Complexity Cluster argument, which has been thoroughly critiqued. It is possible that those critics have misrepresented his argument too. I did assume that the actual substance of each claim had been fairly represented, but I agree that I am not in a position to check.
In addition, local circumstances would seem to have dictated that a check of the original was in order: the critic clearly was sarcastic, angry, and with a bee in her bonnet, indicating the possibility that she might have read Behe less than dispassionately; and the critic was someone still some months or possibly even years away from completing her doctoral work, and was criticizing a tenured professor with over 35 peer-reviewed papers in his field, which might indicate that the critic had insufficient experience to make a balanced judgment. Yet despite these warning lights to go back to read the original source, you simply accepted Smith’s presentation of Behe’s argument. My arts training tells me this is a faulty procedure. And I think a good number of scientists would agree with me that the original source should be checked before the critic’s statement is simply accepted.
Yes, I agree, that is a fair point. I do tend to assume that scientists are honest, however rude. That is not necessarily a safe assumption.
3. You still haven’t caught the point about behavior. If you say to a critic: “You are rude and obnoxious, but I am going to carefully read your criticism and respond to it just as if you were respectful”, you are taking away all the incentive for the rude and obnoxious critic to change her habits. But if you say: “You may or may not have some valid scientific criticisms in here, but I have no intention of reading them until you apologize for your opening insolence and commit to not speaking to me, or to any scientific colleague, in that manner again,” you have created a strong incentive for the behavior to change. The arrogant young would-be scientist will then realize that his or her projected brilliant scientific career is toast, unless he/she gets a handle on his/her ego and learns the art of civilized academic discourse. This is basic psychology, and as you have a Ph.D. in that subject, I’m surprised you wouldn’t have thought of this already.
I'm not sure whether to laugh or cry. I think I'll laugh. You keep accusing me of not showing the expertise you'd expect from someone with my apparent credentials, yet you happily assume the competence to judge what someone with that expertise should think! But I'll answer: yes, I know a little bit about behavioural training. Actually, it's my field. But, firstly, there is more to behavioural training than rewarding good behaviour and penalising bad. Secondly, I don't consider it my business to reform the behaviour of random internet tough guys! And thirdly, I confess, Abbie makes me laugh. oops.
And by the way, even if one *says* that one is not going to read the paper of the insulting person, that does not mean that one cannot in fact read it (unbeknownst to that person) and thus benefit from any scientific criticisms or detections of error that it contains. The point is that, in order to achieve the desired alteration of anti-social behavior, one should not give any *public acknowledgment* that one has read it. The one thing that *enfants terribles* crave, above all, is attention; the thought that their brilliance is not getting any attention from their elders is the most horrifying thing in the world for them. Therefore, one must make them think that no one is paying the slightest bit of attention to them. Then, surprise, surprise, the *enfant terrible* behavior starts to go away.
Yep. When my son was bullied at primary school, I suggested that if he didn't react so spectacularly when provoked, the bullies would cease to get a reward, and gradually stop (behaviour goes to "extinction" heh - I told you learning was like evolution). I explained about "Skinnerian conditioning" in other words. He came home the next day and said: "Mum, that Skinnerific condition really works!"
Finally, regarding this paragraph: “As for whether you have debated me on substance: perhaps you have; if so I apologise (I am not very good at remembering who I’ve had exchanges with, I’m afraid).” I find it rather insulting. I don’t mean it intends to insult, but it’s still insulting. I have a vivid memory of every single exchange we have had, and can even remember particular twists and turns of argument; yet apparently my careful comments to you, which often take a great deal of time and editing to produce, make little lasting impression on your mind. This is another good reason for abandoning this kind of forum; not only do Behe, Dembski, Miller, etc. not pay attention to anything that is said here; even some of the people addressing each other here forget what their opponents have said if it is more than a few days or weeks ago! If I don’t have any lasting influence upon someone like you, with whom I’ve probably exchanged thirty thousand words over the past six months, in close back-and-forth, point-by-point responses, it’s unlikely anything I say is going to affect the way Eugenie Scott or Bill Dembski conduct themselves.
Yes, I realised it was potentially insulting, and I'm sorry. But you needn't be insulted. I do have a good memory for these exchanges - what I don't have a good memory for, and it's at least partly my age, is the names of the people I was exchanging with. That's partly a result of anonymity of course - if I can put a face and a background to a name, I'm more likely to remember which conversation was with whom. And I've had email exchanges with a few people, and remember which ones. And the other thing is (and it's both a fault and a merit), is that I do, on the whole, tend to focus on substance, rather than tone (as I said) and that again biases me towards remembering what was said rather than by whom. Lastly, perhaps it's worth considering that I stand out here rather more obviously than IDists stand out from each other! I'm obviously female, and not an IDist. That paints me in a fairly vivid colour (though I'm not the only Darwin Girl here, even not including the one that turned out to be a Darwin Boy :)) So don't be insulted. Remind me specifically of the conversations, and I'll almost certainly remember them. I just don't have names very firmly tagged to them. It's the who, not the what, that fails to stick in my aging brain. Or email me :) You'd be very welcome to.
I think this covers everything, Elizabeth. Best wishes.
And to you, Timaeus. Peace. Lizzie
Elizabeth Liddle
February 3, 2012 at 10:18 AM PDT
Scott, How far will you go to avoid admitting error? This is actually kind of interesting. Your ego is backing you into ever tighter corners. You wrote:
Sort order, when not derived purely from the physical properties of a thing, is meaning.
I can generate a sort order for a set of symbols using a random number generator. According to you, the output of my random number generator is therefore part of the meaning of the symbols. Does that not strike you as absurd?
It is certainly not the entire meaning of whatever is being sorted. But it absolutely is meaning. It is information attributed to a thing.
The meaning of a symbol is what it represents. The sort order of a symbol is information about the symbol, but it is not the symbol's meaning. The American flag is flying in front of the fire station at 6th and Main. That's information about the symbol. 'Flying in front of the fire station' is assuredly not part of the flag's meaning, however.
And how does it “know” the bolded point above without an input of information, the arbitrary assignment of that relationship? That is my point in its entirety, from the beginning of this discussion until now, and it has not changed.
I've never disagreed, and in fact I explicitly agreed way back here. You're the one who has been stalling the discussion by refusing to correct your errors. So, do you finally agree that this statement of yours is incorrect?
Anything that sorts them, human or otherwise, must have awareness of the meaning of the symbols printed on them.
And can we finally move on to this point?
Scott,
It’s not just any particular mapping that natural laws do not explain. It is the very concept of relating a symbol to a reality.
Relating symbols to referents is something that brains do. You’ve told me that you agree that brains operate according to physical law. If so, then in what sense does physical law fail to explain the mapping?
champignon
February 3, 2012 at 10:15 AM PDT
Dr Liddle: Pardon, but had you read my own answer with intent to understand rather than to object you would have seen why a mechanical sort will not fit in with the functional form of CSI, and onwards why the same low contingency system will not have CSI in the general sense. That is part of why I took the matter in two bites, looking at the Ip and the S terms. For INFORMATION-carrying capacity to exist, there has to be high contingency, which is then manifested in a way that is locked down to a narrow and specific zone in the field of possibilities, all of that stuff about CASES E IN SPECIFIC ZONES T IN THE WIDER FIELD OF POSSIBILITIES W, as can be seen in NFL by a certain Wm A D. That is crucial. In short, until we arrive at high contingency, an entity is not in the zone where informational character is at stake. Only highly contingent objects storing 500 or more bits of information capacity need apply, and it is in that context that CSI THEN HELPS US DECIDE IF THE OUTCOME OBSERVED IN SUCH A HIGHLY CONTINGENT CASE IS BEST EXPLAINED ON CHANCE OR DESIGN. The way that is done is per inference to best explanation, given the balance of clusters of possible outcomes in the space of possible configs. 500 coins in no particular order, chance. 500 coins spelling out the first 73 ASCII characters of this post, design. The pebbles on Chesil beach under the circumstances precisely do not have that high contingency. They are thus not informational. And, I actually described a different possible case, where they could be made informational, with a fleet of dredges and barges. Which of course would be by intelligent design. G'day GEM of TKI
kairosfocus
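A back-of-the-envelope check of the 500-coin figure invoked above (just the arithmetic; nothing here adjudicates the design inference):

```python
import math

W = 2 ** 500            # distinct configurations of 500 two-sided coins
bits = math.log2(W)     # information-carrying capacity in bits

# Probability that one fair toss of all 500 coins lands in any single
# pre-specified configuration:
p = 1 / W

print(bits)   # 500.0
print(p)      # about 3.05e-151
```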
February 3, 2012 at 10:01 AM PDT
Elizabeth: Regarding the conversation higher up on the thread, if I understand you correctly, you are saying that you are not a full-time faculty member at the institution with which you are associated. If that is the case, then obviously you do not have the contractual or moral obligations of such a faculty member, and I apologize for accusing you of shirking those obligations. (But if this is the case, you could have saved me embarrassment and yourself further irritating criticism by clarifying this when I first made this accusation a while back. I would have apologized right away.) One more pass at the Abbie Smith affair: 1. Of course an error is an error, regardless of what else is in a book. That's true in the arts as well as in the natural sciences. But if Abbie Smith detected an error in Behe's account of viruses, that does not make her reply "devastating." The word "devastating" in these polemical contexts generally means "destroys the author's thesis." But one error in one small section of a book does not destroy the author's thesis. At most her criticism was "devastating" to some remarks Behe made about viruses. But his thesis didn't depend on what he said about viruses. So maybe what we are disagreeing about is merely the usage of the word "devastating." 2. You didn't really answer my question, which was about whether scientists consider it appropriate to declare that the author of a book has been refuted when they have read only someone else's criticism of the book, and have not checked the book itself to make sure that the critic has in fact fairly represented the argument of the book. Many apparently "devastating" arguments are no longer seen as such once the original writing is consulted; often a straw man rather than the real thing has been refuted. Arts people are trained to always go back to the primary source. 
In the case of Behe's *Edge of Evolution*, you did not display this kind of training, so I was wondering if that was just a decision of your own (to trust the critic without checking the original), or whether scientists are habitually that academically careless. In addition, local circumstances would seem to have dictated that a check of the original was in order: the critic clearly was sarcastic, angry, and with a bee in her bonnet, indicating the possibility that she might have read Behe less than dispassionately; and the critic was someone still some months or possibly even years away from completing her doctoral work, and was criticizing a tenured professor with over 35 peer-reviewed papers in his field, which might indicate that the critic had insufficient experience to make a balanced judgment. Yet despite these warning lights to go back to read the original source, you simply accepted Smith's presentation of Behe's argument. My arts training tells me this is a faulty procedure. And I think a good number of scientists would agree with me that the original source should be checked before the critic's statement is simply accepted. 3. You still haven't caught the point about behavior. If you say to a critic: "You are rude and obnoxious, but I am going to carefully read your criticism and respond to it just as if you were respectful", you are taking away all the incentive for the rude and obnoxious critic to change her habits. But if you say: "You may or may not have some valid scientific criticisms in here, but I have no intention of reading them until you apologize for your opening insolence and commit to not speaking to me, or to any scientific colleague, in that manner again," you have created a strong incentive for the behavior to change. The arrogant young would-be scientist will then realize that his or her projected brilliant scientific career is toast, unless he/she gets a handle on his/her ego and learns the art of civilized academic discourse. 
This is basic psychology, and as you have a Ph.D. in that subject, I'm surprised you wouldn't have thought of this already. And by the way, even if one *says* that one is not going to read the paper of the insulting person, that does not mean that one cannot in fact read it (unbeknownst to that person) and thus benefit from any scientific criticisms or detections of error that it contains. The point is that, in order to achieve the desired alteration of anti-social behavior, one should not give any *public acknowledgment* that one has read it. The one thing that *enfants terribles* crave, above all, is attention; the thought that their brilliance is not getting any attention from their elders is the most horrifying thing in the world for them. Therefore, one must make them think that no one is paying the slightest bit of attention to them. Then, surprise, surprise, the *enfant terrible* behavior starts to go away. Finally, regarding this paragraph: "As for whether you have debated me on substance: perhaps you have; if so I apologise (I am not very good at remembering who I’ve had exchanges with, I’m afraid)." I find it rather insulting. I don't mean it intends to insult, but it's still insulting. I have a vivid memory of every single exchange we have had, and can even remember particular twists and turns of argument; yet apparently my careful comments to you, which often take a great deal of time and editing to produce, make little lasting impression on your mind. This is another good reason for abandoning this kind of forum; not only do Behe, Dembski, Miller, etc. not pay attention to anything that is said here; even some of the people addressing each other here forget what their opponents have said if it is more than a few days or weeks ago! 
If I don't have any lasting influence upon someone like you, with whom I've probably exchanged thirty thousand words over the past six months, in close back-and-forth, point-by-point responses, it's unlikely anything I say is going to affect the way Eugenie Scott or Bill Dembski conduct themselves. I think this covers everything, Elizabeth. Best wishes. T.
Timaeus
February 3, 2012 at 08:51 AM PDT
Despite claims to the contrary, the only process known for designing proteins and for discovering their folds is cumulative selection. In chemistry or in simulations, it's cut and try.
Petrushka
February 3, 2012, 8:39 AM PDT
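Petrushka's "cumulative selection" can be sketched in a few lines of code. The following is a toy model only, in the style of Dawkins's well-known "weasel" program: the target string, alphabet, population size, and mutation rate are all assumptions chosen for the demo, and the explicit target makes it far simpler than any real fitness landscape. It illustrates only the bare mechanics - small random variations filtered by a fitness criterion, accumulating generation by generation.

```python
import random

# Toy "weasel"-style model of cumulative selection (illustrative only).
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness = number of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Copy the string, altering each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def cumulative_selection(pop_size=100, seed=0):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        # Keep the parent in the candidate pool so fitness never decreases.
        candidates = [mutate(current) for _ in range(pop_size)] + [current]
        current = max(candidates, key=score)
        generations += 1
    return generations

print(cumulative_selection())  # typically converges within a few dozen generations
```

The point of the sketch is the contrast with single-step chance: drawing the 28-character target in one random draw has probability 27^-28, yet selection that retains partial matches reaches it almost immediately.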
That is one of several reasons why concepts like CSI can distinguish between random sorting and non-random sorting, but cannot distinguish between natural processes and intervention. In particular, such metrics cannot distinguish between sortings made by a foresightful process and sortings made by cumulative selection. That is why KF and gpuccio are careful to assert that functional sequences are isolated and cannot be reached by cumulative selection. But that is a different war, and it will be won or lost by troops on the ground, not by philosophy. The Lenskis and Thorntons and Szostaks and their compatriots will decide the issue of connectedness.

Petrushka
February 3, 2012, 8:35 AM PDT
So you didn't read my response to vjtorley, kf? Yes, there's a mechanical reason why the stones are sorted. Exactly. Dembski's CSI does not include a term for the process by which the pattern arose - not surprisingly, because the whole point of his metric is to enable us to determine the nature of that process by examining the pattern alone. Your own version may be superior, but it was Dembski's metric I was using.

Elizabeth Liddle
February 3, 2012, 7:47 AM PDT
Champignon,

Here goes my February 3rd resolution down the drain.
Sort order and meaning are two different things.
Thank you for boiling this down to its essence. I understand your argument more clearly now. But if this is where you would like me to admit to error, I must disappoint you. Sort order, when not derived purely from the physical properties of a thing, is meaning. It is certainly not the entire meaning of whatever is being sorted, but it absolutely is meaning. It is information attributed to a thing. Your example of sorting animal names makes no sense at all. When sorting alphabetically, the meaning of the words is irrelevant. But the "meaning," or information assigned to the letters - that A comes before B, which comes before C - is essential. That meaning is assigned, not in any way emergent.
All it needs to know is that a card that matches pattern X (which happens to be an image of the 5 of hearts) goes before a card that matches pattern Y (which happens to be an image of the king of hearts). It doesn’t understand that the numeral 5 represents the number 5, it doesn’t recognize the club symbol. Indeed, it doesn’t even know that there are symbols on the cards at all.
And how does it "know" the bolded point above without an input of information, the arbitrary assignment of that relationship? That is my point in its entirety, from the beginning of this discussion until now, and it has not changed. Having re-read my very first statement to that effect, I don't see how it was unclear at all, even though you would have me polish it until you can see your reflection. And in your example above, in which the computer has no information regarding the relationship between the symbols on the cards and yet somehow magically sorts them anyway, what happens when I replace 2, 3, 4, and 5 with II, III, IV, and "five"? Now how will it sort them without some new input assigning meaning to those symbols?

Perhaps you've heard people say "Let's make this interesting" to describe taking something trivial and making it more significant by betting money on it. The only way left that I can see to make this interesting is to count how many more times you will feign inability to comprehend this very simple point rather than address it, or better, concede that it is correct. (Other options include giving up or finding something else in my post, unrelated to the point, to quibble over. I suppose we could count those too.)

ScottAndrews2
February 3, 2012, 7:27 AM PDT
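ScottAndrews2's card-sorting point can be made concrete in a few lines. This is an illustrative sketch only - the `RANK` table and `sort_cards` function are hypothetical names, not anyone's actual program - showing that a sorter operates on an assigned ordering over symbols: with the assignment it sorts mechanically, and with unfamiliar symbols (II, III, "five") it fails until a new assignment is supplied.

```python
# Hypothetical example: sort order depends on an assigned mapping of
# symbols to ranks, not on anything intrinsic to the symbols themselves.
RANK = {"2": 2, "3": 3, "4": 4, "5": 5, "J": 11, "Q": 12, "K": 13}

def sort_cards(cards):
    # Works only for symbols that RANK assigns a position to;
    # anything else raises KeyError.
    return sorted(cards, key=lambda c: RANK[c])

print(sort_cards(["K", "5", "2", "J"]))  # ['2', '5', 'J', 'K']

# Swap in symbols with no assigned ordering and the sort fails:
try:
    sort_cards(["II", "III", "five"])
except KeyError as unknown:
    print("no rank assigned to", unknown)
```

Whether one calls that mapping "meaning" or merely "configuration" is exactly the point the two commenters dispute; the sketch shows only that some input assigning the ordering must exist before sorting can happen.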