Uncommon Descent | Serving The Intelligent Design Community

Signature of Controversy: New Book Responds to Stephen Meyer’s Critics


Critics of intelligent design often try to dismiss the theory as not worth addressing, as a question already settled, even as too boring to countenance. Then they spend an amazing amount of energy trying to refute it. On this episode of ID the Future, Anika Smith interviews David Klinghoffer, editor of the new digital book Signature of Controversy: Responses to Critics of Signature in the Cell, featuring essays by David Berlinski, David Klinghoffer, Casey Luskin, Stephen C. Meyer, Paul Nelson, Jay Richards, and Richard Sternberg. Listen in as Klinghoffer examines the responses of these various critics in this new volume, available as a free digital book.

Click here to listen.

Comments
O/T but perhaps interesting: As many participants and onlookers will know, I have been monitoring developments in my homeland this week. Painful, horrific, but with a little hope that the shock of outright insurrection and an urban battle to put it down would perhaps trigger a serious reformation. The upshot is that the J'can security forces (with evident US backing and support) fought a short siege battle with a druggie warlord enclave, breaking in through a frontline of barricades with IEDs and entanglements, backed up by small arms. They took some 50 - 60 casualties [just one fatality; the other security-force fatalities came in an ambush of police responding to a distress call elsewhere in the city . . . ballistic vests make a big difference] but broke in and through what sounds like a 200 m deep defensive zone after about 3 hours; which is decisive against this kind of fort. The insurrectionist side suffered serious losses, many of them fatal. 700 were initially detained, 200 have gone home after processing, and at least one detainee seems to have gone suicidal, threatening to jump off a roof until he was tackled and taken to hospital. So far, the warlord at the focus of the conflict is still at large, but the security leadership seem confident they can capture him; though already there has been a controversial gun-battle in an upper class neighbourhood. We shall see. But, in response to a call for a list of other suspected warlords and gang leaders to come in, most have presented themselves to the security forces. Serious recriminations are flying fast and furious over the specifics of what happened, blunders, atrocities and cover-up claims, and the wider trends and circumstances that led to the growing cancer: a cancer of druggie gangsterism, and of warlordism funded by drug money and protection rackets [de facto taxation?], tied into the political system especially through depressed communities where in effect the warlords and their retainers have been the de facto government, especially in the capital. (I should note, this is about 60 - 150 miles by road from the resort areas favoured by tourists; it is on the South coast, not the North. But of course the headlines have doubtless done serious damage to the vital tourist industry.) I hope -- and pray -- that out of this horror, a serious rethink of where the society has been headed will lead to reformation. I for one would favour a South Africa style Truth and Reconciliation Commission of Inquiry. That will be painful, and will move the older generation of politicians (those implicated in the era of a low-key civil war tied to the culmination of the cold war era's impacts in Jamaica across the 1970s and early 1980s) off the scene definitively, and perhaps some of the younger ones too, but it will allow the land to build on the truth and a national rededication to the right. Then, maybe the nation can be rebuilt on a sounder foundation over the next generation. The announcement of an inquiry into the events of the week past will be a step towards that. GEM of TKI
kairosfocus
May 29, 2010, 03:33 AM PDT
GP: Glad to have a productive exchange. All the best. G
kairosfocus
May 29, 2010, 02:46 AM PDT
KF: Thank you for the response. I think we agree on practically everything. And this discussion is certainly useful to elucidate some aspects of the problem which are not always apparent.
gpuccio
May 29, 2010, 01:41 AM PDT
GP: An interesting contribution. I agree that immateriality of information per se (including random data strings, i.e. credibly non-specific ones) does not implicate their immediate source as conscious. What I have pointed out is that the wider context is intelligent. And, it would be immediately apparent to any experienced electronic systems designer [and that is how recognition of head/tail would almost have to be done] that the system to produce a string of data from automatic coin tosses would easily run past 125 bytes [1,000 bits] of functional information. So, while I agree that the data string from coin tossing could be below the threshold for sufficient complexity to be relevant, the context that produces it will almost necessarily be past that level. That is why we would immediately and intuitively recognise it as a machine. Of course, such a string would also be non-specific, in the sense that there is no constraint as such on where it is in the config space for function as a coin toss string. CSI or FSCI relevant strings are constrained to work in ways that are required by an algorithm, or a language's requisites, or the requirements of a functional network, or those of the constraints on a wireframe, etc. As to Dembski's recent work on active information and evolutionary informatics, this is in the context of such target zoning, islands of identifiable and recognisable function, in a wider config space. He is in effect asking: what explains being in the hot zone? ANS: Injection of active information that enables on-average routine out-performing of random walks from arbitrary initial points in the config space. This by providing the equivalent of GPS to locate where you are and a map of where the islands are, or a beacon that draws you in by sending warmer/colder signals BEFORE your string is functional in itself, or the like. (The infamous Weasel is a classic of this.) In other cases, of course, what is done is that the relevant space is within a predefined island of function, i.e. the wider search zone has been eliminated by focussing on the relatively small quantum of information to wander around in such a target zone. This for instance is how Avida fails. (So long as the config space is not large enough, and so long as you do not start from an arbitrary initial point, you are begging the key questions.) So, when I look at your:
But the point is that, being ID based on design detection, the functional specification must be recognizable in the string itself, without any help from context, for the ID reasoning to be appropriate. In that sense, I would maintain that the concept that: “As all information is immaterial, it cannot be explained by a conventional, reductionist materialist approach” is not a valid ID argument. Strings can be observed which have the objective appearance of data, even if they originated in a purely physical, non conscious context.
. . . I think that the wider context of a string is going to be highly relevant. For, as I have pointed out from the beginning of my serious participation in the ID debates, in my briefing note [as is always linked] Section A:
information-bearing messages flow from a source to a sink, by being: (1) encoded, (2) transmitted through a channel as a signal, (3) received, and (4) decoded. At each corresponding stage: source/sink encoding/decoding, transmitting/ receiving, there is in effect a mutually agreed standard, a so-called protocol . . . . [Also] at each stage noise affects the process, so that under certain conditions, detecting and distinguishing the signal from the noise becomes a challenge. Indeed, since noise is due to a random fluctuating value of various physical quantities [due in turn to the random behaviour of particles at molecular levels], the detection of a message and accepting it as a legitimate message rather than noise that got lucky, is a question of inference to design. In short, inescapably, the design inference issue is foundational to communication science and information theory . . . . [G]iven the significance of what routinely happens when we see an apparent message, we know or should know that [d] we routinely and confidently infer from signs of intelligence to the existence and action of intelligence. On this, we should therefore again observe that Sir Francis Crick noted to his son, Michael, in 1953, in the already quoted letter: "Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)." For, complex, functional messages, per reliable observation, credibly trace to intelligent senders.
So, once I see the existence of a symbolic code, and the wider context in which a digital string exists -- even if that string itself is random -- the question arises as to where the code, the conventions on messages, and the machinery arranged for setting up, sending, receiving and pulling out the meaning of said code strings came from. Such an irreducibly complex entity strongly points to art as its source, not happenstance and blind mechanical necessity. And, that is an empirical postulate, backed up by the sort of insight that Prof Sewell so often underscores: mere opening up of a system to energy and matter inflows and outflows does not make probabilistically implausible outcomes on chance and mechanical necessity from happenstance initial conditions suddenly become far more probable. In fact, the evidence from relevant classical and statistical thermodynamics [cf my appendix 1] is that mere opening up of a system joined to inflows of energy and/or mass will tend to add to the disorganisation of a system. Energy and mass flows need to be co-ordinated with and coupled to the system to fit in with its processes. Having said that, it is of course correct to note that a data string or set of strings that are not functionally specific and complex will not pass the CSI criterion to infer to design on the strength of the string alone. But, strings of digital symbols do not exist in isolation, and the FSCI inference is to be made in that light too. That wider context of a system almost invariably will strongly point to an intelligent context. Shifting contexts, if we receive a string of apparent digits from space, and we are able to decode a functionally specific sequence, that would lead to an inference to design without seeing a wider context. (As opposed to noting the regular pulses of a pulsar, which have order not organisation and credibly trace to a mechanism that does not in itself implicate issues of design.) However, here on earth, we can normally see that digital strings come in contexts. (You can see how I agree with your emphasis on building on specific limited domain valid results, instead of trying to erect premature grand logico-mathematical schemes. Newton was in no position to erect a grand explanation for the roots of gravitation, but he could identify and characterise its key laws. Which became the launchpad for the career of modern science.) GEM of TKI
kairosfocus
May 29, 2010, 12:43 AM PDT
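To make the "warmer/colder beacon" point above concrete, here is a minimal Weasel-style hill climber in Python (an illustrative sketch, not Dawkins' or Dembski's actual code; the target phrase, population size, and mutation rate are arbitrary choices). The fitness function compares every candidate to the target, so each generation receives information about where the island of function lies long before any string is functional:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # The "beacon": rewards partial matches long before the whole
    # string works -- this is the active-information injection.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    generations += 1
    # Keep the best of the parent plus 100 mutants: cumulative selection.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
print(f"Hit the target in {generations} generations.")
```

Typical runs finish in tens of generations, whereas blind sampling of 28-character strings over a 27-letter alphabet faces roughly 27^28, about 10^40, possibilities; the difference is supplied entirely by the target-aware fitness function.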
KF: Had Rob et al objected that Meyer etc could have spoken more clearly that would be one thing . . . but that is not what we are dealing with. I agree with you that R0b was probably pursuing his own tactic, and certainly I am not here to attack Meyer in any way. Still, I feel that our duty is to clarify as much as possible our position from the point of view of what we believe to be true. So, if there is any confusion about important concepts, it is our duty (and interest) to debate those points objectively in order to find greater clarity. Here I do believe that an interesting discussion has come out from R0b's remarks (and I am grateful to him for that). Your last post adds very constructively to that discussion, and so I want to comment on a couple of points. 1) I rather doubt that Meyer was speaking of a random string as the prime context of his remark. The information was being discussed in the known general context of CSI, and the particular point is that information is massless [which extends to not a feature of energy as such either]. That was also my position, but the explicit citation made by R0b from the DVD is difficult to interpret in that sense. But anyway, I think that we can agree that our issue is not whether Meyer has made a mistake (he certainly can, like any of us), but rather whether the argument (as understood and cited by R0b): "As all information is immaterial, it cannot be explained by a conventional, reductionist materialist approach" is a valid ID argument or not. 2) About that specific point you make in your post some very interesting comments, which seem to respond to my observations in my previous post (#81). I must say that I am very happy that you raised those points, because I was in some way aware of them, but I had chosen to ignore them for the moment in my post #81, for the sake of clarity. But your remarks are a good starting point for me to try to clarify (to myself, first of all) what I really think. 3) First of all, I know that we have always agreed to consider FSCI as the most "tractable" definition of CSI for the biologic discussion. That has been a constant theme in our contributions to this blog. While we may have never discussed explicitly the reasons for that choice, I have sometimes admitted here that my personal reason is that only for FSCI can I give a completely explicit and (for me) satisfying definition which I can consistently use in the debate. I have personal problems with the more recent approaches made by Dembski to his concept of CSI (like those indirectly referred to by R0b), not because I think they are wrong, but because I can't understand them fully. Moreover, I have the feeling that Dembski is trying to define CSI in purely logical terms, while I prefer to stick to an empirical definition which uses the empirical concept of conscious agent to define function. I really believe that the concept of function is strictly dependent on the laws of consciousness, and on the subjective experience of the conscious representation of purpose, and cannot be defined in a completely objective, purely logical way. 4) That brings us to your comments: And that masslessness extends to even random strings of symbols say from flipping a coin and recording the result, but something is being missed on how we get from chance events to symbol strings, the stuff of dFSCI. I agree. But I will try to make some distinctions as your discourse develops. Tossing coins n times does not in itself generate information, it generates a string of events.
We — intelligent, symbol using, event observing agents — collect a coded string thhttttthththhhttthhtt . . . and that string is information. The mere fact of a coin being flipped and landing is not in itself informational, the information arises in a context that assigns meaning and symbolic labels. OK, I follow your reasoning and I perfectly agree with you. What you are saying here is that data in themselves originate as objective events, and that they become data only in the conscious mind of an intelligent observer. I perfectly agree with that, and it is absolutely consistent with my concept of the central role of consciousness in any form of cognition. There is no cognition without consciousness. In that sense, there are no data without a conscious agent. (Maybe this is also the sense of Nelson's citation about weather data.) (As an aside, I must say that I am here in a little bit of difficulty, because I have tried to download the book linked in this thread, but I could not do that because a registration was required, and the registration form would not let me register unless I gave a US state where I live, even if I specified in the following field that I live in Italy. Can anyone help me with that? So, at present I am relying only on what has been explicitly cited in this thread.) 5) But there is another aspect to this point. We can always find objects where a string of apparent data, corresponding to objective events, can be observed, while we don't know if those data were intentionally recorded by a conscious agent with a purpose (and so, in that sense, the string is designed) or if they originated automatically from random events. To be clearer, I will go back to the example of the tossing of a coin. Let's say that a simple automatic mechanism is tossing a coin in a linear direction n times, so that the result of each toss is imprinted on some physical medium when the coin falls down and so "recorded". The result will be a physical object (the recording medium) where a string of binary symbols corresponding to a series of truly random events can be observed. Now, in this scenario we find the string, but we don't know how it originated. In other words, this is a design detection scenario. You could object that the context where the string originated must have been designed, but I would answer that its complexity is low, and therefore it could arise by chance (maybe not exactly as I have described it, but we can think of some natural context equivalent to it). So, the question is: what do we conclude, if we observe such a string? The answer is important to understand all the logic of ID. And, IMO, we conclude that we cannot know if the string is designed, because it has the formal properties of a random string. While there is certainly complexity (if the string is long enough), we cannot recognize any functional specification in the string itself. And that's perfectly correct, because, if for a moment we could know how the string really originated, we could see that it really represents true random events. So, our first inference is that the string is not functionally specified, because it exhibits no functional specification. Our second inference is that we cannot recognize it as designed. It could be designed. The string could be a series of data intentionally recorded by a conscious observer for a definite purpose (for instance, to study empirically the probability distribution of the tossing of a coin). In that case, and in that sense, it would be designed.
But the important point is: the informational properties of the string itself do not contain any functional specification; no function is recognizable in the string of information itself. So, even if the string was designed as the recording of data, the functional intent is to be found in how and why the string of data was recorded, that is in the recording context, and not in the informational string itself. So, as we are strict IDists, we conclude that we have no empirical evidence that the string was designed, because it could have originated in a perfectly "non conscious" context (and, in this specific case, we would be perfectly right; this would be a true negative, not a false negative). So, ID works properly under all circumstances, if correctly understood. But the point is that, being ID based on design detection, the functional specification must be recognizable in the string itself, without any help from context, for the ID reasoning to be appropriate. In that sense, I would maintain that the concept that: "As all information is immaterial, it cannot be explained by a conventional, reductionist materialist approach" is not a valid ID argument. Strings can be observed which have the objective appearance of data, even if they originated in a purely physical, non conscious context. In that case, they do not need a conscious agent to be explained, and yet they may have all the formal properties that we find in true data (no functional specification, no significant compressibility, complexity). From an ID point of view, no conclusion can be made. Only if a functional specification is recognizable in the string of information itself can we classify that string as FSCI, and infer design. And again, our inference would be correct.
gpuccio
May 28, 2010, 10:36 PM PDT
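A minimal simulation of the recording mechanism described above may help (a Python sketch; the string length and the H/T labels are arbitrary choices of ours as observers, not anything fixed by the physics):

```python
import random

n = 100
# An automatic mechanism tosses a fair coin n times and "records" each
# outcome on a medium; H/T are the symbolic labels we assign.
recorded = "".join(random.choice("HT") for _ in range(n))
print(recorded)

# Complexity is present: n binary digits span 2^n configurations, and
# this exact string had probability 2^-n of arising.
print(f"config space: 2^{n} = {2 ** n:.2e}; P(this string) = 2^-{n}")

# But no compact functional specification picks this string out of the
# space, so on gpuccio's reasoning no design inference is drawn --
# a true negative if the source really was the blind mechanism.
```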
vjtorley, thanks for the link: Human Evolution? - The Compelling Genetic & Fossil Evidence For Adam and Eve - Dr. Fazale Rana - video http://www.metacafe.com/watch/4284482
bornagain77
May 28, 2010, 07:58 PM PDT
Off-topic, but of interest to some: Ardipithecus backlash http://www.time.com/time/health/article/0,8599,1992115,00.html http://johnhawks.net/weblog/reviews/early_hominids/phylogeny/sarmiento-ardipithecus-comment-2010.html
vjtorley
May 28, 2010, 07:21 PM PDT
GP: I think there is a little matter of context there, that should lend itself to charitable reading. Had Rob et al objected that Meyer etc could have spoken more clearly that would be one thing . . . but that is not what we are dealing with. I rather doubt that Meyer was speaking of a random string as the prime context of his remark. The information was being discussed in the known general context of CSI, and the particular point is that information is massless [which extends to not a feature of energy as such either]. And that masslessness extends to even random strings of symbols say from flipping a coin and recording the result, but something is being missed on how we get from chance events to symbol strings, the stuff of dFSCI. Tossing coins n times does not in itself generate information, it generates a string of events. We -- intelligent, symbol using, event observing agents -- collect a coded string thhttttthththhhttthhtt . . . and that string is information. The mere fact of a coin being flipped and landing is not in itself informational, the information arises in a context that assigns meaning and symbolic labels. (In D/RNA, the symbols are in strings of bases, in 3-letter sequences, which are dynamically inert: their existence does not force other things to happen mechanically. Nor is the string just happenstance. Notice how we convert to an equivalent symbolism GCAT/U, AAA AUG, etc. Function arises when the sequences are actively transcribed, and using ribosomes etc translated algorithmically into AA sequences that fold and are sent to use sites in or beyond the cell. So, we are in a very different regime from a tray of coins that get tossed and come to rest willy nilly. But, the very notion that random chance hit on so powerful a code, or its algorithmic processing, or the machines to do it, or the onward link to protein functionality at several steps remove, is on the face of it a reductio ad absurdum, once one appreciates what is required to get the information to do that. Especially when we see that FSCI has a routinely observed explanation.) Similarly, weather phenomena are widely distributed events. We parameterise and set up transducers etc that convert into a data set snapshot at a point in time, then a series which we store in accord with sets of symbols and conventions. We then set out on analysing, modelling etc. Our fingerprints as intelligent agents are all over the process. You are of course dead right on the emphasis on digital symbolic algorithmically functional information and the machinery that gives it effect. The ducking, dodging, diverting, etc. show us just how on target the issue is. And when one looks a patent absurdity in the face one is right to object on personal incredulity. We KNOW a routine source of FSCI. We equally know of no chance + necessity mechanism that can credibly originate it on the gamut of the cosmos we observe. So, it is time to call in the materialistic IOUs. GEM of TKI
kairosfocus
May 28, 2010, 04:56 PM PDT
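The point about D/RNA symbols being re-expressible in an equivalent symbolism can be illustrated with a toy translator (a deliberately abridged codon table with only a handful of the 64 standard codons; a sketch, not a complete genetic-code implementation):

```python
# Abridged standard genetic code, for illustration only.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GCA": "Ala",
    "UGC": "Cys", "AAA": "Lys", "UAA": "STOP",
}

def translate(mrna):
    """Read the string three bases at a time, as the ribosomal machinery does."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUGGCGCAUGCAAAUAA"))  # Met-Phe-Gly-Ala-Cys-Lys
```

The table itself is the conventional, dynamically inert part: nothing in the chemistry of the string forces this particular mapping, which is the symbolic-code point being argued above.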
Rob: A description in the relevant sense specifies what a string is about. It is not the string itself. Notice the two-fold description: complex -- long string, specified -- detachable, short description that puts it in a target zone. (Recall here WD's image of an arrow hitting a target vs painting a target around an arrow wherever it happens to hit.) This, I have said previously, and it is a summary of many things WD has said too. As have others. One of the reasons I have emphasised FUNCTIONAL specificity is that the specification in this case speaks for itself above and beyond rhetorical obfuscations. And, you are more and more coming across as simply looking for points to lodge a further distractive objection. To date, you and your ilk have for years failed to provide the direct and simple answer that would demolish the ID claim: produce a case of functionally specific complex information that in our observation credibly comes about by blind chance and necessity. As we have had to repeatedly highlight, FSCI is routinely produced by intelligence, and on massive observation that is the overwhelming -- indeed, we dare say, so far exceptionless -- cause, once we know the causal process independently. So, FSCI is a reliable sign of intelligence on simple induction. Further, a fair reading of my remarks above would have easily shown that I have consistently spoken of islands of closely related configs that constitute a target zone. Once you move away from such a zone, loss of function results. One way to do that would be to push compression algorithms too far. As I specifically pointed out on STRINGS. (Another is to simply inject random noise until the function breaks.) So we may see with for instance a text string that with some cmprsn, txt is stl frly rdble. But add enough noise -- e.g. random letter changes -- and readability vanishes. That is, we may observe the islands effect directly. Similarly, say we have sentence a: >>The quick brown fox jumps over the lazy dog.>> Can you show a reasonable situation where by random changes, every step along the way being a grammatically correct meaningful sentence, a goes to, say, b: >>The slow red ant crawls over the lazy dog.>> I doubt it, again showing just how common the islands of function effect is. (And yet, b is a closely related island, i.e. we have an archipelago.) What about changes to move to say the text of this post from sentence a? So, can we get from a hello world java program to say the source code for Open office draw, by incremental step changes at random, filtered for function, and only passing on function, within the ambit of the observable cosmos? These sorts of examples are the reason why we see just how relevant the inference from FSCI is to its routine, known explanation: intelligence. GEM of TKI PS: On discussing Dembski's CSI, I suggest you pause and read the relevant weak argument correctives, starting at 26. 27 has an interesting cite from WD, that begins:
define φS as . . . the number of patterns for which [agent] S’s semiotic description [semiotic - meaningful, as expressed in coded symbols or glyphs] of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . .
--> See why (pardon directness) you come across as repeatedly setting up and tilting at dismissive strawmen? --> Observe as well the calculation at the foot of no 27:
. . . We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = –log2[10^120 · φS(T) · P(T|H)]. To illustrate, consider a hand of 13 cards with all spades, which is unique. A 52-card deck has some 635 × 10^9 possible 13-card hands, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so φS(T) = 4. Calculation yields χ = –361, i.e. < 1, so that such a hand is not improbable enough that the – rather conservative — χ metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)
kairosfocus
May 28, 2010, 04:32 PM PDT
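The card-hand calculation quoted above is easy to check numerically; a Python sketch, plugging in the quoted values (φS(T) = 4, the 10^120 bound, and the 13-spades probability):

```python
import math

total_hands = math.comb(52, 13)       # 635,013,559,600 possible 13-card hands
p_T_given_H = 1 / total_hands         # probability of one specific hand
phi_S = 4                             # four equally simple one-suit descriptions
resources = 10 ** 120                 # bound on search events in the cosmos

chi = -math.log2(resources * phi_S * p_T_given_H)
print(f"chi = {chi:.1f} bits")        # about -361, matching the quoted figure
```

Since χ comes out far below 1, the metric declines to infer design for the hand despite its striking look, which is the conservatism the comment notes.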
Hi gpuccio. Nelson's article is chapter 19 in Signature of Controversy, the book linked in the OP.
Regarding compressibility, again, while I have not read everything by Dembski, I don’t think he says that compressibility is required for specification. Otherwise, he would have to believe that proteins are not specified, as their sequence is not as far as I know particularly compressible.
Dembski explains why compressibility is required for specified complexity in the last quote cited in #76. Dembski might describe the application of his concepts to a genetic sequence as follows: We can define the target not as a given sequence, but as all sequences that result in useful function. We can then describe the target as "functional sequences", which fully describes the target in a compressed way. In that sense, the target is compressible. Then again, he might not explain it that way at all. Sorry I haven't responded to #75 yet. I promise I'll get to it.
R0b
May 28, 2010, 04:21 PM PDT
R0b: Could you please give the reference about Nelson's statement you cite about weather data? As far as I can see, weather data in themselves are not specified, either before or after they are "run" through a computer program, but I would like to understand better what the context was. Regarding compressibility, again, while I have not read everything by Dembski, I don't think he says that compressibility is required for specification. Otherwise, he would have to believe that proteins are not specified, as their sequence is not as far as I know particularly compressible. What Dembski says is, IMO, that compressibility can be a kind of specification, because the set of compressible strings is extremely smaller than the set of all possible strings. That is true, but as I have already argued at #75, does not apply to biological information, which is specified by its function, and not by its compressibility. As I have said, specification can come in different forms and flavours, but functional specification is the only one really relevant to biological discussion. Regarding the partial compressibility of functional strings, I have nothing to add to what has already been said by KF. Obviously, a gene sequence can be partially compressible, with or without loss of function. Even true random strings can be compressible in some measure. That's why I used the expression "non significant compressibility" to define dFSCI. That kind of information is scarcely compressible, but some compressibility may exist. The important thing is that the sequence cannot be generated by a simple algorithm. KF: I agree with you about the possible treatment of analogic information, but what I meant is that we have no reason not to stick to the simpler case, if that simpler case includes all that is necessary for our debate, and makes everything easier. Obviously, it is the duty of theoretical researchers to extend the discussion to the general case, but I am interested mainly in the application to biology. Regarding the massless question, you are obviously right: information has neither mass nor energy, and it is a third fundamental part of the cosmos. But still there is a fundamental difference between simple information (raw data, and similar) and specified information. On that basis, I still think that Meyer's statement, as reported by R0b, is inaccurate. I copy it here: "Now if information is not a material entity, then how can any materialistic explanation explain its origin?" What does that mean? Some form of automatic registration of a sequence of coin tossing is, I believe, information, in the sense of raw data. In that sense, it is not a material entity. But why should we say that we cannot give a materialistic explanation of its origin? The description in physical terms of the system where the coin is tossed and the results are in some way recorded can certainly explain the data. In the same way, a physical theory of weather can explain weather data, intended as a purely passive recording of some kind of weather phenomena. But the discourse changes completely if we consider CSI (in any of its forms). CSI only exists as the product of design, and therefore of a conscious intelligent agent. Therefore, any materialistic explanation which does not take into account conscious agents cannot explain it. In other words, CSI cannot be explained by reductive materialistic theories not because it is non material, but because it is a special kind of non material information, which only consciousness can generate.
There is a difference between the two statements, and I think it is a difference of some importance. I think anyway that probably what Meyer really intended is not different from what I am saying, but the fact remains that his way of saying it in that particular sentence reported by R0b IMO is not correct.
gpuccio
May 28, 2010, 03:16 PM PDT
kairosfocus:
In short, a complex string that is more or less simply describable in a way that characterises it, without having to replicate the string in toto.
Are you saying that the description can be lossy? If not, ignore the following two paragraphs. If so, recall Dembski's definition of descriptive complexity: "φS(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T." If a lossy semiotic description were allowed, then we could always choose "thing" as our description, and φS(T) would be ultra-low every time. And we need to be careful not to confuse T with E, where E entails T but is more specific. We can always define events E that are more specific than T, but they play no role in Dembski's CSI calculation. It's T that needs to be simply describable, and that description needs to fully describe T. Otherwise, we can always make the description as lossy as we please and get specificity for free.
As for your “without loss of function” claim, plainly that has been the principal type of CSI based system discussed at UD for years: if you disturb an information based process sufficiently, with injection of enough noise, it will stop working, so you sit in an island of function in a sea of possible but non-functional configs.
The question was not about disturbance without loss of function, it was about compression without loss of function, which you mentioned in #71.
PS: Onlookers, observe very carefully how R0b speaks of lossless compression, as he full well knows that once redundancy has been squeezed out, further compression will lead to irrecoverable information loss and destruction of function. The resistance to compression I spoke of above was in precisely that context — explicitly so — that once redundancy is squeezed out, further squeezing breaks function.
I assumed that it was a given that the compression we're talking about is lossless, so I thought you meant something more by "compression without loss of function". That's why I asked for a cite so I could read up on it, and that's why I offered two possible meanings of the phrase that I could think of. All you needed to do was tell me what you meant. The following is completely unjustified:
That artfully misdirecting choice of terms shows a pattern of reading to find dismissive rhetorical objections, not a serious attempt to understand what is being objected to. That is sad, and should be corrected.
R0b
May 28, 2010, 03:15 PM PDT
GP: Recall that in current physics, mass and energy are interconvertible. That a photon is massless only means that it is non-resting [must move at c in vacuo], and is a manifestation of energy. A sufficiently energetic photon can convert into massive particles, e.g. throw off an electron-positron pair through interacting with a nucleus. Information is not energy either, though it takes energy to store or transmit or retrieve information. The point that Meyer makes is still highly relevant: information has no mass, and we can add it has no ONE characteristic energy value either. That is why increasingly information is seen as a third fundamental part of the cosmos. GEM of TKI
kairosfocus
May 28, 2010, 01:40 PM PDT
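The pair-production remark can be pinned down with a one-line calculation from the electron rest mass (standard physics, added here as a quick check rather than anything in the original comment):

```python
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

# Minimum photon energy to throw off an electron-positron pair: 2 * m_e * c^2.
E = 2 * m_e * c ** 2
print(f"{E:.3e} J, i.e. about {E / eV / 1e6:.3f} MeV")   # ~1.022 MeV
```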
PS: Onlookers, observe very carefully how R0b speaks of lossless compression, as he full well knows that once redundancy has been squeezed out, further compression will lead to irrecoverable information loss and destruction of function. The resistance to compression I spoke of above was in precisely that context -- explicitly so -- that once redundancy is squeezed out, further squeezing breaks function. That artfully misdirecting choice of terms shows a pattern of reading to find dismissive rhetorical objections, not a serious attempt to understand what is being objected to. That is sad, and should be corrected.
kairosfocus
May 28, 2010, 01:22 PM PDT
R0b: Please, simply look closer at your cite from Dembski:
Descriptive complexity is likewise a form of computational complexity, but generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize the pattern.
In short, a complex string that is more or less simply describable in a way that characterises it, without having to replicate the string in toto. That is, it has a way to fit into a wider scheme, or plays a role [functions], or is somehow significant to an onlooking agent -- i.e. purposeful and/or meaningful. By contrast when Abel, Trevors and others including Meyer -- or for that matter the undersigned -- have spoken about the resistance of FSCI strings to compression, they are saying that in large part the digits are non-redundant, though some redundancy is inevitable in any language as it is unlikely that symbol sets will fulfill statistical frequencies of a flat random distribution while being meaningful. Thus also, squeezing too hard with a compression algorithm will destroy function through destroying meaning. The simplest and most useful form of such is that the entity is or uses strings of digits that take values from an alphabet, and fulfills a linguistic or algorithmic role [which includes data structures]. That is why I have focussed on FSCI. And to GP's observation on analogue functionality, I note that analogue entities can be converted into digital ones, using the nodes, arcs and networks approach. S-t-r-i-n-g-s are the simplest case, but networks of points like a wireframe- cum- exploded diagram or a wiring diagram or a units, pipes and instrumentation diagram etc will also be convertible into a digital representation. So, once we have a functional requirement, we can specify up to some degree of fuzziness, an island of function amidst an ocean of non-functional possible configs. As for your "without loss of function" claim, plainly that has been the principal type of CSI based system discussed at UD for years: if you disturb an information based process sufficiently, with injection of enough noise, it will stop working, so you sit in an island of function in a sea of possible but non-functional configs. I refer you to Trevors and Abel, "Three subsets of sequence complexity and their relevance to biopolymeric information" (Theor Biol Med Model. 2005; 2: 29. Published online 2005 August 11. doi: 10.1186/1742-4682-2-29.), almost at random:
. . . Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization . . . . Initial sequencing of single-stranded RNA-like analogs is crucial to most life-origin models. Particular sequencing leads not only to a theorized self- or mutually-replicative primary structure, but to catalytic capability of that same or very closely-related sequence. One of the biggest problems for the pre-RNA World model is finding sequences that can simultaneously self-replicate and catalyze needed metabolic functions. For even the simplest protometabolic function to arise, large numbers of such self-replicative and metabolically contributive oligoribonucleotides would have to arise at the same place at the same time. Little empirical evidence exists to contradict the contention that untemplated sequencing is dynamically inert (physically arbitrary). We are accustomed to thinking in terms of base-pairing complementarity determining sequencing. It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute. And of course highly-ordered [i.e. as opposed to both random and organised] templated sequencing of RNA strands on natural surfaces such as clay offers no explanation for biofunctional sequencing . . .
Going back a generation, we should listen to J S Wicken:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in. Wicken hoped that selection could be non-intelligent, of course, but once a threshold of specifically organised complexity is necessary for function, we must first pass the threshold of plausibly reaching the degree of complexity to function so that selection among competing degrees of function is even possible )]
In short, the concept has been there for a generation. Functional, complex organisation and associated information are closely tied to the point that sufficient perturbation of the organisation will disrupt function. Going the other way, until a threshold of organisation is surpassed, things will not work, even marginally. And, as I have stressed in recent weeks, when the function includes self-replication of entities that do more than simply autocatalyse [cf A & T's apt remark above on RNA world models . . . ], the premise of natural selection across competing populations, function -- based on blueprints, codes, algorithms to read and effect same -- has to exist before the selection can happen. So, R0b, I think it is clear that repeatedly reading to object has caused you to miss otherwise plain concepts and meanings, and has led you to set up and knock over strawmen. And, again, by a distractive issue, R0b, we still have yet to see an empirical counter example that shows how FSCI can arise by blind forces of chance and necessity. (The posts in this thread are proof enough that such is routinely produced by intelligence and is a characteristic sign of it.) G'day GEM of TKI
kairosfocus
May 28, 2010, 01:16 PM PDT
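Trevors and Abel's three sequence classes can be probed with an ordinary compressor (a rough Python sketch; zlib is only a stand-in for algorithmic compressibility, and the English sample is the Wicken passage quoted above):

```python
import random
import zlib

functional = ("Organized systems are to be carefully distinguished from "
              "ordered systems. Neither kind of system is random, but whereas "
              "ordered systems are generated according to simple algorithms "
              "and therefore lack complexity, organized systems must be "
              "assembled element by element according to an external wiring "
              "diagram with a high information content.")
n = len(functional)
ordered = ("ab" * n)[:n]                       # OSC: simple repetitive order
rand = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
               for _ in range(n))              # RSC: random sequence

for label, s in (("ordered (OSC)", ordered),
                 ("random (RSC)", rand),
                 ("functional (FSC)", functional)):
    data = s.encode()
    packed = zlib.compress(data, 9)
    assert zlib.decompress(packed) == data     # lossless: original recoverable
    print(f"{label:17} {n} chars -> {len(packed)} bytes")

# Expected pattern: the ordered string collapses almost entirely, the random
# string stays near its entropy limit, and the English passage lands in
# between -- compressible somewhat, but only up to a limit.
```

The assert also illustrates the lossless point raised elsewhere in the thread: a lossless compressor never destroys the original, while squeezing a near-incompressible string any further must discard information.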
kf:
Please. There is a difference between focusing on a main theme, and setting up and knocking over caricatured strawmen. And repeating now unfortunately standard strawman caricatures for years does not turn them into well warranted critiques.
If you would like me to support in detail anything I've said in this thread or any other, you need only ask. Wouldn't that be a better approach than constantly making accusations of strawmanning? We already know that we disagree on the main themes, and that isn't likely to change. I'd rather discuss nuts-and-bolts items that are somewhat verifiable and therefore stand some hope of resolution. My original point in this thread, way back in #3, was to see if the IDists here agree with Nelson that computational processes can convert unspecified information into specified information. You seem to agree with that in #34.4, but I don't know for sure. Do you agree with it?
That a string exhibiting CSI will be relatively simply describable as to purpose or function [Dembski] is not the same thing as that the string itself will be fairly resistant to compression as a string [Meyer, Abel, Trevors, et al], without loss of function.
I'm not clear on the distinction you're drawing between a description (which need not describe purpose or function, according to Dembski) and compression of the string itself. For Dembski, descriptive complexity is a generalization of Kolmogorov complexity:
Descriptive complexity is likewise a form of computational complexity, but generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize the pattern.
And Dembski is very explicit with regards to compressibility and specified complexity:
It is a combinatorial fact that most bit strings are highly incompressible. As a consequence, the specification of highly incompressible bit strings does not signify a high improbability event and thus cannot exhibit specified complexity. On the other hand, bit strings that are highly compressible constitute (on combinatorial grounds) but a minuscule fraction of all bit strings. As a consequence, the specification of highly compressible bit strings does signify a high improbability event and thus exhibits specified complexity.
And I've never seen mention of the "without loss of function" condition. Can you provide a cite, please, so I can read up on it? It seems that in one sense, compression virtually always kills function. Replace DNA, or source code, or readable text, etc. with a compression thereof and it stops functioning. But in another sense, lossless compression never kills function, because the original is always recoverable. Ciao for now.
R0b
May 28, 2010, 11:52 AM PDT
R0b: OK, with these words of Meyer from the DVD I see better your point. I agree with you that both CSI and simple non designed data, on a computer disk, are massless, and I don't agree with Meyer that "being massless" defies materialistic explanation, even in a reductionist sense. There are many physical realities, in the present model of physics, which are massless, and yet they would be considered part of the universe by any materialist. So, in this sense I agree with you that Meyer's statement in the DVD is inaccurate (it was not so clear to me from the paragraph in the book). And again, for me the correct statement is, on the contrary, that only CSI (or any equivalent definition) defies materialistic explanation, at least in the sense of a reductionist materialism which does not take into account consciousness. Regarding your question about CSI, my position has always been explicit on this blog. I am probably in the Meyer camp, but also in the camp of most of the bloggers here (StephenB, kairosfocus, bornagain77, and many others). But I don't believe that Dembski is in "another camp", even if his position has certainly a few peculiarities. As this is a fundamental issue, I will try to be more clear. There is a general concept of CSI, which refers to any information which is complex enough (in the usual sense) and specified. Now, while I think that we can all agree on the concept of complexity, some problems appear as soon as we try to define specification. There is no doubt that specification can come in many forms: you can have compressibility, pre-specification, functional specification, and probably others. And, in a sense, any true specification, coupled to high complexity, is a mark of design, as Dembski's work correctly affirms. But the problem is, some kinds of specifications are more difficult to define universally, and in some of them the complexity is more difficult to evaluate. Let's take compressibility, for instance. In a sense, true compressibility which cannot be explained in any other way is a mark of design. Take a string of 1000 letters, all of which are "a". You can explain it in two different ways: 1) It is produced by a system which can only output the letter "a": in other words, it is the product of necessity. No CSI here. 2) It is the output of a truly random system which can output any letter with the same probability, but the intervention of a conscious agent has "forced" an output which would be extremely rare and which is readily recognizable to consciousness. The string is designed to be highly compressible. In any case, you can see that using the concept of compressibility as a sign of specification is not without meaning, but creates many interpretational problems. Or, take the example of analog specified information, like the classic Mount Rushmore example. The specification is very intuitive, but you have two problems: 1) The boundary between true form and vague resemblance is difficult to quantify in analog realities. 2) It is difficult to quantitatively evaluate the complexity of an analog information. For all these reasons, I have chosen to debate only a very specific subset of CSI, where all these difficulties are easily overcome. That subset is dFSCI. A few comments about this particular type of CSI: 1) The specification has to be functional.
In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. It is interesting to observe that the concept of functional specification is earlier than Dembski's work. 2) The information must be digital. That avoids all the problems with analog information, and allows an easy quantification of the search space and of the complexity. 3) The information must not be significantly compressible: in other words, it cannot be the output of an algorithm based on the laws of necessity. 4) If we want to be even more restrictive, I would say that the information must be symbolic. In other words, it has to be interpreted through a conventional code to convey its meaning. Now, in defining such a restricted subset of CSI, I am not doing anything arbitrary. I am only willfully restricting the discussion to a subset of objects which can be more easily analyzed. The discussion will be about these objects only, and any conclusion will be about these objects only. So, if we establish that objects exhibiting dFSCI are designed, I will not try to generalize that conclusion to any other type of CSI. Objects exhibiting analog specified information or compressible information can certainly be equally designed, but that's not my problem, and others can discuss that. And do you know why it's not my problem? Because my definition of that specific subset of CSI includes anything which interests me (and, I believe, all those who come to this blog). It includes all biological information in the genomes, and all linguistic information, and all software. That's more than enough, for me, to go on in the discussion about ID. So, to answer explicitly your questions: 1) The presence of CSI is a mark of design certainly under the definition I have given here (dFSCI), and possibly under different definitions. I am not trying here to diminish in any way the importance of other definitions, indeed I do believe them to be perfectly valid, but here I will take care only of mine. 2) I have no doubt that, under my definition, there is no example known of CSI which is not either designed by humans or biological information. Nobody has ever been able to provide a single example which can falsify that statement. And yet even one example would do. 3) CSI in the sense I have given is certainly an objective measure. The measure only requires: a) an objective definition of a function, and an objective way to ascertain it. For an enzyme, that will be a clear definition of the enzymatic activity in standard conditions, and a threshold for that activity. The specification value will be binary (1 if present, 0 if not). b) A computation of the minimal search space (for a protein of length n, that would be at least 20^n). c) A computation, or at least a reasonable approximation, of the number of specific functional sequences: in other words, the number of different protein sequences of maximum length n which exhibit the function under the above definitions. The negative logarithm of (c/b) * a will be the measure of the specified complexity. It should be higher than a conventional threshold (a universal threshold of 10^150 is fine, but a biological threshold can certainly be much lower). For a real, published computation of CSI in proteins in the above sense with a very reasonable method, please see: Measuring the functional sequence complexity of proteins.
Durston KK, Chiu DK, Abel DL, Trevors JT. Theor Biol Med Model. 2007 Dec 6;4:47. Freely available online at: www.ncbi.nlm.nih.gov/pmc/articles/PMC2217542/?tool=pubmed
gpuccio
May 28, 2010, 07:54 AM PDT
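The a/b/c recipe above can be written out directly. A sketch with purely hypothetical inputs (a real value for c, the functional-sequence count, has to be estimated from sequence data, as in the cited Durston et al. paper; the numbers here are made up for illustration):

```python
import math

a = 1                            # function objectively present (binary)
n = 150                          # hypothetical protein length in residues
log2_b = n * math.log2(20)       # b = 20^n, the minimal search space, in bits
log2_c = 20 * math.log2(10)      # hypothetical c = 10^20 functional sequences

# Specified complexity = -log2((c / b) * a), computed in log space for
# convenience with numbers this large.
dFSCI = log2_b - log2_c - math.log2(a)
threshold = 150 * math.log2(10)  # the 10^150 universal bound, ~498 bits
print(f"dFSCI ~ {dFSCI:.0f} bits vs ~{threshold:.0f}-bit universal threshold")
```

With these made-up numbers the string clears even the most conservative threshold; with a much larger functional fraction it would not, which is why the estimate of c carries all the weight.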
Oops on a double negative.
kairosfocus
May 28, 2010, 07:36 AM PDT
PS: As to the question of CSI metrics and tests, we can start with the simplest, functional bits. We have an Internet full of messages which are based on bits, and which plainly function. Routinely, these messages are the work of intelligence, and we have yet to see a serious case of such a message of suitable length -- say at or over 500 - 1,000 bits -- produced by noise or the mechanical forces at work in the equipment on the Net. (Notice, onlookers, how none of the usual objectors to the concepts of CSI or FSCI never produce the cluster of empirically credible counter-instances that would at once kill off biological ID claims based on the observation of informational macromolecules in the cell.) Similarly, we have a large scale software industry, and last I checked Bill Gates was not ordering peanuts and bananas by the carload for his programming staff, nor did he have deals with monkey breeders to bring in fresh stocks. Beyond that R0b knows or should know that the more sophisticated metrics have been used in test cases, and produce the same result, or show the stringency of the metric, e.g. on how a suspicious hand of cards will as a rule fall below the threshold for the Universal probability bound [though the Abel plausibility metric gives us a more practical threshold for such cases].
kairosfocus
May 28, 2010, 07:35 AM PDT
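The 500 - 1,000 bit figure can be put beside the 10^120 search-resource bound with two lines of arithmetic (a sketch of the reasoning only):

```python
import math

# 10^120 search events can enumerate at most log2(10^120) bits' worth of configs.
print(f"log2(10^120) = {math.log2(10 ** 120):.1f} bits")   # ~398.6

# A specific 500-bit configuration therefore outstrips those resources:
print(f"2^500 / 10^120 = {2 ** 500 / 10 ** 120:.1e} configs per available trial")
```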
F/N: Coded strings of course will as a rule have some redundancy in them, which can in part be squeezed out by compression algorithms. (A flat-random bit string will have equiprobable values in each digit, thus its Shannon info capacity metric is a maximum for a string of that length. In normal codes, symbol values are not equiprobable, e.g. in English E is much more probable than X, and Q is usually followed by U. In pictures, there will be areas of much the same colour and light level, etc.)kairosfocus
May 28, 2010, 07:22 AM PDT
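The redundancy claim is easy to probe: compare the single-letter Shannon entropy of an English sample against the log2(26) ≈ 4.7 bits a flat-random letter stream would carry (a rough sketch; the sample sentence is adapted from the comment above, and real English redundancy is larger still once digram structure such as Q-followed-by-U is counted):

```python
import math
from collections import Counter

sample = ("in normal codes symbol values are not equiprobable in english "
          "e is much more probable than x and q is usually followed by u"
          ).replace(" ", "")

counts = Counter(sample)
total = len(sample)
# Order-0 (single-letter) Shannon entropy of the sample, in bits per letter.
H = -sum((k / total) * math.log2(k / total) for k in counts.values())
print(f"sample: {H:.2f} bits/letter vs flat-random log2(26) = {math.log2(26):.2f}")
```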
R0b: Please. There is a difference between focusing on a main theme, and setting up and knocking over caricatured strawmen. And repeating now unfortunately standard strawman caricatures for years does not turn them into well warranted critiques. That a string exhibiting CSI will be relatively simply describable as to purpose or function [Dembski] is not the same thing as that the string itself will be fairly resistant to compression as a string [Meyer, Abel, Trevors, et al], without loss of function. As users of Zip files know, meaningful strings can be compressed somewhat, but not more than a certain limit. At the same time, we can describe a specification briefly that does not entail essentially reciting the string. As to the two PC disks in a class demonstration, Meyer is indeed correct that a blank disk weighs just as much as one with data on it, or for that matter one with random noise. Information is thus shown to be a massless, non-material property. And, when we look through the lens of information technology familiar in day to day life, we see that there is a big difference between [1] a blank, [2] a disk with noise and [3] one with say a movie on it. (Just ask any video rental store about what they would think if you were to rent a disk then return a blank or the same disk corrupted with noise.) Meyer's point as you excerpted is precisely correct but it is not fair to frame him thusly: "his reasoning has nothing to do with whether the information is specified or not, unless you think that only specified information is massless." His basic point is that information does not change the weight of a disk, so we do not look to a material property as the defining characteristic of information. Information may indeed be stored in rearranging the mass in specific ways, e.g. by changing optical properties by burning with a laser, or infusing a specific magnetic pattern on an old fashioned floppy. But the information is itself identifiable separate from any particular physical encoding, as the in-common pattern that is encoded. Also, the point is in no way inconsistent with the onward point that random unspecified noise that just happens to take up a particular configuration on disk A is very different from a movie on disk B, and in turn is different from blank disk C. A, B, and C weigh the same and have very similar chemical composition. However, the three are by no means equivalent informationally. Shifting a bit to draw out the point further: a disk with software on it will make a computer behave in a very different way from one with noise on it, as you well know. It weighs the same, but has a distinct pattern impressed on it through a convention, a code, that fits a language and an algorithm, and makes a difference to the behaviour of the computer that hosts it. As is a common experience nowadays. Where all of this becomes needlessly distractive, is that by picking at strawmen like this, the main issue -- the reality of FSCI, the difference it makes, and its importance as an empirically well warranted, commonly observed sign of intelligence -- is lost in the fuss over a side issue. Beyond a certain point, R0b, that sort of distractive, distorted, and at points consistently evasive rhetoric begins to backfire. Astute onlookers will then increasingly see that -- often, imaginary -- gnats are being strained at and fussed over, while camels are being swallowed without even a whimper. G'day GEM of TKI
kairosfocus
May 28, 2010 at 06:52 AM PDT
But I do think that Meyer and I would definitely agree that: (complex specified) “information defies materialistic explanation”. Would you?
No. The problems that I, and others, have with such statements have been discussed at great length by many people for years. Thank you for copying Meyer's words for reference. For completeness, I'll include what he said about this illustration on the Unlocking the Mystery of Life DVD:

One of the things I do in my classes to get this idea across to students is I hold up two computer disks. One is loaded with software, the other one is blank. And I ask, “What’s the difference in mass between these two computer disks as a result of the difference in the information content that they possess?” And of course the answer is zero - none. There is no difference as a result of the information. And that’s because information is a massless quantity. Now if information is not a material entity, then how can any materialistic explanation explain its origin?

Notice that his reasoning has nothing to do with whether the information is specified or not, unless you think that only specified information is massless.
And reading others’ words in that spirit is what Behe calls “charitable reading”: trying to understand what the other one really means, and not using words out of context only in order to criticize.
Yes, that's why I read Meyer's words in context, and included the context in #27. And I'm expressing what I believe to be the actual intent of his words, although, as you say, neither of us can speak for him.
Or discussing only one topic, and evading all the rest.
I didn't realize that sticking to a single topic constitutes evasion. But since you seem intent on discussing CSI, I'll note that you seem to be in the Meyer camp as opposed to the Dembski camp. For Dembski, CSI entails high compressibility, but for Meyer, it entails low compressibility. Interestingly, they both report that every CSI event of known cause is caused by design. Under which definition of CSI is that claim true, or is it true under both definitions? We should be able to answer that question by looking at the empirical data on which the claim is based. Has anyone published such data? Has anyone tested the CSI measure to see if it accurately indicates design, or if it's even an objective measure?R0b
May 28, 2010 at 06:14 AM PDT
PS: How rationality itself becomes a casualty of reductive materialism.kairosfocus
May 28, 2010 at 04:08 AM PDT
R0b (#48) Thank you for your comments. You ask:
Is the state of water — i.e. liquid, solid, or gas — a material property or a formal property? Is the difference between a hot dog and a magnetic tape material or formal?
I would say that the state of water is neither a purely material property nor a purely formal one. It depends partly on the structural arrangement of the molecules (which is a formal property), but also on material properties such as the charges on the constituent atoms, their distances from each other, the number of molecules per unit volume and the number of collisions they make with each other. The differences between a hot dog and a magnetic tape are obviously more material than formal. Dr. Meyer’s two tapes may be chemically equivalent (i.e. made of the same stuff), but they can never be materially equivalent. If they were, then one would be a perfect replica of the other. The position of an atom in space vis-a-vis other atoms is still a material property. The physical arrangement of the indentations on a tape or the grooves on a CD is to some degree a material property, as it can't be entirely abstracted from the underlying atoms. On the other hand, the information contained on the tape or CD can be abstracted in this way. Finally, you ask:
What is so special about information that its mere existence requires a special explanation?
I accept that we have natural explanations for certain kinds of information occurring in the natural world. I would say that specified information falls into a category of its own because it is semantic rather than purely syntactic. It has meaning. And the existence of a large amount of meaningful information in one object is not something that can be explained in terms of its lower-level properties, which are of themselves incapable of generating meaning. Additionally, the odds that a significant amount of specified information could be generated over the lifetime of the universe appear to be astronomically low. I've got to run, but I appreciate your questions. It looks like Dr. Meyer's book has provoked a lively discussion on this thread.vjtorley
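To put a rough number on "astronomically low", a back-of-envelope sketch (the figures are stock ID numbers, not vjtorley's own): take Dembski's oft-cited universal probability bound of about 10^150 as a generous ceiling on physical events available in the observable universe, and a 1,000-bit (125-byte) string as a modest amount of specified information. The sketch assumes each event tests one configuration against a single target, which is a simplification, but the scale is the point.

trials = 10 ** 150   # generous ceiling on available physical events
configs = 2 ** 1000  # configurations of a 1,000-bit (125-byte) string

print(len(str(configs)))       # 302 digits: about 1.07e301 configurations
deficit = configs // trials    # configurations left over per available trial
print(len(str(deficit)))       # 152 digits: a roughly 1e151-fold shortfall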
May 28, 2010 at 03:20 AM PDT
A few notes:

(NECESSARY ASIDE, PARDON: Things have paused operationally on the ground, and the usual finger-pointing recriminations and general after-the-fact kass-kass have begun. One hopes that, rhetoric and propagandistic point-scoring aside, and with due recognition of the inevitability of errors, loss and worse [why is it that suddenly Governments in the face of ruthless terrorists and narco-criminals must be perfect in forceful responses? I fear the counsels of perfection serve to excuse, encourage and enable the forces of chaos . . . ], at last Jamaica's authorities -- on both sides of the major political divides -- will excise the 50-year-old cancer on the body politic before the patient dies. And along the way, I trust we will be able to deal with inevitable blunders and worse in a fairly mature fashion, understanding what is at stake.)

________________

GP: Well said. The core issue is that strings -- the basic data structure from which others are built, generally speaking -- come in three main varieties: ordered, random, and informational/linguistic.

Ordered ones have high redundancy: state a block, repeat n times. This reflects a simple program or, in the case of, say, a simple polymer, mechanical forces acting on initial conditions. Random ones have very little compressibility and no specificity [they could as easily have been otherwise], and to reproduce one you must basically copy the string. A good example is random typing: bhfuweqyti03wnfgv. Functionally specific complex strings fulfill a language- or code-related requirement and fit in with a process or a language situation: the text of posts, or the data and code of programs, etc. That purposefulness and function implicate intent and the deliberate adaptation of contingent means to a defined end. And so, in our experience and observation, these trace directly or indirectly to designing intelligences. Especially when we see a fair degree of complexity -- 125 bytes worth is quite enough. That much is indisputable -- on pain of blatantly absurd selective hyperskepticism.

Where the real clash lies is that in DNA and associated molecules FSCI appears in a "natural" context: the living cell. Taking FSCI as a reliable sign of intelligence, on induction from our experience, the conclusion is obvious: the cell, in its origin, is a designed artifact. This is contentious, not because we have credible observation of such FSCI coming about by chance circumstances and blind mechanical forces, but because it cuts across a dominant worldview in key institutions of our civilisation [post Enlightenment, so called], where adherents have for decades dominated origins science. And, that magisterium fears the now proverbial "Divine Foot in the door." (I find the tendency to a one-sided rhetoric of fear-mongering against theism troubling, especially given the refusal to attend to the many significant positive contributions of theism and theists to our civilisation [not excepting science], and the similar refusal to acknowledge the 100+ million ghosts from experiments in radical secularism in the past 100 years.) In that context, the sort of evasions, strawman arguments and attempts to impose materialism by the back door that seem to be the standard responses from the secularist establishment in science are quite revealing, and sad.

_____________

R0b: I have but little interest in discussing who said what when in what book. The time for a history-of-ideas discussion is not yet. 
For, on the merits, there is an issue that points to a serious blind zone in institutional science, and one with potentially devastating consequences for our civilisation.

(A SADLY NECESSARY ASIDE: From the days of Plato's The Laws, Bk X, the amoral, might-makes-right implications of evolutionary materialist philosophies let loose on the public square have been well known. Such a worldview has in it no IS that can adequately ground OUGHT. Many people who rightly accept that ought is real can see from this that the only worldview that makes sense of our status as clearly morally grounded and governed, rational creatures who can at least sometimes discover and warrant truth to the degree of credibility where "knowledge" is a reasonable term -- regardless of refusal of imprimatur by the august scientific institutions, dominated as they currently are by atheists and agnostics -- is theism in one form or another. Especially, Judaeo-Christian theism.)

What I ask you to do is to seriously address the issue of FSCI, and particularly digital functional information and its root. Given what is plainly at stake on many levels -- e.g. it is evident that rationality itself is a casualty of reductive materialism -- a priori materialism, or methodological assertions tantamount to such, are not good enough.

Can you provide clear empirical evidence that functionally specific, complex information is a routine product of blind chance and necessity? Or that such FSCI, once originated, will improve itself to the level of further increments of FSCI, without front-loading active information that again traces to intelligence? Or even that significant random shifts in FSCI, preserving function all the way, on filtering for function at each step, can credibly move from one type of program to another of enhanced complexity and function, where the storage capacity required for the program implementing the latter function is orders of magnitude in excess of the original?

So far, after years here at UD, the answer has consistently been no, by dint of evasions and side-tracks. We know that, routinely, intelligence produces FSCI, and that version improvements often expand the functionality and amount of information. [Office 97 fit into one CD . . . ] So, on inference to best explanation from the evidence, FSCI -- or whatever label you will -- is a credible sign of design, even where we did not happen to see the design process in action, and even where we could not have been the designers. That is what you need to answer to.

G'day GEM of TKIkairosfocus
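For what it is worth, the scaling behind that "no" can be simulated in a few lines of Python (a toy illustration, not from the exchange above; it tests exact matches only, a simplifying assumption): the expected number of blind draws needed to hit a target grows as 26^n with target length n, so each added character multiplies the bill by 26, and 125 bytes of functional text is utterly out of reach for unassisted random search.

import random
import string

def draws_to_hit(target: str, seed: int = 1, cap: int = 5_000_000) -> int:
    # count blind random draws of lowercase strings until one matches target
    rng = random.Random(seed)
    n = len(target)
    for i in range(1, cap + 1):
        if "".join(rng.choice(string.ascii_lowercase) for _ in range(n)) == target:
            return i
    return cap  # gave up: already past any reasonable budget

for target in ("a", "at", "ate", "gate"):
    print(target, draws_to_hit(target))  # expected cost ~26, ~676, ~17576, ~456976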
May 28, 2010 at 03:03 AM PDT
R0b: I am happy that we agree, although I am not so sure that you understand correctly Meyer's point of view. Maybe Meyer has not explained himself correctly; maybe you insist on interpreting him in a biased way. Anyway, in the ultimate sense it is not up to me to speak for Meyer. Or for you. I can only express my opinions.

But I do think that Meyer and I would definitely agree that (complex specified) "information defies materialistic explanation". Would you? That's really the only important point, if you don't stick to your interpretations of words only for the sake of argument.

For reference, and for those who are reading, I copy here Meyer's words:

A blank magnetic tape, for example, weighs just as much as one “loaded” with new software—or with the entire sequence of the human genome. Though these tapes differ in information content (and value), they do not do so because of differences in their material composition or mass.

And mine:

Now, the point IMO is simply that you cannot in any physical way define the difference between designed, functional information and truly random information only from the physical properties of the physical medium, including the form which supervenes on the medium. From a purely physical point of view, functional information appears exactly the same as random information. That's why I call it "pseudo-random". That is probably your point, and in this form I agree with you. There is no physical or mathematical way of distinguishing the two without the intervention of consciousness, because it is only consciousness which defines the function and can recognize it. Only consciousness creates true functional specifications. But you can definitely recognize the difference between the two if you use a conscious being as a cognitive instrument. And this is a true difference, because conscious beings are part of reality, functions are part of reality, and it is perfectly correct to use parts of reality in our attempts to understand reality (science).

Where is the difference? You should only have the care, in reading Meyer's paragraph, to understand that, in this context, he is using the word "information" in the sense of complex specified functional information (as can easily be seen from his examples: new software, the sequence of the human genome). So, what Meyer is saying here is (IMO) that you cannot recognize CSI from a purely physical point of view. Which is exactly my point. And to say that complex specified information defies materialistic explanation is the same as saying that you cannot explain it without including consciousness in your map of reality. In this context, "materialistic" obviously means "materialistic reductionist", or if you prefer "eliminative materialist" (once I learn a term I like, I do stick to it!).

So, you see, when writing one often uses words without a full detailed definition, hoping that the reader will catch the right meaning from the context. That's why human language is a context-dependent language. And reading others' words in that spirit is what Behe calls "charitable reading": trying to understand what the other one really means, and not using words out of context only in order to criticize. Or discussing only one topic, and evading all the rest.gpuccio
May 27, 2010 at 02:09 PM PDT
gpuccio:
That’s correct. And so?
So we agree.
My point (and, I think, Meyer’s too) is that the same information can be “written” on different physical supports, and yet it conveys the same “form” to a conscious “reader”.
That point is obviously correct, but it's not the point Meyer was making. He presented two tapes with different information, and pointed out that the tapes have the same mass and material composition. What conclusion are we supposed to draw from that? He doesn't say explicitly in SitC, but he has presented the same illustration with computer disks elsewhere, and concluded that information defies materialistic explanation, and that this creates a fundamental challenge to materialistic evolutionary scenarios. By this same logic, if the loaded disk contains weather data, then this disproves materialistic explanations for weather.
So, the point is not in the relationship with the physical medium
But that's what Meyer was talking about, and that's the only topic I've been discussing.R0b
May 27, 2010 at 12:15 PM PDT
kairosfocus, I think I see how we were talking past each other. In #39 I was referring to Dembski's old LCI, for which he offered a proof in No Free Lunch. I was not referring to his new LCI, which he discussed in Life's Conservation Law and said was not provable. I should have been more specific -- sorry for the confusion.R0b
May 27, 2010 at 11:01 AM PDT
I'm just listening to Casey Luskin's podcast (Steve Matheson's Spell-Checking Gotcha Game) on his article in the book (Gotcha! On Checking Stephen Meyer's Spelling & Other Weighty Criticisms of "Signature in the Cell"). At 1'20'', he states:
Is there a reason why evolutionists so often increase the ad hominem attacks when their case is weak?
Perhaps it is the same reason why Luskin's article is printed in the section The Attack of the Pygmies: it seems to be human nature...DiEb
May 27, 2010 at 05:30 AM PDT
R0b: Regardless of whether the data/information on the tape was produced by a conscious mind, and regardless of whether it constitutes dFSCI, the point is that it supervenes on the physical properties of the tape.

That's correct. And so? My point (and, I think, Meyer's too) is that the same information can be "written" on different physical supports, and yet it conveys the same "form" to a conscious "reader". You can say that that is true of both functional information and simple data. I agree with that. But simple data have no complex functional form in themselves (they are not complex specified information), and so they cannot convey a complex functional meaning to the reader, while designed information can and does.

So, the point is not in the relationship with the physical medium: in reality, the physical medium is not important, although some physical medium is necessary to convey information through the physical interface of consciousness (the body). But, as all software people know, the software is independent of the hardware, even if it needs some hardware medium to be written and communicated.

But the real point is the objective difference between functional information and random information. I would like to mention again that we have in general 3 kinds of digital information:

a) compressible (which can be generated by necessity laws through a shorter algorithm);

b) truly random (no significant compressibility, but no functional interpretation recognizable);

c) functional (pseudo-random strings: no significant compressibility, but which convey, through some symbolic code, a definable function).

Only c) can be recognized, with empirical certainty, as designed. Only c) is never the product of non-conscious, non-intelligent processes. (A small compression sketch at the end of this comment illustrates the point.)

Now, the point IMO is simply that you cannot in any physical way define the difference between designed, functional information and truly random information only from the physical properties of the physical medium, including the form which supervenes on the medium. From a purely physical point of view, functional information appears exactly the same as random information. That's why I call it "pseudo-random". That is probably your point, and in this form I agree with you. There is no physical or mathematical way of distinguishing the two without the intervention of consciousness, because it is only consciousness which defines the function and can recognize it. Only consciousness creates true functional specifications. But you can definitely recognize the difference between the two if you use a conscious being as a cognitive instrument. And this is a true difference, because conscious beings are part of reality, functions are part of reality, and it is perfectly correct to use parts of reality in our attempts to understand reality (science).

As I have already said in other threads, if functions were not an integral part of our science, why would all protein databases list the known function of each protein (when it is known)?

So, I am not surprised if a materialist reductionist cannot see any difference between dFSCI and random data: he denies consciousness, or at least its fundamental role in reality and in the construction of knowledge, so for him it's perfectly consistent to deny CSI and similar concepts, and to deny the ID theory as "unscientific". 
That's simply the logical consequence of his cognitive bias, of refusing to acknowledge the existence and role of a very important part of reality, of refusing to admit that there is a huge part of experienced facts which he cannot even begin to explain (all conscious experiences). In the same way, it is easy to see how, for those (like me) who consider consciousness, its processes and its laws as an integral part of scientific knowledge, it is perfectly natural to recognize the objective difference between CSI and non specified information, between design and randomness, and to appreciate the inference to design in biological information as a self-evident conclusion, and as the necessary premise for any reasonable theory of life.gpuccio
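The a/b/c distinction above can be sketched in Python (an illustration only, which uses zlib-compressed text as a stand-in for class c, on the grounds that compressed bytes look statistically patternless yet decode to meaning under a known code): a compression ratio cleanly separates class a from b and c, but cannot separate b from c; for that one needs a "reader" holding the right decoding convention, which is the role claimed above for the conscious observer.

import os
import zlib

def ratio(data: bytes) -> float:
    # compressed size over raw size; near 1.0 means no significant compressibility
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"ab" * 500               # (a) law-like order, highly redundant
random_noise = os.urandom(1000)     # (b) truly random
message = (b"functional strings obey a code and do a job, yet can look "
           b"statistically patternless to anyone without the code. " * 10)
functional = zlib.compress(message, 9)  # (c) stand-in: pseudo-random bytes

print("ordered    %.2f" % ratio(ordered))       # near 0.0
print("random     %.2f" % ratio(random_noise))  # about 1.0
print("functional %.2f" % ratio(functional))    # also about 1.0: b and c look alike
assert zlib.decompress(functional) == message   # yet the right "reader" recovers it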
May 27, 2010 at 02:48 AM PDT