Uncommon Descent – Serving The Intelligent Design Community

Message Theory – A testable ID alternative to Darwinism – Part 1


Message Theory is a testable scientific explanation of life’s major patterns.

That claim should intrigue you. If I heard such a claim, I would nearly leap across the room to demand more details; otherwise I couldn't sleep that night. That is because I highly value testability, just as all scientists do (in physics, chemistry, geology, medicine, engineering, et cetera), and just as evolutionists do in all their court cases.

Message Theory should even intrigue evolutionists, because it offers what they have repeatedly demanded from their opponents – a testable, scientific alternative to evolution. Yes, that is exactly what they demanded. In reality, the evolutionists' response has been exceedingly superficial, falling into two categories: (1) silence, or (2) misrepresentation of Message Theory. (If you are aware of exceptions, let me know.) Therefore, my posts here will not much address the evolutionists' response to Message Theory, since a serious response scarcely exists.

The creationist/ID response has been more varied, and I focus on that here. Many see Message Theory as exciting and promising. For example, Origins Magazine reviewed it, saying, "I can give no greater accolade than urging that this book should now be the starting point for all of our discussions." Phillip E. Johnson calls it "Bold and fascinating … a comprehensive theory." Carl Wieland calls it "Masterpiece … incredible … of immense value." Michael Behe and many others have given glowing reviews (see this link). To which I say: Thanks! That's a good start.

However, some creationists/ID-ists are hesitant to investigate Message Theory, and the central reason is its claim of testability – its claim to make numerous coherent, risky predictions about what we should see, and what we should not see. Unfortunately, many creationists/ID-ists do not value testability, and some aggressively dislike it. So before we reach any details of Message Theory, we encounter their leading objection – testability itself.

For example, some creationists say, "Aren't you claiming to test God?" To which I answer: No. Message Theory is about life's data – many observations that must be explained – and Message Theory explains those observations in a testable (falsifiable, vulnerable, empirically risky) manner. It meets all the criteria for a scientific theory. A theory is tested, not God. The thought process is no different from that concerning, say, the Piltdown fossils, which needed an explanation. These fossils were a hoax created by an intelligent designer – a testable explanation that no scientist disputes. We need not test the intelligent designer (indeed, the designer of the Piltdown Hoax remains unidentified); rather, we test the theory. In science we test explanations (i.e., theories), not God.

Also, deep down, many creationists want the ‘certainty of faith,’ and they are not yet comfortable with the inherent riskiness of science – they haven’t learned to balance the two types of thought: risk and certainty.

The classic creationist organizations (ICR, AIG, CRS) often do not value testability (and sometimes they explicitly oppose it). Instead, they use a different criterion of science; a different value system. They claim that "science must be repeatable, and since origins are not repeatable, creation and evolution are equally unscientific." They are deeply mistaken. For example, we frequently execute murderers (which is not a flimsy thing to do) based solely on scientific evidence, even though the murder is not repeatable.

Instead, repeatability is how we identify naturalistic laws (as opposed to the work of intelligent beings); therefore the creationists’ demand for ‘repeatability’ is implicitly a demand that science must be purely naturalistic and cannot include an intelligent designer. They are shooting themselves in the foot!

Thankfully, the ID organizations do not take that approach. Theirs is more sophisticated, yet they tend to undervalue testability nonetheless (sometimes by redefining it into obscurity).

In my many discussions with my fellow creationists/ID-ists, the foremost obstacle to Message Theory is their devaluing or misunderstanding of testability. So let me pause to underscore this for my readers: If you do not value testability highly, then leave now, or you will only waste your time, and mine. Let me put it stronger: Anyone (creationist, ID-ist, or evolutionist for that matter) who cheapens testability is a danger to science, and moreover, they miss many opportunities to advance creation/ID as superior science.

Let me put my claim stronger still: Message Theory is testable science, and macro-evolutionary theory (as practiced by its modern proponents) is not. I employ testability – the same tool evolutionists use in all their court cases – to turn the tables on evolutionists.

After handling some comments, I will next discuss Message Theory proper.

– Walter ReMine

The Biotic Message – the book

Comments
Rob: Re: Is it possible for information to specify something but not be specified by something else? Read here, on lucky noise. What is in principle possible -- chance can access any configuration of a contingent system whatsoever -- under the relevant circumstances becomes so maximally improbable on the gamut of our observed universe, that -- even absent direct evidence -- we confidently infer on best, observationally anchored, explanation, that intelligence is responsible for FSCI. GEM of TKI
kairosfocus
February 19, 2009 at 02:58 PM PDT
OOPS: directed and credibly UN-directed
kairosfocus
February 19, 2009 at 02:48 PM PDT
Rob: There is a history there on the terminology. It starts with the noted origin-of-life researcher Orgel, in 1973:
Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
Thus, even CSI had its modern discussion roots in a molecular-biological context. Dembski, recognising the general applicability of the concepts implicated in the above sort of remarks, and in light of his observations on how statistical and mathematical reasoning sought to distinguish two types of contingency: directed and credibly directed, has sought a general mathematical framework and associated models. (How successfully may be debated, but I suspect he has been significantly more successful than his detractors will admit.) Going beyond that, over the past several years at UD, some of the commenters and contributors -- going back to the sort of remarks that we read in Thaxton et al's TMLO ch 8 on Yockey, Wickens etc -- have begun to use FSCI as a descriptive term for just what is stated: functionally specified, complex information, such as we see not only in DNA but also in computers, cybernetics, telecomms etc. In parallel with all of that, since 2005, Trevors & Abel and now Durston, Chiu et al are using functional sequence complexity [FSC] as a related and measurable concept, contrasted with orderly and random sequence complexity. As at 2007, 35 measured values of FSC in Fits -- functional bits -- have been published for proteins and related molecules. As to the use of DNA sequences in a context of recognising their uniqueness -- but not necessarily having identified function -- that is simply high-tech fingerprinting. When the regulatory language[s?] increasingly evident in DNA begin to be cracked, then we will be in a position to say a lot more on FSCI in DNA. (I for one look forward to being in a position to reverse engineer the self-assembling, self-directing factory technology at work here. I can think of a lot of possible areas of application of such science and technologies! Just, this time around, please, let's keep it out of the hands of the generals!) But already, just the protein coding portions are telling us plenty. GEM of TKI
kairosfocus
February 19, 2009 at 02:34 PM PDT
Also, jerry, in regards to FCSI vs. CSI, you said in another thread:
In FCSI the information under analysis is doing the specifying and is easily understood when expressed that way. In CSI the information is what is specified (the opposite of FCSI) and does not necessarily have a function nor a logical connection to anything and that is where the morass is.
So answer me this: Is it possible for information to specify something but not be specified by something else?
R0b
February 19, 2009 at 02:13 PM PDT
FSCI is a subset of CSI. Is it that hard to understand?
No, I understand. What I don't understand is that a lot of ID proponents, including Meyer and Joseph, fail to choose what you think is the best term, and it's no big deal. But when JayM mirrors Joseph's terminology, it's cause for complaint.
Just trying to improve everybody’s reading comprehension skills so this point does not come up again.
Whose reading comprehension are you trying to improve?
If Craig Venter puts his name in the DNA using some code, a different functional relationship is being used. He is using the nucleotides to specify a name while a gene is specifying a protein.
You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter's watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?
R0b
February 19, 2009 at 01:51 PM PDT
R0b, FSCI is a subset of CSI. Is it that hard to understand? Dembski is attempting to model all intelligent actions, not just those contained in biology, which are more easily modeled because they are so obvious. Meyer uses the examples of language and computer software to describe the information in DNA, so it would probably have been best if he had used FCSI to make the distinction so the slow witted can understand better. Just trying to improve everybody's reading comprehension skills so this point does not come up again. Just so you can understand it better: If Craig Venter puts his name in the DNA using some code, a different functional relationship is being used. He is using the nucleotides to specify a name while a gene is specifying a protein. It is likely that a different specifying scheme is being used for a lot of the remaining part of the genome. If you do not understand this, I or someone else will spell it out in more detail so these misconceptions don't go on and get repeated.
jerry
February 19, 2009 at 01:18 PM PDT
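For readers following jerry's watermark point above, here is a minimal sketch of how the same nucleotides can specify letters rather than amino acids. The 3-bases-per-letter packing and the lettering scheme below are assumptions invented for this illustration; this is not Craig Venter's actual watermark code, which is not given anywhere in this thread.

```python
# Toy watermark scheme, for illustration only. It is NOT Craig Venter's actual
# watermark code; it just shows nucleotides "specifying" letters instead of a protein.

BASES = "ACGT"  # four symbols, i.e. 2 bits of choice per nucleotide position

def encode_letter(ch):
    """Map one letter A-Z to a triplet of bases (4^3 = 64 >= 26 symbols)."""
    n = ord(ch.upper()) - ord("A")
    if not 0 <= n < 26:
        raise ValueError("letters A-Z only in this toy scheme")
    digits = [(n // 16) % 4, (n // 4) % 4, n % 4]  # n written in base 4
    return "".join(BASES[d] for d in digits)

def encode_name(name):
    return "".join(encode_letter(c) for c in name if c.isalpha())

def decode_name(dna):
    letters = []
    for i in range(0, len(dna), 3):
        n = sum(BASES.index(b) * 4 ** (2 - j) for j, b in enumerate(dna[i:i + 3]))
        letters.append(chr(ord("A") + n))
    return "".join(letters)

watermark = encode_name("VENTER")
print(watermark)               # 18 bases that specify a name, not a protein
print(decode_name(watermark))  # VENTER
```

The point survives the toy example: the same four bases carry a different specification depending on which reading convention is applied to them, which is the distinction jerry draws between a gene specifying a protein and the same chemistry specifying a name.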
jerry, so when Joseph says "specified complexity", we all know that he means FCSI. But when JayM uses the same term and its synonymous (according to Dembski and Meyer) term CSI in his response, we know that he doesn't mean FCSI. Got it. I'll remember that, lest my reading comprehension be disparaged again. Does the same rule apply to the Stephen Meyer quote in #41?
R0b
February 19, 2009 at 12:06 PM PDT
R0b, I think your reading comprehension skills need improvement. If JayM knew the difference (and he should know the difference, given all the comments that have been made about this here), the proper response is not to chastise Joseph with a meaningless comment but to say something like, "Joseph, you should use FCSI and not just plain specified information." So Joseph was just fine because we all knew what he was talking about, but JayM was showing his stripes. If you did not see this, then, as I said, your reading comprehension needs some work.
jerry
February 19, 2009 at 11:24 AM PDT
jerry:
Take the comment in question above. How many times have we said that relevant to biology the concept is FSCI not CSI and yet this comment repeats the non issue again.
Perhaps your annoyance should be directed toward Joseph, to whom JayM was responding. Joseph said:
Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
R0b
February 19, 2009 at 11:08 AM PDT
Patrick, given your method of calculation, how did you apply it to Allen's examples? I didn't see one piece of sequence data in the whole post, so how did you do your calculations? Second, why don't you count each repeat of the amino acid triplet in the Antarctic fish in your informational equation? If they repeat over 4 times (which they do), they will exceed 100 informational bits. My last few comments have been hung up in moderation, but I hope this one won't be. -- I was gone for a couple days, and much conversation has taken place, so I will embed my response here for future readers: In the other link I gave I noted that the "ice fish carr[ies] a partially deleted copy of alpha1 and lack[s] the beta globin gene altogether. These deletions are inextricably linked to its lower blood viscosity..." IOW, a destructive mutation that gives a benefit in this limited environment. The number of repeats apparently required for this "functionality" is 4 repeats, or 96 informational bits. AFAIK additional repeats are unnecessary duplications. As I mentioned, tying function to biological information is the hard part, so I may be wrong on this and this example might require more than 100. No big deal either way. Not to mention, I suppose it could be argued that a degenerative change like this should not even count as FCSI, although I'd leave that determination to the experts. I personally believe there will be found special exceptions where 500+ informational bits can be exceeded by non-foresighted processes, and ID theory will need to account for them, but that's just my opinion. - Patrick
Khan
February 19, 2009 at 10:52 AM PDT
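To make the repeat arithmetic in the exchange above explicit, here is a minimal sketch that simply applies Patrick's stated convention of 8 informational bits per amino acid to a run of three-residue repeats; the repeat counts come from the comments, not from sequence data.

```python
BITS_PER_AMINO_ACID = 8  # Patrick's convention from the thread (6 bits rounded up)

def repeat_bits(repeats, residues_per_repeat=3):
    """Informational bits attributed to a run of amino-acid repeats."""
    return repeats * residues_per_repeat * BITS_PER_AMINO_ACID

print(repeat_bits(4))  # 96  -- the "4 repeats or 96 informational bits" figure
print(repeat_bits(5))  # 120 -- why more than 4 repeats of the triplet exceeds 100 bits
```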
GSV #98
Can you show me the calculations to get the ‘under 100 bits’ number please? It would be very useful.
Calculating is actually very easy. The shorter version is that 2 bits are required to represent each nucleotide. The examples being touted typically consist of only 3 amino acid changes. 6 informational bits should be enough to encode each amino acid, but I personally bump it up to 8 bits for ease of calculation (which should also help account for any minor data compression). Thus they're 24-informational-bit indirect or direct pathways. More specifically, the trypsinogen gene in Antarctic notothenioid fish -- which I think is probably the best example available at this time -- consists of repeats of three amino acids. I'll copy over my English word explanation, which should make things easy to understand.
the Explanatory Filter can take multiple types of inputs (which also makes it susceptible to GIGO and thus falsification). Two are (a) the encoded digital object and (b) hypothetical indirect pathways that lead to said objects. My name "Patrick" is 56 informational bits as an object [each letter is represented by 8 bits]. My name can be generated via an indirect pathway in a GA. An indirect pathway in a word-generating GA is likely composed of steps ranging from 8 to 24 informational bits. Let's say you take this same GA and have it tackle a word like "Pseudopseudohypoparathyroidism", which is 30 letters or 240 informational bits. It can be broken down into functional components like "pseudo" (48 informational bits) and "hypo" (32 informational bits). Start with "thyroid" (56 informational bits). For this example I'm not going to check if these are actual words, but add "ism", then "para", and then "hypo". "hypoparathyroidism" is a functional intermediate in the pathway. The next step is "pseudohypoparathyroidism", which adds 48 informational bits. Then one more duplication of "pseudo" for the target. That may be doable for this GA, but what about "Pneumonoultramicroscopicsilicovolcanoconiosis" (360 informational bits) or, better yet since it's more relevant to Dembski's work (UPB), the word "Lopadotemakhoselakhogaleokranioleipsanodrimhypotrimmatosilphiokarabomelitokatakekhymenokikhlepikossyphophattoperisteralektryonoptokephalliokigklopeleiolagōiosiraiobaphētraganopterýgōn" (1464 informational bits). I'm not going to even try and look for functional intermediates.
And I'd add that none exist, although the entire word consists of functional components. So someone could argue that an indirect pathway could duplicate all of them from other words and somehow assemble them into a coherent whole. Now the hard part is looking at the raw code and figuring out what biological information corresponds to which biological functionality. Never mind if the entire system is like a self-decompressing executable... so keep in mind these are estimates for the "true" informational content.
Patrick
February 19, 2009 at 10:21 AM PDT
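Here is a minimal sketch of the word tally Patrick describes above, assuming his convention of 8 bits per letter. It only re-does the arithmetic for the words and components he names; it is not a general method for finding functional intermediates.

```python
BITS_PER_LETTER = 8  # Patrick's convention: one byte per letter

def word_bits(word):
    """Informational bits of a word treated as an encoded object."""
    return len(word) * BITS_PER_LETTER

print(word_bits("Patrick"))                         # 56
print(word_bits("Pseudopseudohypoparathyroidism"))  # 240 (30 letters)

# The functional components Patrick lists, tallied the same way:
for part in ("thyroid", "ism", "para", "hypo", "pseudo",
             "hypoparathyroidism", "pseudohypoparathyroidism"):
    print(part, word_bits(part))  # e.g. pseudo = 48, hypo = 32, thyroid = 56
```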
jerry: You have raised a serious point. Cf here from 172 on, as a current case in point. G
kairosfocus
February 19, 2009 at 09:53 AM PDT
I just noticed "Unfortunately, as has been discussed extensively in multiple threads here, there is no rigorous definition of specified complexity that can be objectively applied to arrive at the same answer by different, independent individuals. Further, it has not been demonstrated that such information is uniquely the product of intelligence." We have been getting a lot of incredible statements from people here lately. Thank God for the moderation changes that let in the people who have problems with ID and who demonstrate how ill informed they are about the debate, so that those who never comment and are honestly seeking information can see what one side has to offer versus the other. We have to assume that those who come here represent the best out there, and as such it means the fights are easier than we thought. A great example is Allen MacNeill, an evolutionary biologist, and his thoughts on macro evolution. The people who come here cannot use the usual ad hominems or the inane arguments used elsewhere and expect to get away with them to distract from the content of their arguments. They are forced to deal with facts and logic and it is amazing to see how they dodge both. Take the comment in question above. How many times have we said that relevant to biology the concept is FSCI not CSI and yet this comment repeats the non issue again. And it is easy to assign some calculations to this information, and those here using the concept bend over backwards to be conservative on the magnitude of the calculations. If one wants to question the proposition that this type of information is the product of intelligence alone, then I suggest they find even one small example of something similar that is not the product of intelligence. Of course one never gets an answer to this, but only that it is unfair to say it cannot be due to non intelligent causes. Well, we do not actually say that. We say it is highly likely that it has an intelligent origin, not that it is absolute. We say that it is highly unlikely that it is due to non intelligent origins. And we have the data to show that. The objections are getting infantile.
jerry
February 19, 2009 at 09:08 AM PDT
To Patrick: "I've been having discussions about macroevolution over the last month or so and the examples being highlighted would probably qualify as macroevolution under MacNeill's definition but unfortunately from an informational basis they were all well under 100 informational bits and also did not result in IC objects." Fantastic! I am having an email conversation with someone about this very subject and I need all the help I can get. Can you show me the calculations to get the 'under 100 bits' number please? It would be very useful.
GSV
February 19, 2009 at 08:37 AM PDT
JayM, you haven't "shot down" anything. Fortunately it has been demonstrated that such information requires agency involvement. And as you have been told, empty claims of MNs do not amount to evidence. Do you think we determine artifacts by flipping a coin? Believe it or not, we have tried-and-true design detection methods. And until someone, ANYONE, can demonstrate IC or CSI coming into existence via nature, operating freely, I say it is safe to infer it cannot do so. Add to that we have direct observational knowledge of agencies bringing both into existence, and the design inference is solidified. And yes, as with ALL scientific inferences, the design inference can either be confirmed or refuted with future knowledge.
Joseph
February 19, 2009 at 08:17 AM PDT
Mark Frank sez:
The subtle point that the argument leaves out is that both high information content and irreducible complexity are defined in terms of low probability of an alternative explanation. To see that this is true you only have to ask yourself whether you would still have high information content or irreducible complexity if you had a plausible explanation. What appears to be a positive argument is actually a negative argument in disguise.
That is incorrect. The PROOF -- that is, mathematical proof -- is the IMPROBABILITY. The POSITIVE case comes from experience. That is, every time we have observed X degree of IC and knew the cause, it has ALWAYS been via an intelligent agency. CSI is the same: EVERY time we have observed CSI and knew the cause, it has ALWAYS been via an intelligent agency. IOW those are POSITIVES. And in both cases we have NEVER observed nature, operating freely, doing so. So yes, all that has to be done to falsify the inference is to demonstrate that nature, operating freely, CAN account for it. That said, all YOUR position amounts to is "we haven't observed the designer(s) in action, therefore nature did it."
Joseph
February 19, 2009 at 08:08 AM PDT
Especially when you consider that St. Bernards are dogs, as well as toy poodles, etc.
Collin
February 19, 2009 at 07:53 AM PDT
On MacNeill's page: I was first excited, then thoroughly disappointed, when reading it. I thought that since MacNeill is constantly active in this area he would have known of better examples that perhaps I was unaware of. Oh, well. Joseph at #44 highlighted the main issue: definitions. MacNeill starts with an overly broad definition. That definition may have served well enough 20+ years ago, but we're not just considering "large-scale pattern[s] of change over time"; we're looking at the specific informational basis for these patterns that we now have access to. I've been having discussions about macroevolution over the last month or so and the examples being highlighted would probably qualify as macroevolution under MacNeill's definition but unfortunately from an informational basis they were all well under 100 informational bits and also did not result in IC objects. Stripped of their informational basis, these examples become meaningless talking points that do not belong in a modern debate. BTW, Dave, why insult MacNeill over this? Just point out how he's incorrect. Venus Mousetrap #73: Already discussed before. In short, the major problem is that we don't have any indirect pathway to use as a starting basis for a hypothesis. bfast #78
If I understand ReMine correctly, he did his calculations assuming about 1.5% difference between human and chimp. The most evolutionary value I have been able to find these days is more like about 6% difference in coding DNA.
To put that in perspective here is a quote about the draft Neandertal genome:
So far the results indicate that there is a roughly eight to 12.8 percent divergence between Neandertals and human reference sequences.
Although I should note the caveat that the current sequence data represents about 63 percent of the genome. And the informational difference may only correspond to data which controls systems irrelevant to the debate at hand. As in, we don't know the exact percentage of information that defines the key differences between chimp and human. But to know that we'll first have to figure out how the biological code works in its entirety... At the same time, everything I've read about Neandertals indicates that their technology, music, culture, etc. was pretty much the same as "humans". Considering the changes we see in dogs, I've felt that Neandertals should probably be considered just another variant of human, sort of like how labradors and golden retrievers are both dogs. EDIT: Joseph and I discussed the repercussions of the informational divide in a previous thread.
Patrick
February 19, 2009 at 07:17 AM PDT
Mr. MacNeill, I was in the commercial sector all my adult life, so, not surprisingly, your question about why I, like you, failed to become a professor at a university is nonsensical. In the commercial sector I was quite successful and retired young to pursue less serious things, like spanking evolution lecturers from Cornell.
DaveScot
February 19, 2009 at 07:03 AM PDT
PS: Laminar, note where there is a 1-bit string in my JK event switch example: the Q/NOT-Q outputs and internal latch.
kairosfocus
February 19, 2009 at 06:31 AM PDT
Jerry: Point. And the case in view -- US$ 1/2 mill for indoctrination in materialism under false, Lewontinian colour of science -- is sobering. GEM of TKI
kairosfocus
February 19, 2009 at 06:24 AM PDT
Laminar: I cited the bouncy switch to show precisely the gap between the "simplicity" of on-paper, in-theory modelling vs the on-the-ground complexities of real physical systems. The precise context is one in which someone is trying to get around the import of observing FSCI by suggesting ever shorter calling strings/pgms, until you find a short enough one to say FSCI has "vanished." I have pointed out that, taking the shortest possible string length, 1 bit, the realities of PHYSICAL functionality make for a lot more complexity than is evident on the surface. In short, FSCI has NOT been "disappeared." GEM of TKI
kairosfocus
February 19, 2009 at 06:21 AM PDT
"How did chance create life? I say - that is not my concern. I am a chance theorist. I just look for evidence of chance." The hypocrisy of statements like this really extends the envelope of credulity. No one today is inhibiting the study of chance as an explanation of anything. And no one after hundreds of years of study has yet found any phenomenon in this world due to chance that results in the kind of complexity we see in life. Nor has anyone found this type of complexity the result of law either. What is being done is that chance as an explanation is being shoved down the throat of the students of this world as an explanation for a major issue in science and life when there is no evidence to support it. And why is this nonsense being imposed on others and why is this nonsense defended by people who come here when there is no basis for it and they can not provide any. Maybe we could have a contest to find the proper words to describe this behavior. Are you not aware of the debate? Of course you are and given that, the statement made is ultra illuminating.jerry
February 19, 2009 at 05:56 AM PDT
Switch bounce: A bouncy switch isn't a binary string generator; it is a variable-length bit stream generator. If you design a switch with a debounce circuit then, for the purposes of a paper model, you can treat the 'switch' as a single entity and ignore the details of how the debounce properties are implemented. From the point of view of the system receiving the switch input, it is not necessary to 'know' if the switch has a particular debounce circuit, just as long as it behaves as a debounced switch. Taking your approach, you could argue that the paper model must take into account the exact structure of the switch right down to the atomic level. No two switches would be alike even if they were functionally equivalent and their subtle differences had no effect on their function. Your point about the differences between paper models and reality is important, but it is not always pertinent to every aspect of every model. It is important to pay attention to the subtleties of something you are trying to model, but it is also important to understand what is relevant to the task at hand. Taking every atom into account whenever you model something is practically impossible.
Laminar
February 19, 2009 at 03:57 AM PDT
PS: Switch bounce. This is a case in point of the difference between paper models and reality. A switch is a 1-bit entity, i.e. the shortest digital string. A simple, easy case of how a short string can trigger a much longer one, no? Nope. For a switch is a mechanical device and has dynamics, so that flicking it physically triggers multiple contacts across milliseconds, which can easily derange a system. (One workaround is to use conductive, soft rubber switches [nice, overdamped behaviour, no bounces] . . . and there are tradeoffs on the number of expected operations in the system's reasonable working life.) My favourite solution was to use a JK flipflop, with the switch set so that once it goes to the on state, it latches the f/f to its storage state. That is, tie J and K to the NOT-Q o/p, so that it is inactive high (on system turnon, force a reset on the reset input for the f/f). The actual mechanical switch then feeds the clock i/p. On triggering, we not only go to the 10 o/p state but latch the JK on its storage state, with J = K = 0. On handling the interrupt so triggered, reset the f/f. [This also automatically prevents a further interrupt on the switch triggering before you are ready to handle it -- and your design has to factor in "lost" inputs like that. That way you do not get out of control! An easy case for this: the IRQ handler takes a few tens of ms, and you do not expect real signals to repeat like that, and if they do, it won't make a serious difference.] On paper, real simple. On the ground, a lot more subtle than that. GEM of TKI
kairosfocus
February 19, 2009 at 03:22 AM PDT
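For readers without a hardware background, here is a small software analogue of the latch-until-handled idea described above: the first edge from a bouncing switch is captured, later bounces are ignored, and the latch is cleared only once the 'interrupt' has been serviced. It is an illustrative simulation, not the actual JK flip-flop circuit.

```python
import random

class LatchedSwitch:
    """Software stand-in for latching the first switch edge and ignoring bounce."""
    def __init__(self):
        self.latched = False

    def edge(self):
        """Called for every raw transition; True only for the first unhandled one."""
        if self.latched:
            return False     # bounce (or premature repeat) ignored until reset
        self.latched = True  # analogous to the flip-flop capturing the event
        return True

    def reset(self):
        """Called by the handler once the event has been serviced."""
        self.latched = False

def bouncy_press(max_bounces):
    """One physical press arrives as several raw transitions."""
    return [1] * (1 + random.randint(1, max_bounces))

switch = LatchedSwitch()
handled = 0
for _ in bouncy_press(5):   # many raw edges from a single press...
    if switch.edge():
        handled += 1        # ...but only one event reaches the handler
switch.reset()              # handler done; ready for the next real press
print(handled)              # 1
```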
Could you possibly give me some explanation as to why I am unable to post? Was I rude? Was I disrespectful? Did I break some rule of commenting etiquette? You have my email address, so feel free to send an email explaining what I've done to get banned.
KRiS_Censored
February 19, 2009 at 01:31 AM PDT
I'm not sure if this is a technical difficulty, but judging by DaveScot's statements about me being a sock puppet, I suspect I may have been locked out. I've tried to post the following multiple times, and it has yet to show up. If I am mistaken, please forgive me for even considering the idea that you may have caused this, DaveScot. Meanwhile, I've started this "sock puppet" to post what I had originally intended to post as a follow-up as KRiS: Sorry for waiting too long to post my reply. I lost my internet connection and this is the first chance I've gotten to be online since my last post. Jerry
If someone has an hypothesis and does a test of that hypothesis and the research fails to support the hypothesis, they are failing to support their theory. If the test is repeated ten thousand times, I will go out on a limb and say that the theory is being falsified.
Unfortunately, this is exactly why a statement like "You will never see X" is not a good test of a hypothesis. To be considered actual support for the hypothesis, results must be found which agree with the statement. No results can ever agree with any statement that says "never", because the search space is all of time and space. Anything less is an incomplete search, meaning that the actual test of the statement must continue (i.e. the test hasn't finished yet, so we don't know what the final result is). The way that I worded it above sounds kind of silly, and many people would immediately discount that argument simply because it sounds so ridiculous (all of time and space...hee hee hee), so let me be a bit more exact in how I present it now. Any statement in the form "You will never see X" can be more accurately stated as "Given the set of all possible Y, no Y will be found among them which is actually X." It should be pretty clear when restated in this way that to be in agreement with the statement, and therefore supportive of the hypothesis, all of Y must be searched without finding X. Any number of searches through anything less than all of Y results in either a falsification (X is found) or a necessary continuation of the search (X is not yet found). Not finding X means the search is incomplete and therefore inconclusive. (Not sure of that? Just ask yourself, if X hasn't been found yet, can you conclusively say that it will therefore never be found? If not, then it is by definition inconclusive) Now, since the test of the statement is thus far inconclusive (assuming X has not been found yet, of course), to claim that the statement is therefore supported is to say that an inconclusive result must be considered to be supportive of the statement. This means that it must be assumed to be true unless and until it is conclusively demonstrated to be false. In other words "It's true because you haven't proven it to be false." This is the Argument From Ignorance. Now you can attempt to justify using such an argument (maybe you can claim that inconclusive is still "conclusive enough", though I think that'd be a hard sell), but you can't legitimately claim that it's not an Argument From Ignorance at all, even if you do flip it around and call it a "test". Of course, one can limit the search space to make it more manageable by using a limited set of Y, rather than the set of all possible Y. However, this necessarily changes the original statement from "You will never see X" to "You will not see X if you search through this limited set of Y". There better be a very good reason for excluding those areas which are not to be searched. For instance, when you use the fact that X has not yet been found to try and support the original statement you are essentially creating a new statement which is a subset of the original that says "You will not see X if you search through the set of Y which has already been searched." Obviously this statement is supported by the data, but that's because it is a simple statement of fact. It's not a prediction, but a post-diction. I think you'll agree that limiting Y for the express purpose of making the statement true isn't a very good reason for limiting Y.
Let's here it for Kris who has joined the ranks of nit pickers but never offer substance.
You misspelled "hear". uoflcard
It's more like "After many years of testing, not X yet, so what about Y?
What is strange to me is that 150 years is always considered such a very long time for such a test. Meanwhile the Cambrian Explosion is considered to be such an extremely short amount of time that it is almost considered a falsification on its own. In other words, in 150 years you believe it is highly likely that we should have witnessed something that took nature the amazingly short time of only 5 million years to do. This problem is exacerbated by the fact that only a natural observation counts, since any lab or man-made experiment is automatically rejected on the grounds that it is designed by man, and therefore not an accurate test (I'm thinking of Ev as an example). So I ask you this: is it your contention that evolution should reasonably be expected to create any kind of CSI in only 150 years? If so, what would that say about the rate of evolution in general? DaveScot
KRiS is yet another sock puppet from the Panda's Thumb forum.
Actually I've never been on that forum. I view the Panda's Thumb blog from time to time, but for the most part this is the only place that I post (it's no fun debating with people that agree with you). Thank you for defending me, B L Harville and JayM. PaV
Which is the worse argument: arguing that evidence of design implies the presence of an intelligent agent, or arguing from one's personal notions about what God can and cannot do, or, rather, what God would or would not do? I'm interested in your answer.
Obviously the first argument is the better argument. There are several arguments and ideas presented by Darwin that have been shown to be either false or poorly argued (as you demonstrate so well). However, there are many, many more arguments that are very persuasive and logically sound which also support evolution. It's a good thing he didn't allow himself to rely solely on one argument or even one type of argument.
KRiS_Censored
February 19, 2009 at 01:19 AM PDT
MF: @ 85: The problem is that "design" is not a hypothesis. If I proposed "chance" in the abstract as an explanation of something you would not be impressed. It is just too broad . . . . a situation where e.g. grass grows a different colour on my lawn in almost exactly the outline of a US president. Is this evidence of design? At first sight you might say so because the chances of grass growing in that pattern seem very small. But then it turns out someone left a metal outline of the president on the lawn over the winter and recently removed it. 1 --> Mark, do you not see the design implication in the situation? [Cf my highlight.] 2 --> This also brings out that the inference for a given aspect of a situation, across chance, necessity and design, is not a WHODUNIT or a HOWTWEREDUN inference. 3 --> You see independent specification, check. 4 --> You simultaneously see complexity, check. 5 --> You properly infer CSI, so design, check. 6 --> The back-story comes out: someone left a metal outline of the silhouette of a US president on a lawn, triggering grass to grow in a certain way it would otherwise not have. 7 --> You then infer: well, there was no direct intent to make grass grow that way, so no design. [ERROR, as (i) the key entity, the metal silhouette, was designed, and the presence of such design was detected. Also, (ii) detection of design does not rule out the presence of chance or natural regularities as well.] 8 --> So, while indeed design, chance and necessity are GENERAL CATEGORIES of causal factors, once we focus on a given situation and its aspects, using the EF, we are dealing with alternative hypotheses and empirical data that allows reliable discrimination between them. GEM of TKI
kairosfocus
February 19, 2009 at 01:11 AM PDT
JT: Re 57:
A binary string does not reek of FSCI regardless of that fact that if you throw it at a computer it can be executed. Furthermore, when you think of a computer, think “turing machine”- an utterly simplistic device. All the optimized subsystems of a typical computer are not intrinsic to computer as a concept and can be thought of as software. Any arbitrary binary string is software as well.
Let's take this from the top: 1 --> Have you ever designed, built, debugged and trouble-shot a computer or microcontroller, from a bag of chips and sheets of paper to lay out designs, timing diagrams, a monitor pgm [what is below the level of operating systems], up to getting it to successfully interface with the real world and fulfill a real-world function? (And, no, I don't mean assembling a machine from pre-built components and pre-developed software, I mean rolling yer own from scratch.) [Obvious answer from the above: no. I have. So has DaveScot. We both got the soldering iron scars to prove it.] 2 --> Once you move from the paper world of theoretical machines (useful as they are in their place)to real physical strings of bits stored in physical media and functioning in objects that physically realise and give effect to algorithms, you will know that bit strings don't magically do anything by themselves; and that FUNCTIONING bit strings of any significant length don't appear by chance. 3 --> For an inputted bit string to trigger any functional algorithmic response, there have to be: [1] an algorithm, [2] an architecture for it to run on, [3] one or more coding languages, starting with a relevant machine code [Ever coded in mac code or had to handle a core dump: FF BC CC DF AE 06 5A . . . ?], [4] hardware capable of carrying out input interface, storage, processing and output interface, [5] coded programs that execute the algor on the target machines (or equivalent hardware, but we are focussed on the softy side) [6] data structures, [7] handshaking protocols, and [8] sequencing and synchronisation, starting with system initialisation on turnon. (I used to strongly stress to my students: get a clean robust initialisation to a known initial condition, and NEVER let the system get into an out-of control condition. Regular "sanity checks" -- hardware interrupt triggered (use a timer chip) . . . -- if the system is potentially dangerous . . . don't forget, ~ six sample-hold action points per key signal rise time if you are controlling a process . . . emergency handlers . . . ) 4 --> ANY significant mis-steps on any of these eight core components, and the function fails at some point, per Murphy [I firmly believe in the doctrine of Murphy] usually a point of maximum embarrassment. In short, the core of the system is irreducibly complex, for any given design. 5 --> Within that PHYSICALLY INSTANTIATED context, we generally have stored information, and data strings flowing in, being processed and transformed ones flowing back out. 6 --> Such strings take meaning from their structure relative to the conventions and architecture of the particular system, and as a rule are EXTREMELY vulnerable to perturbation. [NASA once had to blow up a rocket as somebody put a comma where a semicolon was required in a Fortran I think it was control program.] 7 --> Now, we may indeed have fairly short bit strings at points, that trigger big events, e.g. a switch [one bit: on/off] or a keystroke or a mouse stroke or click or the input to an A/D converter yielding a given output state or the feed in from a UART receiving a serial data string [typically 1/2 or 1 - 4 bytes in these cases or down to 1 bit]. BUT THESE TAKE MEANING ONLY IN THE CONTEXT OF THE PHASE OF ALGOR EXECUTION IN VIEW, THE SPECIFIC POINT IN THE SYSTEM THEY APPEAR AT AND THE ASSOCIATED PROGRAMS AND DATA STRUCTURES THAT INTERACT WITH THEM. 
8 --> So, the cascade of co-ordinated programs where a long one is called by successively shorter ones, so that the final digital string is "caused" by a much shorter one [one that, lo and behold, has a reasonably high probability, e.g. a click on/off is 50-50 after all . . . ], is only viable in the context of an entity that as a whole requires a LOT of FSCI, and is itself irreducibly complex [IC]. The very PC you are using is an apt illustrative case in point, when you swoosh your mouse or click it or hit a key on the keyboard. 9 --> Thus, once we are dealing with an identified algorithmic context, physically functional bit strings, even short ones, given that context, reek of FUNCTIONALLY SPECIFIC, COMPLEX INFORMATION. So, pardon my itching soldering iron scars: physically functional digital strings in an algorithmic context are either FSCI themselves (which is the context under discussion) or, to function, require FSCI-rich data strings embedded in a system that makes the inputs function. In either case, once we observe FSCI, and we know the origin story directly, we observe intelligence. And, given that we have cut off at 1,000 bits that function, we are looking at such isolation in the relevant config spaces that the whole observed universe working as a search engine, per random search strategies, will not credibly be able to find the relevant islands or archipelagos of function. On needle-in-a-haystack grounds. FSCI, I repeat, is a strongly warranted, reliable sign of intelligence. And a Turing Machine, once physically instantiated, is anything but "simplistic." GEM of TKI
kairosfocus
February 19, 2009 at 12:50 AM PDT
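To put rough numbers on the 'needle in a haystack' claim above, here is a back-of-envelope sketch of the configuration-space sizes for the 500-bit and 1,000-bit thresholds used in this thread, set against Dembski's universal probability bound (assumed here to be the usual 10^150 figure).

```python
from math import log10

# Size of the configuration space for each bit threshold mentioned in the thread.
for bits in (500, 1000):
    print(f"{bits} bits -> about 10^{bits * log10(2):.0f} configurations")
# 500 bits  -> about 10^151 configurations
# 1000 bits -> about 10^301 configurations

# Dembski's universal probability bound, taken here as the commonly cited 10^150
# (a rough ceiling on the number of elementary events in the observed universe).
UPB_EXPONENT = 150
coverage = UPB_EXPONENT - 1000 * log10(2)
print(f"fraction of a 1000-bit space that many trials could cover: about 10^{coverage:.0f}")
# about 10^-151
```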
uoflcard, You are a hero of rational thought.
Upright BiPed
February 18, 2009 at 11:58 PM PDT