Uncommon Descent Serving The Intelligent Design Community

“Actually Observed” Means, Well, “Actually Observed”


In a comment to a recent thread I made the following challenge to the materialists:

Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism. . . .

There is no need to form any hypothesis whatsoever to meet the challenge. The provenance of the example of CSI that will meet the challenge will be ACTUALLY KNOWN. That is why I put the part about question begging in there. It is easy for a materialist to say “the DNA code easily has more than 500 bits of CSI and we know that it came about by chance/law forces.” Of course we know no such thing. Materialists infer it from the evidence, but that is not the only possible explanation.

Let me give you an example. If you watch me put 500 coins on a table and I turn all of them “heads” up, you will know that the provenance of the pattern is “intelligent design.” You do not have to form a chance hypothesis and see if it is rejected. You sat there and watched me. There is no doubt that the pattern resulted from intelligent agency.

My challenge will be met when someone shows a single example of chance/law forces having been actually observed creating 500 bits of CSI.

R0bb responded not by meeting the challenge (no surprise there) but by suggesting I erred when I said CSI can be “assessed without a chance hypothesis.” (And later keith s adopted this criticism).

I find this criticism odd to say the least. The word “hypothesis” means:

A proposition . . . set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis) or accepted as highly probable in the light of established facts.

It should be obvious from this definition that we form a hypothesis regarding a phenomenon only when the cause of the phenomenon is unknown, i.e., has not been actually observed. As I said above, in my coin example there is no need to form any sort of hypothesis to explain the cause of the coin pattern. The cause of the coin pattern is actually known.

I don’t know why this is difficult for R0bb to understand, but there you go. To meet the challenge, the materialists will have to show me where a chance/law process was “actually observed” to have created 500 bits of CSI. Efforts have been made. All have failed. The now-defunct infinite monkeys program is just one example: it took 2,737,850 million billion billion billion monkey-years to get the first 24 characters from Henry IV, Part 2.

 

UPDATE:

R0bb responds at comment 11:

That’s certainly true, but we’re not trying to explain the cause of the coin pattern. We’re trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?

To which I responded:

1. Suppose you watched me arrange the coins. You see a highly improbable (500 bits) pattern conforming to a specification. Yes, it has CSI.

2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.

So you see, we’ve actually observed the cause of each pattern. The specification was achieved in scenario 1 by an intelligent agent with a few minutes’ effort. In scenario 2 the specification was never achieved from the moment of the big bang until now.
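For scale, here is a minimal Python sketch of scenario 2, assuming a universe age of roughly 13.8 billion years and one flip per second (both assumptions chosen for illustration):

```python
# Rough scale check for scenario 2: one coin flip per second since the big bang,
# grouped into 500-flip trials, versus the odds of a single all-heads trial.
AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17 seconds (assumed age)
FLIPS = AGE_OF_UNIVERSE_S                          # one flip per second
SETS_OF_500 = FLIPS / 500                          # complete 500-flip trials in that window

P_ALL_HEADS = 2.0 ** -500                          # probability one trial is all heads
EXPECTED_SUCCESSES = SETS_OF_500 * P_ALL_HEADS

print(f"500-flip trials since the big bang: {SETS_OF_500:.2e}")       # ~8.7e+14
print(f"Expected all-heads trials in that time: {EXPECTED_SUCCESSES:.2e}")  # ~2.7e-136
```

On those assumptions the expected number of all-heads trials over the whole history of the universe is effectively zero.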

The essence of the design inference is this: Chance/law forces have never been actually observed to create 500 bits of specified information. Intelligent agents do so routinely. When we see 500 bits of specified information, the best explanation (indeed, the only explanation that has actually been observed to be a vera causa) is intelligent agency.

To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.

 

Comments
Jerad,
I haven’t got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I’m not saying I wouldn’t be suspicious. As I said my first reaction would be: do it again.
My point is that we'd have good reason to conclude that something fishy was going on even if we couldn't try it again. One trial is plenty. Here are a couple of ways of looking at it.

First, the 500-flip number is arbitrary. You're thinking of it as a single trial, but you could just as easily think of it as two 250-flip trials, or four 125-flip trials. You might be falling prey to a cognitive bias that says "I can't conclude anything on the basis of a single trial", but what about four consecutive trials in which 125 consecutive flips come up heads each time? That's pretty impressive. Would you be merely "suspicious" after seeing that? The way you frame the problem could be making a difference in how conclusive the evidence seems.

A second way of looking at it is to remember that we are really comparing hypotheses. The first hypothesis is "The coin and the flips are fair, and I just happened to get a special-seeming pattern by pure luck." The second hypothesis is "Something fishy is going on that caused me to get a special-seeming pattern." We're trying to determine the probability that something fishy is going on -- in other words, that the coin and the flips aren't fair. We know that under the fairness assumption, the probability of getting a "special" pattern is only n in 2^500, where n is the number of patterns that you would consider special. Under various unfairness assumptions, the probability of getting a special pattern becomes as high as 1 (in the case of a two-headed coin, for example). So unless you've fairly exhaustively eliminated these unfairness possibilities (by careful inspection and testing of the coin, the flipping apparatus, the room, etc.), the probability of unfairness remains higher than the probability of getting a special pattern by pure chance.

If you sit down and watch someone flip 500 heads in a row, you can be virtually certain that something fishy is going on. No second try needed.keith s
December 2, 2014 at 12:22 AM PDT
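keith s's comparison can be put in rough numbers. A small log-odds sketch, where the prior odds of "something fishy" and the size of the "special" set are illustrative assumptions, not figures from the comment:

```python
# Hypothesis comparison in log-odds form for the 500-heads observation.
import math

n_special = 1000                          # assumed number of patterns we'd call "special"
log2_p_fair  = math.log2(n_special) - 500 # log2 P(special pattern | fair coin and flips)
log2_p_fishy = 0.0                        # P(special pattern | something fishy) ~ 1
log2_prior_odds_fishy = -40               # assume fishiness has very low prior odds (2^-40)

log2_posterior_odds = log2_prior_odds_fishy + (log2_p_fishy - log2_p_fair)
print(f"log2 posterior odds in favour of 'fishy': {log2_posterior_odds:.0f}")
# ~ +450: even with an extremely skeptical prior, 500 heads overwhelmingly favours "fishy".
```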
It is rather too bad that KF has decided not to respond to the requests posted on this thread. Let's hope this is down to time commitments on his part.Jerad
December 1, 2014 at 02:48 PM PDT
#150 P(T|H) is not part of any supposition of mine. It was part of a paper written by Dr Dembski. KF has eliminated that term in his restatement of Dr Dembski's conjecture. It's not up to us to explain why he did that or to help by 'providing' H. Why doesn't KF just work with P(T|H)? Aren't you interested? I would think someone with an investigative frame of mind would want to know.Jerad
December 1, 2014 at 06:04 AM PDT
LoL! @ Jerad- Your position is supposed to provide the "H", and it can't.
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
I am so glad that you are not an investigator.Joe
December 1, 2014 at 05:57 AM PDT
#148 Keep asking KF about P(T|H). He won't answer, but it's good to keep the question active. As I said, if I got 500 Hs in a row I'd investigate. But because that sequence is just as likely as any other sequence, if you don't find evidence of tampering with the fairness of the trial, then try it again!!Jerad
December 1, 2014 at 05:52 AM PDT
Jerad: (KF’s) restatement of Dr Dembski’s fCSI detection algorithm he did away with P(T|H)

Thank you, Jerad. Yes, we understand that CSI has mutated and diversified under relaxed selection. Kairosfocus keeps talking about coin flips and combinatorials, which makes it look like a standard probability distribution. Without the extended prose, what is kairosfocus's formulation?

Jerad: Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.

It's sufficient to indicate some underlying cause. Whether that's design or a two-headed coin, or the effects of a large magnet, requires further investigation.

Jerad: I think everyone agrees that all non-designer arguments must be examined and eliminated (if possible) before making a design inference.

More accurately, you compare hypotheses by testing them to determine which is likely true. You don't have to exhaustively eliminate natural causes if you have evidence of design. They are all just competing hypotheses.Zachriel
December 1, 2014 at 05:43 AM PDT
#146
Note the 10^150 ?, that’s the reason IDers chose 500 coins – it is close to their oft repeated Universal Probability Bound.
I am aware of their 'limit'. But I'm just arguing against the 'logic' of inferring design if one were to flip 500 coins (or any number, really) once and get all Hs. You can't 'magic' a designer out of an improbability/probability. Everyone admits it is possible to get all Hs on one trial. I think everyone agrees that all non-designer arguments must be examined and eliminated (if possible) before making a design inference.

But it would be interesting to ask: would it be fair to infer design if I got all Hs with 10 coins? 20? 100? Where exactly is the line?

As I said before, you can pretty much guarantee getting 20 Hs in a row by doing the following: start with 2 million coins. Flip them all. Take out the ones that come up Ts. Repeat. By the time you get down to 1 coin left, that coin should have 20 or more Hs in a row. If it were me I would carefully examine that coin, but the mathematics is correct. If you use 2 million fair coins you can force 20 Hs in a row. (Thanks to Neil deGrasse Tyson for the idea.) 500 Hs in a row would take a lot more coins obviously. I'll leave it up to the readers to figure out how many.Jerad
December 1, 2014 at 02:43 AM PDT
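Jerad's filtering procedure is easy to simulate. A minimal sketch, assuming fair coins and a starting pool of about 2 million:

```python
# Simulate: flip all remaining coins, discard every coin that lands tails,
# repeat until one coin is left. That survivor has landed heads every round.
import random

def runs_for_last_survivor(n_coins: int = 2_000_000) -> int:
    """Return how many consecutive all-heads rounds the last surviving coin saw."""
    coins = n_coins
    rounds = 0
    while coins > 1:
        coins = sum(1 for _ in range(coins) if random.random() < 0.5)  # keep the heads
        rounds += 1
        if coins == 0:                   # everyone landed tails; start the experiment over
            coins, rounds = n_coins, 0
    return rounds

print(runs_for_last_survivor())          # typically ~20-23 heads in a row for the survivor
```

Since 2 million is about 2^21, the pool halves down to a single coin after roughly 21 rounds; for 500 heads in a row the same trick would need on the order of 2^500 starting coins.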
Jerad @ 143,
We don’t disagree about the mathematics. I don’t see a ‘paradox’. I think I’m just not that interested in the psychology of the situation. Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design @ 145 That could be interesting. I haven’t got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I’m not saying I wouldn’t be suspicious. As I said my first reaction would be: do it again.
The formula for the expected number of required tosses is 2*(2^N – 1), where N is the number of heads/tails, so for 500 heads in a row you need 6.5*10^150 tosses. Note the 10^150? That's the reason IDers chose 500 coins - it is close to their oft-repeated Universal Probability Bound.Me_Think
December 1, 2014 at 02:00 AM PDT
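Me_Think's figure is easy to check numerically; the same formula at N = 20 also lands close to the 2 million coins Jerad mentions. A quick sketch:

```python
# Expected number of fair-coin tosses before first seeing N heads in a row: 2*(2**N - 1).
def expected_tosses(n_heads: int) -> int:
    return 2 * (2**n_heads - 1)

print(f"{expected_tosses(20):,}")      # 2,097,150 -- roughly the 2 million coins above
print(f"{expected_tosses(500):.3e}")   # ~6.547e+150, matching the 6.5*10^150 figure
```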
#144
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
Sure it is, but we can talk about that tomorrow. It’s bedtime for me.
That could be interesting. I haven't got much else to say though. Since it is possible to get 500 Hs flipping 500 coins by chance alone. I'm not saying I wouldn't be suspicious. As I said my first reaction would be: do it again.Jerad
December 1, 2014 at 02:00 AM PDT
Jerad,
Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.
Sure it is, but we can talk about that tomorrow. It's bedtime for me. Good night.keith s
December 1, 2014 at 01:45 AM PDT
#141
I address that issue in my OP. The set of ‘special’ sequences is not the same for everybody, but as long as the set is small enough, it is significant when you flip a sequence that is already special to you. Take a look at the OP and the comments. I go into quite a bit of detail about this.
We don't disagree about the mathematics. I don't see a 'paradox'. I think I'm just not that interested in the psychology of the situation. Bottom line: getting 500 Hs on a single trial of flipping 500 coins is not a sufficient reason to infer design.Jerad
December 1, 2014 at 01:16 AM PDT
#138 I agree that part of the real question is: is the game rigged? And a single roll or trial is not enough to determine that. Flipping 500 coins and getting 500 Hs is NOT a good enough reason to infer design. If I flipped 500 coins and got 500 Hs the first thing I would do is: DO IT AGAIN!! And again. And again. And again.

#139 Looks like Eric and KF have left that particular battle ground. Not saying this applies to anyone here or there, but it always amuses me to listen to theologians who can analyse sacred texts down to the smallest minutiae but cannot follow a basic mathematical argument that goes against their belief structure.

I can understand the reluctance to accept that we (individuals and as a species) are not 'special' or 'determined'. That doesn't feel right because our whole experience of the world is from our individual perspective. We literally cannot see the world from another point of view without great difficulty. I suppose that's why out-of-body experiences can be so transforming. But I don't understand why it's so hard for some to grasp the immense power of cumulative selection. It's clear from human-directed breeding programs (of dogs and brassicas for example) that there is a lot of natural variation thrown up by natural reproduction. And when you filter that through generations of selection . . .Jerad
December 1, 2014 at 01:10 AM PDT
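The "cumulative selection" point can be illustrated with a toy example in the spirit of Dawkins' weasel program, here using an all-heads coin string as the target (the population size and mutation rate are arbitrary illustrative choices, and coin strings are not biology):

```python
# Cumulative selection toy model: keep the best variant each generation and
# mutate it slightly, instead of re-drawing the whole string blindly each time.
import random

TARGET_LEN = 100        # a 100-flip all-heads target; blind search would need ~2**100 tries

def score(seq):
    # number of positions already matching the all-heads target
    return seq.count("H")

def cumulative_search(pop_size=100, mut_rate=0.02):
    parent = [random.choice("HT") for _ in range(TARGET_LEN)]
    generations = 0
    while score(parent) < TARGET_LEN:
        offspring = [
            [c if random.random() > mut_rate else random.choice("HT") for c in parent]
            for _ in range(pop_size)
        ]
        parent = max(offspring + [parent], key=score)  # keep the best seen so far
        generations += 1
    return generations

print(cumulative_search())  # typically a few hundred generations, nothing like 2**100 trials
```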
Jerad,
BUT I could take the sequence of 500 Hs and Ts that my girlfriend and I generated on our first date (not really, I’m not that boring) as my special, recognisable sequence and I could say that the chances of any one else coming up with that sequence is nigh well onto impossible. I could claim that it signifies a special, once-in-a-universe moment never to be repeated.
I address that issue in my OP. The set of 'special' sequences is not the same for everybody, but as long as the set is small enough, it is significant when you flip a sequence that is already special to you. Take a look at the OP and the comments. I go into quite a bit of detail about this.keith s
December 1, 2014 at 12:56 AM PDT
#138
I actually agree with the IDers on this one. While it’s true that 500 heads is no more or less probable than any other particular sequence, it is special, precisely because it belongs to a small set of sequences that we regard as special.
BUT I could take the sequence of 500 Hs and Ts that my girlfriend and I generated on our first date (not really, I'm not that boring) as my special, recognisable sequence, and I could say that the chances of anyone else coming up with that sequence are nigh on impossible. I could claim that it signifies a special, once-in-a-universe moment never to be repeated. But it doesn't make it special or different or significant except to me. 'Regarding' some sequences as 'special' is just psychology. It doesn't change the mathematics. Humans like recognisable patterns. But, in this case, the mathematics doesn't care. A pebble you find brighter and shinier is still a pebble.Jerad
December 1, 2014 at 12:43 AM PDT
Jerad #137:
In fact, has any ID proponent calculated P(T|H) for any significant example? Meaning something other than situations we know can be analysed as purely deterministic, like coin tossing.
Eric Anderson is dancing around that question right now on another thread.keith s
December 1, 2014 at 12:20 AM PDT
Jerad #133:
500 Hs is no different probabilistically or mathematically from any other sequence of 500 Hs and Ts. It only looks ‘special’ but it’s just as random as anything else.
Jerad, I actually agree with the IDers on this one. While it's true that 500 heads is no more or less probable than any other particular sequence, it is special, precisely because it belongs to a small set of sequences that we regard as special. I did an OP on this last year at TSZ: A resolution of the ‘all-heads paradox’keith s
December 1, 2014 at 12:12 AM PDT
To be fair Zachriel and Pachyaena, I don't think KF has posted any comments in hours and hours. Not that I'm expecting him to tell you if P(T|H) is a standard probability distribution (a fairly elementary question). In his (KF's) restatement of Dr Dembski's fCSI detection algorithm he did away with P(T|H) which I take to mean he was unable to compute it for his examples. In fact, has any ID proponent calculated P(T|H) for any significant example? Meaning something other than situations we know can be analysed as purely deterministic, like coin tossing.Jerad
December 1, 2014 at 12:10 AM PDT
KF, pardon but your predictable avoidance of Zachriel's question is sadly telling. Kindly do better. Is P(T|H) a standard probability distribution? Yes or no?Pachyaena
November 30, 2014 at 10:25 PM PDT
kairosfocus: Z simply refuses to acknowledge that ... We simply asked a question. Is P(T|H) a standard probability distribution? The way you treat it certainly looks like a standard probability distribution. Please start your answer with a yes or no if possible.Zachriel
November 30, 2014 at 02:08 PM PDT
#131
Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root — OOL — to the main branches and onwards the twigs
Let us not forget that no one in the evolution camp claims to have anything other than a guess regarding the origin of life problem. Let us also not forget that the ID community also cannot be specific about what the original 'life' on earth looked like. There is a difference between the camps though: the evolutionists are trying. I don't see anyone in ID seriously trying. Perhaps because no one is yet clear what ID is saying regarding even the when of design. Answering once or many times would be a good start.Jerad
November 30, 2014 at 12:52 PM PDT
#131
500 H is patently distinguishable and separately describable, comes from a set of similar cases such as 500 T etc, and T is immensely smaller than G.
500 Hs is no different probabilistically or mathematically from any other sequence of 500 Hs and Ts. It only looks 'special' but it's just as random as anything else.
A blind chance search hoping to find something from T or just happening on T is not credible. But, as design is a known cause of high contingency, it is easily seen save to the selectively hyperskeptical that while say an outcome from G is readily explained on chance, 500 H is best explained on intelligently directed configuration.
Again, as all given sequences of Hs and Ts are equally probable, there is no justification for saying that the occurrence of a particular sequence or a group of sequences is 'better' explained by design. AND, as you cannot rule out a chance occurrence, you are unjustified in making a design inference.
Beyond, Z simply refuses to acknowledge that — three years ago when the issue was brought up — it was shown that simply carrying through the log reduction of the Dembski 2005 Chi metric expression, we see that it is an info beyond a threshold metric.
I am saying that you modified Dr Dembski's original derivation and I have yet to see his approval of your restatement. If he thought you wouldn't need to calculate P(T|H) then he would have left it out himself. But he didn't.
Of course, none of this will make any impression on the determined objectors. We are dealing with zero concession selective hyperskepticism and the agit-prop of polarisation and message dominance as I well recall from dealing with Marxists decades ago.
I disagree with your interpretation of some mathematical issues, that is all.
The answer is to simply stand your ground and lay out a reasonable case.
I am stating the mathematical truths as I see them.
Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root — OOL — to the main branches and onwards the twigs. If that could be done it would shatter design theory and the design inference on FSCO/I as regards the world of life.
If you recall I did make a brief attempt. Anyway, you're changing the subject. I'm now just talking about a specific point of probability and making the design inference.
Let the record stand clear: no serious attempt after two and more years. That speaks volumes on the true state of the matter.
If you didn't agree with the popular books on evolution written by Drs Dawkins, Miller, Coyne, Shermer and Carl Zimmer then I don't see how I could possibly change your mind.Jerad
November 30, 2014 at 10:45 AM PDT
PS: Remember, Mung just extended what we have from Orgel, drawing out how what he spoke of was indeed FSCO/I, with a metric for info added in. On seeing that, I did not notice any significant acknowledgement on the part of those who were so hotly contending the opposite. Likewise, when the strawman misrepresentation of the design inference in the rain fairies etc talking points was made, there was zero concession, zero responsiveness; cf the just linked. Take that pattern -- there are many similar cases, with KS's black knight tactics a particularly rich motherlode -- as a yardstick.kairosfocus
November 30, 2014 at 10:06 AM PDT
F/n: Predictably . . . 500 H is patently distinguishable and separately describable, comes from a set of similar cases such as 500 T etc, and T is immensely smaller than G. A blind chance search hoping to find something from T or just happening on T is not credible. But, as design is a known cause of high contingency, it is easily seen save to the selectively hyperskeptical that while say an outcome from G is readily explained on chance, 500 H is best explained on intelligently directed configuration.

Beyond, Z simply refuses to acknowledge that -- three years ago when the issue was brought up -- it was shown that simply carrying through the log reduction of the Dembski 2005 Chi metric expression, we see that it is an info beyond a threshold metric. Taking that as a base we can note that information is readily measurable by noting, say, the string of Y/N q's to specify the wiring diagram config for relevant function. Or, if you wish, statistical studies can be used. That is, info is readily quantified from observations.

And in the case of living forms, say the variability of AA's in known functional, fold-stable, key-lock fitting proteins allows us to infer to the statistical distributions for the functional state. The 20-state (one of 20) end gives 4.32 bits per AA locus; if we go as loose as hydrophilic/hydrophobic that gives us 1 bit. (Which, on average, is way too loose.) Nevertheless take that, take 100 AA's not the 300 or so that is typical, and say only 100 proteins are required for a simplistic early cell life form. That's 10 kbits of info, an order of magnitude below what reasonable genome sizes say. But it matters not: 10 kbits is a factor of ten beyond the 500 - 1,000 bit threshold for FSCO/I.

Of course, none of this will make any impression on the determined objectors. We are dealing with zero concession selective hyperskepticism and the agit-prop of polarisation and message dominance as I well recall from dealing with Marxists decades ago. The answer is to simply stand your ground and lay out a reasonable case. Where, let us note, over two years ago, the open invitation was put on the table to host a pro-darwinist essay that gave the framework of observation backed evidence for the ToL from the root -- OOL -- to the main branches and onwards the twigs. If that could be done it would shatter design theory and the design inference on FSCO/I as regards the world of life. Let the record stand clear: no serious attempt after two and more years. That speaks volumes on the true state of the matter. KFkairosfocus
November 30, 2014 at 10:00 AM PDT
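The arithmetic in kairosfocus's comment above is simple to check; the per-residue figures and protein counts below are his stated assumptions, not independent data:

```python
# Checking the bit counts: log2(20) per amino-acid residue under a 20-state model,
# 1 bit per residue under the loose hydrophilic/hydrophobic model, 100 residues
# per protein, 100 proteins (all figures as assumed in the comment).
import math

bits_per_AA_full  = math.log2(20)   # ~4.32 bits per residue
bits_per_AA_loose = 1.0             # hydrophilic/hydrophobic only

residues_per_protein = 100
proteins = 100

total_bits_loose = bits_per_AA_loose * residues_per_protein * proteins
print(f"{bits_per_AA_full:.2f} bits per residue (20-state)")
print(f"{total_bits_loose:,.0f} bits total under the loose 1-bit assumption")
# 10,000 bits -- an order of magnitude above the 500-1,000 bit threshold he cites.
```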
kairosfocus: any single outcome of a toss of 500 coins faces odds of 1 in 3.27*10^150, as the latter is the number of possible outcomes, W.

chi = – log2 [ 10^–120 * phi_S(T) * P(T|H) ]

Is P(T|H) a probability distribution?Zachriel
November 30, 2014 at 06:24 AM PDT
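For the 500-coin case the quoted metric can be evaluated directly. This sketch assumes independent fair flips for the chance hypothesis H (so P(T|H) = 2^-500), takes phi_S(T) = 10^20 purely for illustration, and uses the 10^120 probabilistic-resources factor as it appears in Dembski's 2005 paper:

```python
# chi = -log2[ 10^120 * phi_S(T) * P(T|H) ], computed in log space for the
# all-heads target T under a fair-flip chance hypothesis H.
import math

log2_P_T_given_H = -500                  # log2 of P(T|H) for 500 independent fair flips
log2_phi_S_T     = 20 * math.log2(10)    # log2 of the assumed phi_S(T) = 10^20
log2_resources   = 120 * math.log2(10)   # log2 of the 10^120 factor

chi = -(log2_resources + log2_phi_S_T + log2_P_T_given_H)
print(f"chi ~ {chi:.1f} bits")           # ~ +35: positive, so the metric flags T as specified
```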
My prediction has been confirmed. KF, the "substance" that matters is whether you IDers can and will support your claims about CSI, dFSCI, FSCO/I, IC, etc., in life forms and all of the other things in nature that "you and ilk" claim contain CSI, dFSCI, FSCO/I, IC, etc. Fishing reels, Shakespearean sonnets, and the other man-made things that "you and ilk" trot out are already known to be designed.Pachyaena
November 30, 2014 at 06:17 AM PDT
#123
Yes, any single outcome of a toss of 500 coins faces odds of 1 in 3.27*10^150, as the latter is the number of possible outcomes, W.
The set of all possible outcomes, W, from tossing a coin 500 times and recording the sequence of Hs and Ts has 2^500 elements in it. Agreed.
What you (in the teeth of repeated correction for literally years to my certain knowledge) insistently leave out is clustering of patterns of outcomes; something that is a commonplace of say statistical thermodynamics used to for instance analyse why the 2nd law of thermodynamics obtains. Indeed, my favourite intro to stat thermo-d, L K Nash, discusses just the example of coins, though it goes for 1,000.
You want to assign some special status to certain classes of outcomes. The only thing that makes some groups of outcomes more likely is the number of outcomes you have put in the groups. Whether or not there is a pattern you 'see' is irrelevant.
As the binomial theorem will instantly show, the overwhelming bulk of coin toss outcomes will be 500 coins in a near 50-50 H/T distribution, in no particular pattern, i.e. gibberish, let us call this subset G. By contrast, let us define 500 H as E in a set T of simply describable, relatively rare patterns such as 500 H, 500 T, alternating H/T, and the close like. The proper comparison is E to G, or else even T to G.
Depending on what you mean by 'near'. Again, since all possible outcomes are equally likely, the probability of getting an outcome in a particular group or cluster or class depends only on how many outcomes are in that group/cluster/class. It does not depend on any 'meaning'. Also, because you have admitted that it is possible that you could get, say, 500 Hs by chance, you cannot ascribe such an outcome to design. You have to exhaust all the possible non-design explanations before you make that inference.
And the odds of being in G rather than T are utterly overwhelming.
Only because of the relative sizes of G and T.
Where, on tossing a set of 500 coins, using the 10^57 atoms of the sol system as tosser-observers for as many sets of coins, 10^14 times per s for 10^17 s, one would sample as one straw to a cubical haystack comparably thick as our galaxy. Under those circumstances, zone T (and E in it . . . ) is effectively unobservable by blind chance coin toss.
Again, only because of the relative sizes of the groupings you've defined.
Which is the whole point of Dembski’s now longstanding subset T in W discussion of Complex Specified Information in NFL.
Didn't Dr Dembski also say that you have to rule out all non-design explanations? He also said you have to calculate P(T|H)?
I predict, on track record, that you will duck, dodge, twist or brush aside and/or studiously ignore this correction.
If you compare groups of outcomes and some groups have many fewer outcomes in them than others, then those groups will have much lower probability. But it's you who have picked the groups and therefore affected the relative probabilities. No outcome is more or less likely than any other outcome, so any grouping of those outcomes you make is purely arbitrary and no special significance can be assigned to it.
That, would be a breath of fresh air and a sign that we are finally seeing movement beyond the bigotry and dismissive contempt of the blatant no concessions to “IDiots” policy.
Do not put words in my mouth please. I disagree with you, that is all.Jerad
November 30, 2014 at 06:06 AM PDT
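The disputed "clustering" claim can be checked numerically. A short sketch comparing the probability mass near a 50-50 split with the probability of any one exact sequence (the 225-275 window is an arbitrary illustrative choice):

```python
# How much of the probability mass of 500 fair flips sits near an even head count,
# versus the probability of any single exact sequence.
from math import comb

N = 500
total = 2**N
near_even = sum(comb(N, k) for k in range(225, 276))   # 225..275 heads inclusive

print(f"P(225-275 heads)       ~ {near_even / total:.4f}")   # ~0.98
print(f"P(any ONE exact order) = 2^-500 ~ {2.0**-N:.3e}")
# Both statements in the exchange are true at once: every exact sequence is equally
# rare, while the overwhelming majority of sequences have a near-even head count.
```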
PS: It is already easy to see from just the genomes, that a first cell based life reasonably has 100 k - 1 mn bases, and a new body plan -- to account for cell types, tissues and organs in integrated systems -- 10 - 100+ mns. We could take the two bits per base first rule of thumb, or we could afford to be well below that (which would be implausible, AAs in proteins to achieve fold-function are not THAT flexible). It matters not, OOL and origin of body plans are well beyond what is remotely plausible for blind chance and mechanical necessity on gamut of sol system or observable cosmos on any reasonable blind search; 500 - 1,000 bits. Magically arrived at golden searches that have no observational warrant don't count. Life forms, from first cells to dozens of body plans have but one credible vera causa plausible explanation of wiring diagram, correct component in correct arrangement functionality. Design, intelligently directed configuration. Until you and ilk can provide observational evidence on OOL in a Darwin's pond, vent, comet core etc, and/or for origin of body plans that meets vera causa, that remains the undeniable reality.kairosfocus
November 30, 2014 at 05:19 AM PDT
kairosfocus @ 123 I agreed with the true statement that the probability of any sequence of coins is the same, and UD has put up posts about 500 coins time and again. I don't see how that leads you to conclude I set up a strawman to knock it over and claim a tainted rhetorical triumph. I haven't come across even Jerad doing anything of that sort.Me_Think
November 30, 2014 at 05:16 AM PDT
Pachy, again you have failed to address substance and hope to change the subject; whilst in the above I am manifestly correct despite your dismissal. That is, it is patent that a blind sample of W of feasible scale will reliably observe G not T, for needle in haystack, sparse search reasons. Unfortunately, you have allowed hostility to design thought to blind you to what is obvious to the point of being proverbial. Thus, you inadvertently illustrate the no concession to the point of absurdity problem I highlighted. KFkairosfocus
November 30, 2014 at 05:08 AM PDT
KF, why should there be concessions to "IDiots" who are constantly wrong? I have another question for you: How much CSI, dFSCI, and FSCO/I is there in a Leptodactylus fallax? Show your work. I predict, on track record, that you will duck, dodge, twist or brush aside and/or studiously ignore the questions.Pachyaena
November 30, 2014 at 04:59 AM PDT