Uncommon Descent Serving The Intelligent Design Community

A design inference from tennis: Is the fix in?


Here:

The conspiracy theorists were busy last month when the Cleveland Cavaliers — spurned by LeBron, desperate for some good fortune, represented by an endearing teenager afflicted with a rare disease — landed the top pick in the NBA Draft. It seemed too perfect for some (not least, Minnesota Timberwolves executive David Kahn) but the odds of that happening were 2.8 percent, almost a lock compared to the odds of Isner-Mahut II.

Question: How come it’s legitimate to reason this way in tennis but not in biology? Oh wait, if we start asking those kinds of questions, we’ll be right back in the Middle Ages when they were so ignorant that

Comments
I'm away for a couple of days. See you guys when I get back :)

Elizabeth Liddle
July 4, 2011, 03:22 PM PDT
Dr Liddle: Observed evolutionary algorithms, like all other algorithms, are artifacts of intelligent design. They start within complex islands of function, and they work within such islands of function. GEM of TKI

kairosfocus
July 4, 2011, 03:03 PM PDT
So is it your view (it is mine, actually!) that an evolutionary algorithm is, in a sense, an intelligent algorithm?
I have great difficulty in thinking of any algorithm as intelligent, in any meaningful sense of the word.
I do think that evolutionary algorithms can produce complex, specified information.
Well, let's build one and find out! I find it hard to believe that an algorithm can produce information, but I'm up for investigating the matter.
I also think it is fairly easy to demonstrate that they occur “naturally”.
I think they exist in nature. That they occur naturally is a rather loaded way to put it.
Or, indeed, for the emergence of evolutionary algorithms in the first place... The search for a search. :)
Mung
July 4, 2011, 12:02 PM PDT
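Elizabeth's "let's build one and find out" can be sketched in a few lines. The following is a minimal, hypothetical weasel-style evolutionary algorithm, not anyone's published model: the TARGET string, alphabet, mutation rate, and fitness function are all assumptions supplied up front by the programmer, which is precisely the point kairosfocus presses about the algorithm starting inside a pre-defined island of function.

```python
import random

# Minimal weasel-style evolutionary algorithm (illustrative sketch only).
# TARGET and fitness() are supplied in advance by the programmer.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Count the positions at which s matches TARGET."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(pop_size=100, seed=0):
    """Cumulative selection: keep the fittest of parent + offspring."""
    random.seed(seed)
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        offspring = [mutate(best) for _ in range(pop_size)]
        best = max(offspring + [best], key=fitness)
        generations += 1
    return generations
```

With cumulative selection this converges in a few dozen generations, whereas a blind draw of a 28-character string from a 27-symbol alphabet has about 1 chance in 27^28 per try, which illustrates both sides of the argument in this thread.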
That's interesting, Mung. So is it your view (it is mine, actually!) that an evolutionary algorithm is, in a sense, an intelligent algorithm?

In fact, this is the core of my criticism of Intelligent Design: not that certain patterns found in nature don't indicate something that is also common to human-designed patterns, but that the common denominator is evolutionary-type search algorithms, not what we normally refer to as "intelligence", which normally implies "intention". I do think that evolutionary algorithms can produce complex, specified information. I also think it is fairly easy to demonstrate that they occur "naturally".

The challenge for evolutionary theory, in the face of the ID challenge, is not, I suggest, to demonstrate that evolutionary algorithms can produce complex specified information, but to show that they can account for phenomena like the ribosome, which seem to be required for biological-evolution-as-we-know-it to take place. Or, indeed, for the emergence of evolutionary algorithms in the first place, which takes us, finally, back to Upright BiPed's challenge, to which I should return! But this has been useful, and I don't think we are quite done yet :)

Elizabeth Liddle
July 4, 2011, 01:03 AM PDT
Now, those non-intelligent processes will, I assume, include search algorithms such as blind searches and evolutionary algorithms, right?
Would you agree that if the universe stumbled upon an evolutionary algorithm, it did so without using an evolutionary algorithm? How did it get so lucky? What sort of search did it use? Are evolutionary algorithms just widely spread throughout the search space? Having found one, how was it put to use? So no. Evolutionary algorithms require information to function. I'm not willing to cede information until we get life.

Mung
July 3, 2011, 09:26 PM PDT
So, without providing an oracle...
A source of information. :) And how do we find the right oracle for the particular search? Well, maybe we can consult another oracle. It's a mystery wrapped in a riddle inside an enigma!

Mung
July 3, 2011, 03:26 PM PDT
Dr Liddle: The issue is that function amenable to reward through evolutionary algorithms is rare in, and unrepresentative of, the relevant spaces. That is the context for that sharp little peak you just saw in the curve. So, without providing an oracle, you have to get TO such isolated islands. Starting inside such an island is already begging the biggest questions, and the ones most directly on the table. GEM of TKI

kairosfocus
July 3, 2011, 01:43 PM PDT
Well, interesting point, Mung, but let's be careful not to conflate two separate issues. On the one hand we are looking for the distribution of patterns produced by non-intelligent processes (however we want to define that) along those two axes, complexity and shortest-description length. That gives us our null distribution. Now, those non-intelligent processes will, I assume, include search algorithms such as blind searches and evolutionary algorithms, right? And the expectation is that evolutionary algorithms can't produce patterns that will populate the top right-hand corner of the page, but that known intelligently produced patterns (e.g. The Complete Works of Shakespeare) will, right? (Will pause here for response....)

Elizabeth Liddle
July 3, 2011, 12:49 PM PDT
Elizabeth Liddle:
In other words, what sorts of processes might generate patterns that would fall in the four quadrants of that matrix?
kairosfocus:
Remember, we are dealing with cut-down phase spaces here. Think of islands sticking out of a vast ocean. The issue is to get to the islands that are atypical of the bulk of the space, without intelligent guidance.
IOW, it's not so much about what can generate the pattern as it is about what can reasonably find the pattern.

Mung
July 3, 2011, 11:44 AM PDT
Dr Liddle: The onward linked paper explains in detail, in a peer reviewed document. Essentially, functionally specific information will not be very periodic, as a rule, but will also have some redundancy and correlations, so it is not going to have the patterns of flat random configs. The peak they show has to do with how the range of specifically functional configs will be narrow relative to a space of possibilities. In short, do too much perturbation and function vanishes. They speak a lot about algorithmic function, but the same extends to things that are structurally functional [wiring diagrams and the like] etc.

Here is a suggestion. Make an empty Word doc, then use Notepad or the like to inspect it, at raw ASCII symbol level. Tell us what you see.

Finally, this is not a map of a mapping of function to a config space, but of how metrics of types of sequence complexity would correlate with where we are dealing with function. The paper gives details. Config spaces are going to be topologically monstrous if we try to visualise them as anything beyond a 2-d map or maybe a 3-d one. Remember, we are dealing with cut-down phase spaces here. Think of islands sticking out of a vast ocean. The issue is to get to the islands that are atypical of the bulk of the space, without intelligent guidance.

Note the 2007 Durston et al paper is giving numerical values of FSC, based on the H-metric of functional vs ground states. I extended this to apply the resulting H values for protein strings to a reduced form of the Dembski Chi metric. GEM of TKI

kairosfocus
July 3, 2011, 11:24 AM PDT
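The "inspect it at raw byte level" suggestion above doesn't need Notepad; a small, hypothetical helper does the same thing for any file (the path is left to the caller, and the 64-byte limit is my arbitrary choice):

```python
# Illustrative sketch: dump the first bytes of any file as hex plus
# printable ASCII, so you can see the non-text structure a word
# processor stores even in an "empty" document.
def dump_bytes(path, limit=64):
    """Print the first `limit` bytes of `path` as hex and ASCII; return them."""
    with open(path, "rb") as f:
        data = f.read(limit)
    hex_part = " ".join(f"{b:02x}" for b in data)
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in data)
    print(hex_part)
    print(ascii_part)
    return data
```

Pointed at a freshly saved empty .doc file, this shows a substantial block of non-printable header bytes, which is the observation the comment is inviting.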
OK, let's carry on here then! Obviously, I like the X and Y axes on that plot, but I'm not sure I would have placed the FSC spike quite where they do (I do realise it's just a diagram). What they seem to be suggesting is that there is a tight negative function that relates complexity to compressibility, and that "FSC" is found within a particular range of complexity values (towards the upper end of the range). I'd have thought the relationship was much looser, and that if you plotted actual observed patterns on that matrix, you'd find a broadly negative correlation, but with some outliers. And I'm also suggesting that, if I am reading Dembski aright, the Specified Complexity axis runs orthogonal to the negative best-fit line, rather than being a privileged segment of that line. Still, that's sort of a quibble for now.

The really important question is: under the null hypothesis of "no design" (that's my null, now!), what distribution of patterns would we expect to see? In other words, what sorts of processes might generate patterns that would fall in the four quadrants of that matrix? Contingency is important of course, and my hunch right now is that what determines how the bottom left and top right quadrants are populated is how deeply nested the contingencies are. If we log-transform their axes so that their curve becomes a straight line (just so it's easier to envisage), what we would seem to be looking for, according to Dembski, is some data points that buck the trend, as it were, and display more...

Be back later with more thoughts.... Hope you are feeling a little more at ease this evening! We had a glorious evening last night on our boat - went a couple of miles up the river, and barbecued some chicken on the bank. It was the kind of "perfect English summer evening" that happens maybe once or twice per English summer!

Elizabeth Liddle
July 3, 2011, 10:58 AM PDT
Dr Liddle: the image thread is locked; it was just meant to hold an image. The other thread is on CSI as a numerical metric. GEM of TKI

kairosfocus
July 3, 2011, 10:14 AM PDT
Very nice, kf! Yes, we really do seem to be on the same page here. And yes, agreed, Mung. Shall we take this to kf's new thread? See you there!

Elizabeth Liddle
July 3, 2011, 10:02 AM PDT
Gotta go to the supermarket – are you guys with me so far?
Absolutely. I think you make some really good points. The shortness of a sequence is itself a limiting factor, limiting the number of potential patterns or sets of patterns. A bit string of length 4: 2^4 = 16. But within those 16, how many sets of patterns? 0000 1111 0101 1010

Mung
July 3, 2011, 09:46 AM PDT
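Mung's count is easy to check by brute force. A small sketch; which strings count as "patterned" is my own illustrative choice (constant strings and strict alternations), not a definition from the thread:

```python
from itertools import product

# A bit string of length 4 admits 2^4 = 16 configurations.
strings = ["".join(bits) for bits in product("01", repeat=4)]

# Of those 16, only a handful are highly "patterned" (compressible):
# constant strings and strict alternations, as picked out in the comment.
patterned = [s for s in strings
             if len(set(s)) == 1          # 0000, 1111
             or s in ("0101", "1010")]    # strict alternation
print(len(strings))   # 16
print(patterned)      # ['0000', '0101', '1010', '1111']
```

The ratio of patterned to total strings shrinks rapidly as the string lengthens, which is the "shortness is itself a limiting factor" point in reverse.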
Figure now here at UD.

kairosfocus
July 3, 2011, 07:04 AM PDT
PS: As noted earlier, the best way to visualise the CSI challenge is here, on a 3-d scale, as Abel et al have shown us.

kairosfocus
July 3, 2011, 06:52 AM PDT
Dr Liddle: You may find this new post of interest. GEM of TKI

kairosfocus
July 3, 2011, 06:49 AM PDT
Well, I've read Dembski's paper http://www.designinference.com/documents/2005.06.Specification.pdf yet again (printed it out, took it to bed with me!) and it seems to me, given that Dembski himself seems to prefer CSI as Design Detector, it's worth unpacking! And, thanks, Mung, for your endorsement of my post #137 (I still make a few East-for-West errors, but I think the principle is sound). And I'm reassured, because, putting aside for now the null hypothesis issue (!), and just looking at the axes, it does seem clear that if we plot patterns on a 2D plot, in which one axis is some measure of "Complexity" (my East-West axis), which Dembski defines informally as "difficulty of reproducing the corresponding event by chance" (and so a long string of characters is going to be more complex than a short string, the number of potential characters being equal), and the other is "Pattern simplicity" (my North-South axis, which I called "Compressibility"), which Dembski defines informally as "easy description of pattern", then it is clear that patterns he calls "specifications" are those that are high in both (my North-East corner):
It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) — but not (R) — a specification
So, bear with me while I think this through aloud (as it were): It seems clear to me (again leaving aside any hypotheses) that specifications are going to be fairly rare, because in general, complexity and compressibility ("easy description of pattern") are negatively correlated: complex patterns (long strings of stuff with low probability of repetition, i.e. drawn from a large set of possible patterns) tend not to have short descriptions, while patterns with short descriptions will tend to be drawn from a much smaller set of possible patterns. That's essentially what I was getting at in post #137.

And it occurs to me that actually, the complexity axis largely embraces stochastic (Chance) processes, while the shortest-description-length axis largely embraces "Necessity" (i.e. "Law-like") processes. To explain: I made "sine waves" my poster child for short-description-length strings with low complexity, and in general these are generated by simple physical laws; similarly, I made "white noise" my poster child for very-long-description-length strings with high complexity, and these are generated by stochastic processes like radioactive decay. So we've incorporated both concepts in our 2D matrix.

And, as I've said, these properties will tend to be negatively correlated. There will be very few patterns that have low complexity and a long description, because even if the shortest description is the whole pattern, if that pattern isn't very long, it will still have a pretty short description. The interesting part is the other corner: patterns with high complexity (drawn from a large set of possible patterns) and relatively short shortest-descriptions. Hence the negative correlation, of course: the density of patterns will be high along the North-West:South-East axis, but rarefied in the South-West corner and the North-East corner. And the North-East corner is where the interesting stuff is.

Gotta go to the supermarket - are you guys with me so far?

Elizabeth Liddle
July 3, 2011, 06:13 AM PDT
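The two poster children above (sine waves vs white noise) can be roughed out empirically. This sketch uses zlib-compressed size as a crude stand-in for "shortest description length"; that choice is my assumption — true Kolmogorov complexity is uncomputable, so any compressor is only an upper-bound proxy:

```python
import math
import random
import zlib

# Crude proxy for the two axes discussed above:
#   "complexity"  ~ raw length of the byte string
#   "description" ~ zlib-compressed size (smaller = more compressible)
def compressed_ratio(data: bytes) -> float:
    """Compressed size divided by original size."""
    return len(zlib.compress(data, 9)) / len(data)

n = 10_000
random.seed(1)

# Poster child 1: a quantised sine wave (lawlike, highly compressible).
sine = bytes(int(127 + 127 * math.sin(i / 50)) for i in range(n))

# Poster child 2: white noise (stochastic, essentially incompressible).
noise = bytes(random.randrange(256) for _ in range(n))

ratio_sine = compressed_ratio(sine)
ratio_noise = compressed_ratio(noise)
print(f"sine:  {ratio_sine:.2f}")   # well below 1.0
print(f"noise: {ratio_noise:.2f}")  # close to (or above) 1.0
```

The sine wave compresses to a small fraction of its length while the noise barely compresses at all, which is the negative correlation between complexity and compressibility in miniature.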
F/N: Apparently TWT does not understand that making a mafioso-style cyberstalking threat is not a private matter, even if communicated on the assumption that the intimidation will work its fell work in private. And he has now compounded his crime by publishing the incorrect allusion to my wife's name. Worse, he has tried to further "justify" his dragging in, as hostages by implied threat, of people not connected to the issues and debates, on the grounds that someone at UD published an expose on the sock-puppet MG. An expose that the author deemed going too far, apologised for, and has corrected. And if TWT cared, he would have seen that I registered my objection to such outing on learning of it the next day. His further escalation is wholly unjustified and outrageous.

And BTW, the fact that the name given -- and which I X'ed out in my own headline [can't you even take a simple hint like that, TWT?] -- is incorrect, is irrelevant to the highly material point that by your personal insistence on publicising my name and now my family connexions, you have publicly painted a target around me and my family. I hope you are proud of yourself. This is continued harassment in the teeth of a public warning to cease and desist (a requirement of some possibly relevant jurisdictions), and even in the teeth of a situation where some of the anti-evo crowd have stated warnings that this is going too far. Indeed, it is a tripping of a nuclear threshold. This is added to the dossier that will go to the authorities, as prior complaint. It is quite plain that only police force, judiciously applied, will stop you in your mad path. Good day sir. GEM of TKI

PS: Onlookers, you will observe that I have not responded to the insults addressed to me. They do not deserve reply.

kairosfocus
July 3, 2011, 04:33 AM PDT
F/N 2: Following up on links from my personal blog [in connexion with the threats against my family], I see where Seversky [aka MG? at minimum, of the same ilk] is still propagating the demonstrably false talking point at Anti-evo that CSI cannot reasonably be calculated. This is an illustration of the willful resistance to plain truth and patent hostility that have so poisoned the atmosphere on discussions of ID; now culminated in cyberstalking. Let me link and excerpt, slightly adjusting to emphasise the way that specificity can be captured in the log reduced form of the Chi equation. The same, that was deduced and presented to MG et al in APRIL -- with real world biological cases on the Durston et al metric of information -- and has had no reasonable response for over two months. Namely:
And, what about the more complex definition in the 2005 Specification paper by Dembski? Namely:

χ = – log2[10^120 · φS(T) · P(T|H)] . . . eqn n1

How about this (we are now embarking on an exercise in "open notebook" science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2

3 –> So, we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p) . . . eqn n3
Chi = Ip – (398 + K2) . . . eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . .

6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, it is so deeply isolated that a chance-dominated process is maximally unlikely to find it, but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed cause of information beyond such a threshold is the now proverbial intelligent semiotic agents.

8 –> Even at 398 bits that makes sense, as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.

9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley . . . .
As in (using Chi_500 for VJT's CSI_lite):

Chi_500 = Ip*S – 500, bits beyond the [solar system resources] threshold, [where S has been inserted as a dummy variable on specificity, S = 1/0, according as the body of information is specific or not per empirical tests or circumstances] . . . eqn n5

Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte / 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte / 147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a

10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework. Excerpting:
Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log-probability metric for information. That is, they use Shannon's H, the average information per symbol, and address shifts in it from a ground to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric. Using Durston's Fits from his Table 1, in the Dembski-style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7

The two metrics are clearly consistent, and Corona S2 would also pass the X metric's far more stringent threshold right off as a single protein. (Think about the cumulative fits metric for the proteins for a cell . . . ) In short, one may use the Durston metric as a good measure of the target zone's actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how the redundancy affects the information used across the domains of life to achieve a given protein's function; not just the raw capacity in storage-unit bits [= no. of AA's * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
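The results labelled n7 above reduce to simple arithmetic. A sketch of the reduced metric as stated in the comment, assuming S = 1 (i.e. taking the protein families as specified per the comment's own stipulation):

```python
# Reduced Chi metric as quoted above: Chi_500 = Ip*S - 500,
# bits beyond the 500-bit threshold. S = 1 is assumed here.
def chi_500(fits, S=1):
    """Bits beyond the 500-bit threshold for a given Fit value."""
    return fits * S - 500

# Fit values quoted from Durston et al's Table 1 in the comment above.
durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}
for name, fits in durston_fits.items():
    print(f"{name}: {chi_500(fits)} bits beyond the threshold")
```

This reproduces the 332, 188, and 785 figures in results n7; with S = 0 every value drops below the threshold, which is what the dummy specificity variable is for.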
In the face of such a log reduction and specific, real-world, biologically applicable calculated results, Seversky, how are you still trying to push the talking point that CSI cannot be calculated and that design thinkers are not providing metrics of CSI? Is that not grounds for deeming your talking point a patently slanderous, willful, deceptive misrepresentation maintained for two months in the teeth of easy access to corrective evidence? Seversky, you have had ample opportunity to know that your claim is false, and patently so, so your propagation of a hostility-provoking falsehood in an already polarised context is willfully misleading and incitatory. STOP IT NOW, in the name of duties of care.

Onlookers, those who are so willfully poisoning the atmosphere in that way need to think again about what they are doing, and its consequences in the hands of those who are intoxicated on the rage they are stoking. Dawkins' notorious attempt to characterise those who challenge his evolutionary materialism as ignorant, stupid, insane and/or wicked is provably slanderous and incendiary, provoking the extremism I have had to now take police action over -- notice the allusion that to bring children up in a Christian home and community is "child abuse" in the linked headline, Seversky, and ponder on the sort of dogs of war your side is letting loose. FYFI, Seversky, it is now a DEMONSTRATED fact what the consequences of this sort of willful slanderous misrepresentation of design theory and design thinkers are. So, you and your ilk have a duty of care to correct what you have done, and to work to defuse a dangerous situation. Further irresponsible misconduct on your part -- like I linked above -- is inexcusable.

And BTW, you will see from the discussion above that Dr Liddle, a decent woman, is in basic agreement with me on the nature of the inference to design. Good day, sir. GEM of TKI

kairosfocus
July 3, 2011, 04:09 AM PDT
Mung and others: Footnotes; as you may know, I have had to be busy elsewhere this weekend, including a first conversation with an attorney from my local prosecutor's office.
(BREAK IN TRANSMISSION: When you trip a nuke threshold, TWT and Y, that cannot be taken back or whistled by in the dark; FYI, prior complaint is to be duly and formally entered this week upcoming. MF, on consultation, I will follow up with you [let's see if we can build principles of detente]. TWT and ilk [DK, this evidently includes you], a word. TWT, since you in particular evidently doan' know A from bull foot about Marxism, its death toll and its mutant varieties [and Alinsky-ism fits in here], if nukes are now in play, you have to find a way not to blow up the world. And BTW, TWT, FYFI on what seemed to excite you into tripping the nuke wire by trying an outing stunt: soteriological inclusivism is a view -- popularised by CS Lewis in the Chronicles of Narnia -- that, per the principles of judgement by light in Rom 2 etc, God welcomes the sincere and penitent of all backgrounds; no comfort, of course, to those who willfully resist the truth or the right they know or should know. Amplifying, FWIW, I fully expect to see some Muslims, some Buddhists, some animists etc rubbing shoulders with some Jews and some Christians as we walk the proverbial golden streets. So, your sophomoric fulminations and imagined justification for making a mafioso-style threat against my family evaporate, exposing your bigotry and want of basic research on what you rage against in ignorance, leaving behind only the aggravating factor of anti-religious hostility on the base charge of cyberstalking.)
Now, the EF is first concerned with the question of contingency. Things that under similar initial conditions run out under deterministic dynamics of mechanical necessity per some differential equation model or another, are following lawlike regularities. BTW, I appreciate that this is a limiting case of a chance process as a random variable that is always 1 is technically still a distribution; necessity can thus be enfolded into chance. But that is a tad pedantic, pardon. And, it cuts across the insight of the EF that it is high contingency that leads us to infer -- on massive empirical base -- to chance or choice. And, yes, one may build a tri-nodal form of the case structure of the EF. But, it seems simpler and plainer to just do SPECIFIED AND -- logical operator sense -- COMPLEX BEYOND A THRESHOLD. This also emphasises the point that the thing must be jointly -- and simultaneously on the same aspect -- specified AND complex beyond the bound in question to be credibly inferred as best explained on choice not chance contingency. Also, there is indeed a model by Abel et al that visualises a three dimensional frame with particular reference to random, ordered and functional sequence complexity. This is the framework for Durston et al's metric of FSC, which I have slotted into the log-reduced form of the Dembski et al metric. In doing that, I took advantage of the generally relaxed attitude of practising engineers to information metrics: there's more than one way to skin a cat-fish. But once skinned they fry up real nice and tasty.
(BREAK IN TRANSMISSION: Hint, this is a theological allusion, that Christians are not strictly bound by OT codes and precedents, but filter these foreshadowings through the love-bomb of the cross and the resurrection, i.e. Christ-centred principles of righteousness, justice and compassionate concern for all apply; even when we must correct sins and deceptions, and when we must bear the instruments of deterrence, or defence of the civil peace of justice in response to wrongdoing, ranging from the schoolyard bully to the international terrorist or aggressor. And, FYI, turn the other cheek is about how we are to handle insults and minor provocations [notice, having warned, these I ignored -- but you insisted on attacking those I am sworn or duty bound to protect up to and including with my life]; criminal conduct towards or abuse of those we are sworn to defend in a governmental role, starting with family, is a different problem and calls forth a wholly different pattern. Cf. here how Paul handled the conspirators who tried to assassinate him, accepting a guard of 70 horse [probably archer-lance men] and 200 spearmen to take him to safe custody in the Roman Capital on the coast, and also ultimately appealing on citizenship rights to Caesar's judgement seat -- probably to Burrus, Nero's mentor before he went mad. Update to today's technology -- an air-mobile armored cavalry troop with heavy artillery and air support on call -- and you get a picture of the sharp difference between proper response to a slap on the face and warranted response to criminal actions; in Paul's case conspiracy to assassinate, verging on armed rebellion. Those 40 conspirators intended to cut their way though the Roman guard to kill Paul, and in the hills between Jerusalem and the coast, there could have been armed bands justifying so strong an escort as ordered by the equivalent of a Colonel. 
So, Christians are fully justified to appeal to proportionate civil, police and military response by the state or relevant institutions and organisations in defence of the innocent and the civil peace of justice. And, to work with or in such organisations. Further to all this, lower magistrates have a duty of interposition where higher authorities have gone wrong and are abusing the sword to abuse the innocent. Hence, the Dutch DOI of 1581 and other state and academic documents and events, leading to the US DOI of 1776 and onwards.)
So, bearing in mind the issue of contingency, we can then see how CSI and the EF are both formally equivalent in force and complementary in how we seek to understand what the design inference is doing. And yes, search is a reasonable frame of thought, as Marks and Dembski are now profitably exploring using the concept of active information. GEM of TKI

kairosfocus
July 3, 2011, 03:20 AM PDT
Surely configurations of matter and energy which exhibit the capacity for intelligence and design are ubiquitous in the solar system. A search for just one should be a simple thing for unguided, non-intelligent, materialist-physical forces to carry out with success. Or not.

Mung
July 2, 2011, 06:43 PM PDT
That's an interesting conundrum, DrBot. We'll have to see if we can work it out. :) At first blush I'd say it begs the question of whether the human capacity to design is the result of purely materialist physical forces. If there's some mechanical law at work it's indistinguishable from chance, so I don't know why we'd think there was some physical law that our design abilities are based on. Do you dispute that human designers serve as an intelligent cause? It seems that you have to at least accept that much, or you'd object to the design inference on those grounds. Do you believe the material world, all that is physical/material, is intelligent?

Mung
July 2, 2011, 03:37 PM PDT
kairosfocus:
This is a case where once information is a possibility, you already are implying high contingency.
Excellent point. High information content requires high contingency. I also like how you phrase the issue in terms of a search for the zone of interest. Given all the material resources at the disposal of the universe/solar system since its inception, would we expect a search to land here? So we can frame the hypotheses in terms of a search. And we can combine the concepts of search and information. One can ask how much information a search would require to find the item of interest. It could definitely help to think in terms of a search, and the information required for the search, and that is for sure the direction Dembski/Marks have taken.

Mung
July 2, 2011, 02:54 PM PDT
I’m not sure I understand the question.
Not a problem, I'm getting used to it ;) Dembski is defining anything that arises from the operation of the physical world as chance: "chance so construed characterizes all material mechanisms." We are capable of design, so if our ability to design is a result of the operation of the physical world, then Dembski is including design under the category of chance. If our design abilities are based on physical law, then any comparison of human design with biology is a comparison of chance (our ability to design) with biology (an unknown origin).

DrBot
July 2, 2011, 02:54 PM PDT
Hi Lizzie, I think the EF is easier on the eye. But I also want to understand how the CSI calc works into things as presented in Dembski's 2005 paper. For an example of the first stage of the EF, see again my post as far back as #29. I think I was trying to make two points. The first thing we try to eliminate is necessity. But the other option is not chance, it's contingency. And that is because contingency includes both chance and choice (as kf likes to put it - and I think it is a good way to put it as well). I really liked your description in 138. A different mental picture from the EF, but an image nonetheless.

Mung
July 2, 2011, 02:46 PM PDT
Mung, I quite agree that those possibilities are not mutually exclusive. But the two hypotheses tested have to be, under Fisherian hypothesis testing. No matter. We are now "cooking on gas" as they say around here :) Cool. kf - yes, thanks for the correction re Shannon Information. Yes, I meant total bits, not mean bits.

Do we agree, then, that if we plot the complexity of a pattern (in bits) along one axis, and the compressibility (in some units to be decided!) along the other, then, if we plot patterns found in nature along these two axes, the two will tend to be negatively correlated? But that there will be a bell curve through a section cut at right angles to the negative slope? And that CSI patterns will be those towards the top right-hand corner? (I wish I could post a plot - I'll try to host somewhere and post a link.)

Elizabeth Liddle
July 2, 2011, 02:30 PM PDT
Hi DrBot. I'm not sure I understand the question. Are you essentially asking whether, if we were to write a computer program and save it to disk and then run the program, we would have to attribute the design of the program to chance because it had been embodied in a physical medium and been run using a purely material mechanism? Or are you asking about if we were to design a program that could itself design programs?

Mung
July 2, 2011, 02:30 PM PDT
Excuse me Elizabeth, but why do you continue to refer to the EF as a two-stage process (see 145)? Dembski:
Given something we think might be designed, we refer it to the filter. If it successfully passes all three stages of the filter, then we are warranted asserting it is designed.
regards
Mung
July 2, 2011, 02:23 PM PDT
"Indeed, chance so construed characterizes all material mechanisms." And if intelligence and intentionality (the ability to design) can be embodied in a purely material mechanism, then design is also chance?

DrBot
July 2, 2011, 02:01 PM PDT