Uncommon Descent Serving The Intelligent Design Community

The Chronicle says of Gonzalez “a clear case of discrimination”

The Chronicle of Higher Education has a balanced article on Iowa State’s refusal to tenure Guillermo Gonzalez.

Advocate of Intelligent Design Who Was Denied Tenure Has Strong Publications Record
By RICHARD MONASTERSKY

At first glance, it seems like a clear-cut case of discrimination. As an assistant professor of physics and astronomy at Iowa State University, Guillermo Gonzalez has a better publication record than any other member of the astronomy faculty. He also happens to publicly support the concept of intelligent design. Last month he was denied tenure.

Read More

Comments
Atom: From my view, it is more a question of what helps you see it better. But that's important, too. GEM of TKI

kairosfocus
June 12, 2007 at 12:30 AM PDT
Thanks for the prolonged discussion, GEM. As I mentioned, you did allude to all the pieces of the puzzle, but I guess I needed it put together in the way I did for it to "get to the point", if you would. I think having the concept of naive specifications helps answer the question quickly and elegantly. I especially needed to see the contingency/asymmetry of non-naive specifications (FSCI, etc.), versus the equiprobable symmetry of all naive specifications. Anyway, thank you for your patient input! And jaredl, thanks as well.

Atom
June 11, 2007 at 8:12 AM PDT
Hi again Atom & Jaredl (& onlookers): First, I see Jaredl bows out -- fare thee well, friend. (And, I note that we are here dealing with the messy world of fact-anchored, defeatable reasoning to best explanation, not the ideal world of perfect proofs that cannot be disputed by rational agents. Indeed, in a post-Godel world, not even Mathematics rises to that exalted state of grace. Much less, science. Indeed, I think that Paul put it aptly 2,000 years ago: "we walk by faith and not by sight" the question being, which faith-point we accept and why . . . Thus, my linked discussions above. Philosophy is a meta issue in all of these discussions; that is a mark of paradigm shifts and scientific revolutions. Indeed, Lakatos reminds us that there is a worldview core in any paradigm, and that it is surrounded by a belt of scientific theories/ models. So, we need to be philosophically literate in order to be scientifically literate in a world in which sci revos are coming at us fast and furious.) Now, on further points: 1] Atom: when we hit an isolated, already defined subset (even if implicitly defined, as in the case of functional configurations or “Is Sequence A” vs. “Not Is Sequence A”) agency is the best explanation. True, but it is not always the correct one, giving us an inconsistent method (in some cases comes to correct conclusion, but using the same mathematical reasoning, comes to the wrong conclusion with Sequence A.) Precisely -- as with all empirically anchored science and all statistical reasoning on hypotheses that is open to errors of inference. That is, we are here up against the limits of the class of reasoning in general, not a defect of the particular hypothesis/ explanation/ model/ theory. Thus, the point that scientific reasoning is provisional, and open to correction in light of further observation and/or analysis. Thence,t he issue that we ought not to apply an inconsistent standard of evaluation, i.e we must avoid selective hyperskepticism because we don't like the particular explanation that by reasonable criteria is "best." [But also note that this means that the inference to design is just the opposite of a "science stopper"!] 2] We can ask “What are the chances that we’ll hit a sequence that is part of an extremely isolated subset?” The answer is, of course, one. There is always a naive specification that applies across every sequence equally, thus telling us nothing about the probability of hitting that sequence. In short, if we throw the figurative i-sided dice a certain number of times, we will get an outcome of some sort out there, which will be a unique sequence within the set. This leads tot he point I made previously that, on condition of reliable observation, the probability of observing a given outcome accurately, having rolled the dice so to speak is pretty nearly 1. 3] other than the naive specification set, does a sequence belong to any additional specification set? This is where the additional level of contingency comes into play. These are the specifications that are important, if for no other reason, because they are contingent; a sequence does not have to be part of any additional specification set. So if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length. 
That is, where there is an independent and relevant specification (other than that this is a particular member of the configuration space), then it is possibly hard to hit by chance. Such specifications inter alia include: being biofunctional as a macromolecule within the DNA-controlled, information-based cellular machinery and algorithms, being a flyable configuration of aircraft parts, being a constellation of laws and parameters leading to a life-habitable cosmos relative to the observed biofunctions of cell-based life, etc. When it is actually hard to hit by chance, sufficiently so [e.g., beyond the Dembski-type bound] that relevant probabilistic resources are exhausted, then that "actually hard" becomes so inferior an explanation relative to the known behaviour of agents, that agent action is now the best explanation. But of course, the reasoning is defeatable. 4] if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length. In effect we are back to the same conclusion, but by a bit of a roundabout. However, multiple pathways or ways of expressing an argument often help to provoke understanding and acceptance. I do not see anything to object to of any consequence in your solution to this point. And, as just noted, multiple substantially equivalent pathways of expression are often helpful. GEM of TKI

kairosfocus
June 11, 2007 at 1:53 AM PDT
...There is always a naive specification that applies across every sequence equally...
That should read: There is always a naive specification that can be applied to any sequence, thus all sequences equally belong to at least one isolated subset, making inclusion in such a naive implicit set meaningless and irrelevant to probability calculations. (Namely, because of the equiprobable nature of such inclusion across all sequences.)

Atom
June 10, 2007 at 10:54 PM PDT
Now that I feel that everyone involved understands the issue, I can lay out what I think is a clean solution to the difficulty. (If I am mistaken, or my solution fails to answer some difficulties, please don't hesitate to speak up.) As I pointed out before, we have our counter-example of Sequence A, which can be any N-bit binary number. It is implicitly specified, being part of a rare (one-member) subset. More rare than many FSCI configurations, in fact. Sequence A is also a member of ever more exclusive subsets, beginning with all the sequences that share the same first digit (either begin with a 1 or 0), to those that share the first two digits, etc., which forms a powerset of all possible digit matches that this sequence could be a part of. (This may be confusingly worded, but I trust you understand the idea behind it...it is late.) Now, even though it is implicitly specified and the only isolated member of a subset (i.e. "Is Sequence A"), so are all other N-bit sequences. (This may have been alluded to earlier by GEM, but not focused on; I think this is what solves the problem, after much thought.) Every sequence of N bits has a powerset of specifications that it matches. So we can call these naive specifications, since they hold across all sequences. In this way, we are dealing with a meta-level of contingency. We can ask "What are the chances that we'll hit a sequence that is part of an extremely isolated subset?" The answer is, of course, one. There is always a naive specification that applies across every sequence equally, thus telling us nothing about the probability of hitting that sequence. But other than the naive specification set, does a sequence belong to any additional specification set? This is where the additional level of contingency comes into play. These are the specifications that are important, if for no other reason, because they are contingent; a sequence does not have to be part of any additional specification set. So if it is, then its matching this new, non-naive specification set does have relevance for probability calculations, since it is asymmetrical across all possible sequences of that length. I'll leave it at this. If there are any questions or concerns, or if an example is needed to understand what I'm trying to say, let me know.

Atom
June 10, 2007 at 10:49 PM PDT
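The naive/non-naive distinction in the comment above can be made concrete for a small N, where the whole configuration space can be enumerated. The sketch below is an editorial illustration only (Python is assumed; N = 12 and the "all bits identical" specification are stand-ins for the 500-bit case and a functional specification):

```python
# Editorial illustration of the naive vs. non-naive specification point above,
# for a small N where the whole configuration space can be enumerated.
from itertools import product
import random

N = 12  # small stand-in for the 500-bit case
all_seqs = [''.join(bits) for bits in product('01', repeat=N)]

# Naive specification: every sequence is the sole member of its own "Is Sequence X"
# subset, so a random draw lands in *some* one-member subset with probability 1,
# which tells us nothing.
drawn = ''.join(random.choice('01') for _ in range(N))
assert all_seqs.count(drawn) == 1
print("P(drawn sequence belongs to some singleton subset) = 1")

# Non-naive specification (example): "all bits identical". Only 2 of the 2^N
# sequences satisfy it, so membership is asymmetric across the space and does
# carry probabilistic information.
special = [s for s in all_seqs if len(set(s)) == 1]
print(f"P(all bits identical) = {len(special)}/{len(all_seqs)} = {len(special)/len(all_seqs):.6f}")
```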
Of course, where did those handy laws come from.
The Intelligent Designer, IMO.
But more on point, the issue on the specification of proteins etc is that we have mutually interacting complex, functionally specified entities forming a coherent integrated system that is sensitive to random perturbation
Granted, all the properties and interactions isolate that subset and make it vastly smaller than the set "Not Is FSCI". This was not at issue; I already granted that the subset "IS FSCI" is vastly smaller than the set "Not Is FSCI".
i.e is isolated in the resulting truly vast config space.
It is dwarfed by "Not Is FSCI", but so is "Is Sequence A" by "Not Is Sequence A". Both are implicitly rare and isolated, the second example more so.
That co-adapted functionality is independent of the chemical forces that would have had to form the first such molecules in whatever prebiotic soup, under evo mat scenarios.
Again granted, and not at issue. (I'll assume this is for the benefit of the lurkers.)
So, we see the problem of hitting a complex fine-tuned, coadapted design by chance, relative to doing so by agency. The former is vastly unlikely, the latter on experience not at all so.
This is only to say that when we hit an isolated, already defined subset (even if implicitly defined, as in the case of functional configurations or "Is Sequence A" vs. "Not Is Sequence A") agency is the best explanation. True, but it is not always the correct one, giving us an inconsistent method (in some cases comes to correct conclusion, but using the same mathematical reasoning, comes to the wrong conclusion with Sequence A.)
Of course, in this case, we can assign probability numbers using the known laws of the physics and chemistry involved, and the digital nature of the macro-molecules.
Yes, we can also assign probabilities in the case of Sequence A, my N-bit binary number that we hit by chance.

Atom
June 10, 2007 at 12:52 PM PDT
I'm sorry; I don't know how much more plainly I can put things, and it hasn't helped, so I'm bowing out.

jaredl
June 10, 2007 at 11:56 AM PDT
PS: Okay, here is a link on the issues of comparative difficulties and inference to best explanation etc. Here, on the worldview roots of proof, as well.

kairosfocus
June 10, 2007 at 3:32 AM PDT
Hi Atom and Jaredl: Following up. (BTW, please note I am not primarily seeking to "persuade" but to point out what is sound or at least credible relative to factual adequacy, coherence and explanatory power. Dialectic, not rhetoric -- I am all too aware (having lived through the damaging result of multiple economic fallacies in action here in the Caribbean) that a sound and well-established argument is often the least persuasive of all arguments.) On some points: 1] Jaredl: I have stated elsewhere that cosmological arguments for design are necessarily vacuous, using Dembski’s formulation of CSI as the sole legitimate criterion for detecting design. Why should we accept that claim at all? After all, people were inferring accurately to design long before Dembski came along. And, indeed,t he very term, complex specified information came up out of the natural state of OOL research at the turn of the 80's. Dembski has provided one mathematical model of how CSI serves as a design detection filter, not the whole definition. As to the issue of cosmological ID, this is at heart a sensitivity argument, i.e as Leslie put it, we have a looong wall, and here in a 100 yd stretch there is just this one fly [other portions elsewhere may be carpeted with flies for all we know or care]. Then, bang-splat, he is hit by a bullet. Do we ascribe that to random chance or good aim, why – on a COMPARATIVE DIFFICULTIES basis. The answer is obvious – and breaks through multiverse type arguments. The issue is not proof beyond rational dispute to a determined skeptic, but which underlying explanation is better, given issues over factual adequacy, coherence, and explanatory power: ad hoc vs simple vs simplistic. And BTW, that comparative difficulties relative to alternate live option start-points for explanation, is how we get away from vicious circularity in a world in which all arguents in the end embed core faith commitments. 2] Atom: it doesn’t work for novel proteins and cell types. They were never physically instantiated (that we’re aware of) before they were actually made. Given a set of physical/chemical laws, the specifications for all proteins exist implicitly, in a mathematical sense. Of course, where did those handy laws come from. But more on point,t he issue on the specification of proteins etc is that we have mutually interacting complex, functionally specified entities forming a coherent integrated system that is sensitive to random perturbation – i.e is isolated in the resulting truly vast config space. That co-adapted functionality is independent of the chemical forces that would have had to form the first such molecules in whatever prebiotic soup, under evo mat scenarios. So, we see the problem of hitting a complex fine-tuned, coadapted design by chance, relative to doing so by agency. The former is vastly unlikely, the latter on experience not at all so. Of course, in this case, we can assign probability numbers using the known laws of the physics and chemistry involved, and the digital nature of the macro-molecules. 3] we seem to find the cosmological argument intuitively compelling, even though it is necessarily vacuous, using Dembski’s work as a norm. Again, it cannot be shown that things could [not] have been otherwise. I have bolded the problem. 
There is a logical category error at work: arguments by inference to best explanation on a comparative difficulties basis are the precise reverse of proofs relative to generally agreed facts and assumptions: EXPLANATION --> OBSERVATIONS, vs FACTS ETC --> IMPLICATIONS. Science in general works by the former, as does philosophy; that is why conclusions are provisional and subject to further investigation or analysis. As one result, Dembski is happy to accept the point that the inference to design by elimination is possibly wrong – as are all Fisher-style statistical inferences. But, if you use a selectively hyperskeptical criterion to reject design inferences you don't like while accepting a science that is in fact riddled with such defeatable inferences in general, you are being inconsistent. 4] we cannot know whether there is more than one face to the universal die. Hence, this answer fails to provide empirical grounds for a probability assessment. Same problem again. The reasoning is already known to be defeatable, but please provide empirical evidence before inferring that "defeatability in principle" implies "defeated in fact." (The Laplacian principle of indifference is a generally used tool for probability assignments, and the possibility that the universe may be radically different in the abstract is irrelevant to the calculations of provisional probabilities, deriving therefrom relative to the world of credibly observed fact.) 5] Neither of these claims suffices to produce the demonstration of low probability necessary to infer design utilizing Dembski's criterion of CSI as the norm. Again, we are looking at provisional inferences to best explanation [what science is about], not attempted demonstrative proofs. Within the context of such, we have produced a probability number that is relevant to what we credibly know – as opposed to whatever we may wish to speculate. So, to overturn the reasoning, one should provide reason to doubt the probability assignment relative to the empirical data, not the abstract possibility that things may be other than observed. [For, as Lord Russell pointed out, after all it is abstractly possible that the world as we experience and remember it was created in a flash five minutes ago, not as whatever we think we know about it. The two worlds are empirically indistinguishable.] GEM of TKI

kairosfocus
June 10, 2007 at 3:27 AM PDT
Thanks for pointing that out. It doesn't satisfy. Here's the relevant portion(s), with some emphasis added:
You can't objectively assign "probabilities": First, the argument strictly speaking turns on sensitivities, not probabilities-- we have dozens of parameters, which are locally quite sensitive in aggregate, i.e. slight [or modest in some cases] changes relative to the current values will trigger radical shifts away from the sort of life-habitable cosmos we observe. Further, as Leslie has noted, in some cases the Goldilocks zone values are such as meet converging constraints. That gives rise to the intuitions that we are looking at complex, co-adapted components of a harmonious, functional, information-rich whole. So we see Robin Collins observing, in the just linked:"Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist . . . Would we draw the conclusion that it just happened to form by chance? Certainly not . . . . The universe is analogous to such a "biosphere," according to recent findings in physics. Almost everything about the basic structure of the universe--for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy--is balanced on a razor's edge for life to occur. As the eminent Princeton physicist Freeman Dyson notes, "There are many . . . lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules" (p. 251)--in short, life as we know it would be impossible." So, independent of whether or not we accept the probability estimates that are often made, the fine-tuning argument in the main has telling force.
The bolded simply repeats my point: we seem to find the cosmological argument intuitively compelling, even though it is necessarily vacuous, using Dembski's work as a norm. Again, it cannot be shown that things could have been otherwise.
Can one assign reasonable Probabilities? Yes. Where the value of a variable is not otherwise constrained across a relevant range, one may use the Laplace criterion of indifference to assign probabilities. In effect, since a die may take any one of six values, in absence of other constraints, the credible probability of each outcome is 1/6.
The crucial point of disanalogy is that we cannot know whether there is more than one face to the universal die. Hence, this answer fails to provide empirical grounds for a probability assessment. What is required is evidence that the laws of nature are contingent.
Similarly, where we have no reason to assume otherwise, the fact that relevant cosmological parameters may for all we know vary across a given range may be converted into a reasonable (though of course provisional -- as with many things in science!) probability estimate.
Here, Kairosfocus is assuming the very point at issue.
So, for instance, for the Cosmological Constant [considered to be a metric of the energy density of empty space, which triggers corresponding rates of expansion of space itself], there are good physical science reasons [i.e. inter alia Einsteinian General Relativity as applied to cosmology] to estimate that the credible possible range is 10^53 times the range that is life-accommodating, and there is no known constraint otherwise on the value. Thus, it is reasonable to apply indifference to the provisionally known possible range to infer a probability of being in the Goldilocks zone of 1 in 10^53. Relative to basic principles of probability reasoning and to the general provisionality of science, it is therefore reasonable to infer that this is an identifiable, reasonably definable value. (Cf Collins' discussion, for more details.)
Neither of these claims suffices to produce the demonstration of low probability necessary to infer design utilizing Dembski's criterion of CSI as the norm. I apologize for the lengthy citations, but I feel it necessary to show why I find Kairosfocus's arguments unpersuasive.

jaredl
June 8, 2007 at 10:31 AM PDT
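For reference, the indifference calculation the two sides are arguing over reduces to a ratio of ranges. This is an editorial sketch only (Python assumed; the 1-in-10^53 figure is the one quoted above, and the code takes the contested "credible range" as a given rather than establishing it, which is exactly the point jaredl disputes):

```python
# Editorial sketch of the Laplace-indifference step quoted above; the 10^53
# figure is taken from the comment, not derived here.
from fractions import Fraction

def indifference_probability(favourable, total):
    """P(favourable) = favourable/total when nothing privileges any part of the range."""
    return Fraction(favourable, total)

print(indifference_probability(1, 6))        # the die: 1/6 per face
print(indifference_probability(1, 10**53))   # the quoted cosmological-constant estimate: 1 in 10^53
```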
GEM, I guess one could draw the line at: your specification was physically instantiated at least once before the event, thus making it a true pre-specification. I'd agree with this. But then it doesn't work for novel proteins and cell types. They were never physically instantiated (that we're aware of) before they were actually made. Given a set of physical/chemical laws, the specifications for all proteins exist implicitly, in a mathematical sense. But again, we could not explicitly list out all possible AA sequences, even beginning at a certain finite length. And again, with these we are also noticing the events after the fact. So it seems your demarcation criterion would cut both ways. Jaredl, interesting thought. Kairosfocus did go through your objection in his always-linked article; you may read it and see if it satisfies you.

Atom
June 8, 2007 at 10:10 AM PDT
"instinctive mental processes..."jaredl
June 8, 2007 at 9:22 AM PDT
I have another issue. I have stated elsewhere that cosmological arguments for design are necessarily vacuous, using Dembski's formulation of CSI as the sole legitimate criterion for detecting design. One cannot perform the required probability calculation - it cannot be shown that things could be otherwise. As far as we know, the set of possible natural laws contains only one element. Why, therefore, do we infer design anyway? I'm going to suggest something here. Dembski, in crafting his explanatory filter, attempted to capture in philosophical and mathematical symbolism the instinctive mental processes each of us actually executes in inferring design. Clearly, the filter is missing something, for cosmological arguments seem compelling, even lacking the necessary probabilistic analysis. Could it be that mere algorithmic compressibility is, in fact, a reliable hallmark of design?

jaredl
June 8, 2007 at 9:21 AM PDT
Hi Atom & Jaredl: I follow up: 1] A: the sequence already exists, independently, prior to my flipping of the coins: if you arrange every 500 bit sequence from 000…000 to 111…111 you’ll find it in there Actually, you cannot do this sort of search by exhaustion within the bounds of the known universe: there are 2^500 arrangements, or ~ 3.27*10^150 arrangements, more than the number of quantum states in the observed universe across its lifetime. There simply are not the physical resources to do it. This is a key part of the problem -- you cannot exhaust the possibilities, so you have to target a subset of the abstract configuration space. A chance process by definition has no reason to prefer any one zone in the space over another, ands so it is maximally unlikely to successfully find any state that is a predefined target. --> This is the reason why you see so many speculative attempts to project a quasi-infinite unobserved wider universe as a whole, to provide enough "credible" scope for the cumulative odds to shorten. (Of course, that is a resort to the speculative and metaphysical, not the scientific, and so it is inferior tot he inference that we have an observed universe out there and the odds relative to what we see and can reasonably infer are as we just outlined.] 2] you’ll also find the two categories “Is Sequence A” and “Not Is Sequence A” are already automatically defined as soon as Sequence A exists Of course, with the "paint the target around the sequence that happens to fall out" instance, the sequence A EXISTS after the fact, not independent of tossing the coin. The odds of A given observation of A are just about 1, as pointed out already. Where theings get interesting, is when we observe that we cannot lay out the configuration space in the observed physical world, i.e we cannot exhaust the arrangements. Then, we sample that conceptual not physical space, identifying ahead of time say an ASCII sequence of the opening words of the US DOI [or say Genesis 1:1 in KJV] Roll the coins, and bingo, that's just what turns up, 1 in 10^150 or so. Not very likely at all! [Except, by a conjurer's trick that we didn't know of; i.e agency.] FOr the real world case of cells, the config spaces are much, much higher in scale. E.g. a typical 300-monomer protein can be arranged in 20^300 ~ 2.04*10^390 ways, and a 500k DNA strand can be arranged in ~ 9.90*10^301,029 ways. To get to life, we face an utterly incredible search on multiple dimensions. 3] Since Sequence A is just a 500 digit binary number, we know it exists independently of my event and has always been a member of a relatively small subset. So the independence criteri[on] is met Sequences from 000 . . to 1111 . . . abstractly exist independent of being actualised materially. That is not the problem. The problem is to get to a targetted functional sequence that is isolated in the abstract space, in the physical world without exhausting probabilistic resources. As we just saw, that is just not reasonable in the gamut of the observed universe. Of course, having tossed the coins and converted X1, X2, . . . X500 into a particular instantiated sequence, one may ex post facto paint the target around it, but that is after the hard part was already done, selecting out of the abstract set of possibilities some particular case at random. Where the independent and functional specification become important is as just pointed out -- a meaningful pattern not an arbitrary one. 
(And we happily accept false negatives; we do not need a super decoding algorithm that automatically identifies any and all possible meaningful sequences, just cases where we know the functionality/meaningfulness already. Actually, we have such an "algorithm" in hand, once one accepts that God is, but that is irrelevant to our case!) 4] algorithmically compressible strings form a tiny subset of all possible strings Yes, and in a context where tiny is relative to a config space that cannot be wholly instantiated in the gamut of the observed universe. So, we must always abstract out a small subset of it within our reach, and lo and behold we hit the mark! [Now we have a choice: chance or agency. If I see coins laid out in a sequence spelling out Gen 1:1 in part in KJV, in ASCII -- note the compression/simple describability here! -- I will infer to agency with high confidence!] 5] if we want to root it in an objective mathematical basis, it seems the independently existing, unlikely, relatively tiny subset member Sequence A (or any such sequence) would become a problem. The problem is inherently about the intersection of the ideal mathematical world with the empirical one. That is always the issue with inference testing, and a part of the price you pay for that is that you have less than 100% confidence in your conclusions. Nor is that provisionality a novelty in the world of science. Scientists, in short, live by faith. So do Mathematicians, post Goedel, and so do philosophers. So does everyone else. The issue is which faith, why -- and selective hyperskepticism does not make the cut. GEM of TKI

kairosfocus
June 8, 2007 at 1:05 AM PDT
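The configuration-space figures quoted in the comment above can be checked directly, since Python integers are arbitrary precision. An editorial sketch (the 2^500, 20^300 and 4^500,000 spaces are the ones the comment cites; nothing here bears on the quantum-state comparison itself):

```python
# Editorial check of the configuration-space arithmetic quoted above.
from math import log10

def sci(n):
    """Format a big positive integer as m.mm x 10^e (order-of-magnitude view)."""
    e = len(str(n)) - 1
    return f"{n / 10**e:.2f} x 10^{e}"

print(sci(2**500))    # ~3.27 x 10^150  (500 two-sided coins)
print(sci(20**300))   # ~2.04 x 10^390  (300-residue protein, 20 amino acids per position)

# 4^500,000 has about 300,000 digits, so report just its order of magnitude:
exp = 500_000 * log10(4)   # a 500,000-base DNA strand, 4 bases per position
print(f"4^500000 ~ {10**(exp % 1):.2f} x 10^{int(exp)}")   # ~9.90 x 10^301029
```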
[B]eing part of a relatively tiny set....

jaredl
June 7, 2007 at 10:51 AM PDT
It is precisely that feature which enables us
Which feature?

Atom
June 7, 2007 at 10:01 AM PDT
Algorithmic complexity does play a role, but I always assumed it was because algorithmically compressible strings form a tiny subset of all possible strings. I thought it was this being part of a relatively tiny set that made them special. It is precisely that feature which enables us, on Fisher's approach to statistical hypothesis testing, to reject known chance hypotheses as explanations for the phenomena at issue in the face of low probability. The fact that algorithmic compressibility combined with low probability is a reliable indicator, in our experience, of the action of intelligent agency is what lets us go from ruling out all known chance hypotheses to inferring design.

jaredl
June 7, 2007 at 9:34 AM PDT
Addendum: Above I wrote:
unless we change it to mean “specified by an intelligence beforehand” (which we can never rule out, since we don’t know what every intelligence has specified.)
I just re-read that and realize it is irrelevant to the discussion at hand. The filter allows for false negatives, so "not being able to rule it out" doesn't matter; we want to know if we can rule it in.

Atom
June 7, 2007 at 7:58 AM PDT
Ok, my only response would be this: 1) Yes, I am painting the target after the fact. That was admitted up front, so I don't see how it solves the issue (unless the act of my pre-specifying actually changes what can happen). But the sequence already exists, independently, prior to my flipping of the coins: if you arrange every 500-bit sequence from 000...000 to 111...111 you'll find it in there, and you'll also find the two categories "Is Sequence A" and "Not Is Sequence A" are already automatically defined as soon as Sequence A exists. 2) Since Sequence A is just a 500-digit binary number, we know it exists independently of my event and has always been a member of a relatively small subset. So the independence criterion is met...unless we change it to mean "specified by an intelligence beforehand" (which we can never rule out, since we don't know what every intelligence has specified.) 3) Algorithmic complexity does play a role, but I always assumed it was because algorithmically compressible strings form a tiny subset of all possible strings. I thought it was this being part of a relatively tiny set that made them special. We could just say "Well, intelligences are the only causes for algorithmically compressible contingent complex events" and not seek a further justification for why this is so in probability theory, which would make CSI a merely empirical observation. But if we want to root it in an objective mathematical basis, it seems the independently existing, unlikely, relatively tiny subset member Sequence A (or any such sequence) would become a problem. True, I have not defined what Sequence A is, but that is because it can be any sequence, and the problem would still exist.

Atom
June 7, 2007 at 7:46 AM PDT
Atom & Jared: Thanks. The Fisher vs Bayes article and the other chapters are well worth the read. Maybe I should highlight my own three-sentencer from above:
The issue is, what are the relevant probabilistic resources. When an event [which is independently and simply describable, i.e. specified] falls sufficiently low relative to those resources and a "reasonable threshold," [i.e. it is complex] it is rational to reject the null — chance — hypothesis. In cases where contingency dominates, the alternative to chance is agency, i.e. once we see contingency [outcomes could easily have been different], we are in a domain where one of two alternatives dominates, so to reasonably eliminate the one leads to a rational inference to the other as the best current explanation.
Okay GEM of TKI

kairosfocus
June 7, 2007 at 1:20 AM PDT
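The three-sentence summary above amounts to a decision rule: independent specification plus improbability beyond the available probabilistic resources. The following is a bare-bones editorial sketch of that rule, not Dembski's actual explanatory filter (Python assumed; "specified" is treated as a given boolean, and 10^150 is the bound the thread itself cites):

```python
# Bare-bones editorial sketch of the decision rule summarized above; a caricature
# for illustration, not Dembski's filter. 1e150 is the probabilistic-resources
# scale used in the thread.
UPB = 1e150

def best_explanation(p_event, specified, resources=UPB):
    """Chance stays the default unless the event is independently specified
    AND lies beyond the available probabilistic resources."""
    if not specified:
        return "chance (no independent specification)"
    if p_event * resources >= 1:
        return "chance (within probabilistic resources)"
    return "agency (specified and beyond probabilistic resources)"

print(best_explanation(2**-500, specified=False))  # post-hoc singleton: chance
print(best_explanation(2**-500, specified=True))   # independently specified 500-bit target: agency
print(best_explanation(2**-20, specified=True))    # easy target: chance
```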
Thanks jaredl and GEM. I'll digest these today, and probably respond tomorrow.

Atom
June 6, 2007 at 7:45 AM PDT
I just want a two or three sentence answer as to why low probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen. Low probabilities matter when the outcome is algorithmically compressible. Algorithmically compressible (easily describable) outcomes with low probability relative to any known chance hypothesis relevant to the production of the event have uniformly been the products of intelligent agency where the causal history has been fully known. Therefore, confronted with an example of a low-probability event which happens also to be algorithmically compressible, one has epistemic warrant for inferring that intelligent agency, rather than chance alone, produced the event. Or, if you want, read a much longer version of these three sentences here.

jaredl
June 6, 2007 at 4:45 AM PDT
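jaredl's compressibility criterion can be illustrated, very roughly, with off-the-shelf compression plus a counting bound. Editorial sketch only (Python assumed; zlib is a crude stand-in for algorithmic compressibility, which is not computable in general):

```python
# Editorial illustration of "algorithmically compressible" outcomes as a tiny,
# independently describable subset of all strings.
import random
import zlib

n = 4000
structured = b"HT" * (n // 2)                               # describable as "repeat HT 2000 times"
random_hts = bytes(random.choice(b"HT") for _ in range(n))  # a typical chance outcome

for name, s in (("structured", structured), ("random", random_hts)):
    print(f"{name:>10}: {len(s)} bytes -> {len(zlib.compress(s, 9))} bytes compressed")
# The random H/T string still compresses to roughly one bit per flip (its entropy);
# the structured one collapses to a few dozen bytes.

# Counting bound: strings with descriptions of at most k bits number at most 2^(k+1) - 1,
# so at most ~2^-(n-k-1) of all n-bit strings can be compressed to k bits or fewer.
n_bits, k = 500, 100
print(f"fraction of {n_bits}-bit strings compressible to <= {k} bits < 2^-{n_bits - k - 1}")
```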
PS: Here is my online discussion briefing note on selective hyperskepticism, a descriptive term for a fallacy of skepticism that Simon Greenleaf highlighted well over 100 years ago.

kairosfocus
June 6, 2007 at 4:00 AM PDT
Atom: A few follow up points: 1] we flip a coin 500 times. It makes a random sequence (Sequence A), that we did not specify beforehand. There are two sets that exist: “Is Sequence A” and “Not Is Sequence A”, with the former having one member, the latter having (10^150) - 1 members. Looking at the math, it is more likely that we would have hit a member of “Not Is Sequence A” by chance than hitting a member of “Is Sequence A”. What has happened here is that first, you are in effect painting the target around where you hit, ex post facto. That is, there is an internal dependence. The probability of an outcome given an observation of that outcome is a function of the reliability of the observation, not the process that may have generated that outcome. In this case, practically certain. That is very different from the probability of getting to any sequence at random in the set of outcomes for 500 coins tossed. Again, if any sequence will do, the a target is ~ 10^150 outcomes wide and the probability is in effect 1. [The world could come to an end suddenly before the coins settle . . .] Before the toss, the odds of any one outcome are ~ 1 in 10^150. After the toss, the odds of an observed outcome being what it is are ~1. But now, if the outcome is INDEPENDENTLY specified and rare, i.e functional in some way independent of making an observation of whatever string of H's and T's comes up AND hard to get to by chance, then we have a very different target. And therein lieth all the significance of SPECIFIED and FUNCTIONALLY SPECIFIED. That is, the chance-null-hyp elimination filter [you have a real pair of alternatives: chance and agency] is based on TWO tests, not just one -- the outcome is significant and specific, and rare enough in the probability space that it is hard to get to by chance relative to the available resources. 2] we still hit a member of “Is Sequence A”, regardless of that low-probability. The probabalistic resources are the Universal Probability Bound resources, namely every chance in the universe, for all time. Even with those resources we wouldn’t expect to have hit our sequence This underscores the problem just highlighted. You have a target made up after the fact, where the circumstances have changed and depending on how the circumstances have changed. Step 1, we have a set of coins. We toss. Step 2, we see what the outcome happens to be and say, aha, it is 1 in 10^150 or so that we get there to this specific sequence. Step 3, but that's amazing, we have no reason to think we would get to this particular sequence! Now, of course, what has happened is that we have a target set of ~ 10^150. Any outcome will do. We do the experiment of tossing,a nd a particular outcome occurs. But to do that, we have moved the universe along, and here, to a new state in which we have a particular outcome of a coin toss, whatever arbitrary member of the set we happen to observe. What is the probability of getting that outcome in the after-the toss state of the universe? Pretty nearly 1 if observation is reliable. In short you are not comparing likes with likes. If you had predicted that we would get a sequence of 500 H-T choices that spells out in Ascii the opening words of the US Declaration of independence, tossed and got them, that would be a very different ballgame, but that is not what you have done. 3] we still got a member of the “Is Sequence A” set, regardless of the low probabilities Sequence A asa specific string of H's and T's did not come into existence until after the toss. 
If we go back in time to the point before the toss, and define Sequence A as "any one particular outcome of H's and T's, X1, X2, . . . X500"; then, that has probability ~ 1. We toss, and since the universe did not wink out in the meantime, we see a particular sequence, lo and behold. Now we can fill in the blanks of 500 X's, to get say httthhthttttthhhh . . . or whatever. 4] What troubles me is that CSI arguments appear to boil down to the following basic form: "Low-probability events that are part of a relatively minute subset (specified/functional states) of all possible events (all states) will not be expected to occur by chance." (S2) What happens is that you seem to be missing the point that the set in the target zone of functionally specified outcomes is set independently of the outcome of the coin tosses in prospect. And, the scope is such that with all the atoms of the observed universe serving as coins, and the succession of quantum states serving as tosses, not all the quantum states in the observed universe across its lifetime suffice to give enough tosses to make it likely in aggregate to access the functional, integrated, multiple-component states of interest by chance-driven processes. But, by contrast, agents routinely produce such systems by intent and skill. That is, we have a known cause of such FSCI, vs a case where chance trial-and-error based searches fail [for want of functional intermediates sufficiently close in the configurational space: the first functional state is isolated, and the others are too far apart to step from one to the other by chance] to get to the functional states from an arbitrary start-point. 5] Either we can rule chance out (as in S2), or we cannot. If chance is a viable option, regardless of low probabilities or probabilistic resources (as in S1), then we cannot rule out chance explanations. But we do rule them out, both in practice and in statistics. Furthermore, doing this WORKS. Here, we see the point that there is a difference between adequacy of warrant and proof beyond rational dispute. Of course, once we have a config space, it defines a set of possibilities that can in principle be accessed by chance-driven processes. But -- and here Fisher et al captured common sense and gave it structure -- there is a point where [risk of error notwithstanding] it is reasonable to infer that an event was caused by intent not chance, with a certain level of confidence that can often be quantified. This is, as you note, a common technique of scientific work and statistical inference testing. To reject it in a case where it is not convenient for worldview or agenda to accept what the inferences say, while accepting it where it suits, is of course self-servingly irrational. You can get away with it if you have institutional power, but that does not make it a reasonable thing to do. Hence, my remarks on selective hyperskepticism. GEM of TKI

kairosfocus
June 6, 2007 at 3:53 AM PDT
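The before/after-the-toss distinction in the comment above is easy to check numerically. Editorial sketch only (Python assumed; N is reduced to 16 bits so a pre-specified target is actually reachable in a feasible number of trials, unlike the 500-bit case being discussed):

```python
# Editorial simulation of the "before vs. after the toss" point above.
import random

N, TRIALS = 16, 200_000
pre_specified = tuple(random.randrange(2) for _ in range(N))  # fixed before any "tosses"

post_hoc_hits = 0
pre_spec_hits = 0
for _ in range(TRIALS):
    toss = tuple(random.randrange(2) for _ in range(N))
    target = toss                      # "paint the target around the outcome"
    post_hoc_hits += (toss == target)  # matches by construction, every time
    pre_spec_hits += (toss == pre_specified)

print(f"post-hoc target hit rate:      {post_hoc_hits / TRIALS:.4f}")   # 1.0000
print(f"pre-specified target hit rate: {pre_spec_hits / TRIALS:.6f}")   # ~0.000015
print(f"expected for pre-specified:    {2**-N:.6f}")                    # 2^-16 ~ 0.000015
```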
Thanks GEM, I agree that UD is nice for the fact that we can discuss matters with those who are also fellow IDers that happen to have questions in a calm forum. The first part of your post was a great overview of the argument for design. I appreciated your always linked essay as well, especially the micro-jet thought experiment. I thought you hit the nail on the head there. But my issue asks a more fundamental question, I think. First, I do agree that FSCI has never been shown to be the result of anything other than agency. I also agree that from the data, agency is the best explanation. (We can agree to those points and be finished with them.) Ok, now we get to the last part of your post:
The issue is, what are the relevant probabilistic resources. When an event falls sufficiently low relative to those resources and a "reasonable threshold," it is rational to reject the null — chance — hypothesis.
Take my initial example again: we flip a coin 500 times. It makes a random sequence (Sequence A) that we did not specify beforehand. There are two sets that exist: "Is Sequence A" and "Not Is Sequence A", with the former having one member, the latter having (10^150) - 1 members. Looking at the math, it is more likely that we would have hit a member of "Not Is Sequence A" by chance than hitting a member of "Is Sequence A". But we still hit a member of "Is Sequence A", regardless of that low probability. The probabilistic resources are the Universal Probability Bound resources, namely every chance in the universe, for all time. Even with those resources we wouldn't expect to have hit our sequence. (If it isn't unlikely enough for you, just flip the coin 1000 times instead, thereby clearing this hurdle by a good margin). But we still got a member of the "Is Sequence A" set, regardless of the low probabilities. Therefore, low-probability events that are part of a relatively minute subset of all possible events can still happen by chance. (S1) What troubles me is that CSI arguments appear to boil down to the following basic form: "Low-probability events that are part of a relatively minute subset (specified/functional states) of all possible events (all states) will not be expected to occur by chance." (S2) Do you see the problem? Either we can rule chance out (as in S2), or we cannot. If chance is a viable option, regardless of low probabilities or probabilistic resources (as in S1), then we cannot rule out chance explanations. But we do rule them out, both in practice and in statistics. Furthermore, doing this WORKS. But it still seems like a problem to me to do so, without justification, since we can demonstrate S1 true.

Atom
June 5, 2007 at 8:15 AM PDT
Hi again Atom: One of the healthy signs on the quality of UD is the fact that one is as likely to be exchanging with a person on the same side as one on the other side of the major dispute. In short we are looking at people who are not playing party-liner games. Okay, on the key points: 1] RM + NS On long and sometimes painful experience, I have learned that it is wise to deal with likely side issues if one is to responsibly address a matter. [Sorry if that suggested that you were likely to make that argument -- I had lurkers in mind.] 2] it is unlikely to randomly find a functional state given the scarceness of functional states vs. non-functional ones. To me, this appears to be one super probability argument. I am actually adverting to the classic hypothesis-testing stratefy, commonly used in inferential statistics: --> We have a result that is credibly contingent, so it is not the deterministic product of specified dynamics and initial conditions, with perhaps a bit of noise affecting the system trajectory across the space-time-energy etc domain. --> Contingent outcomes are on a general observation, produced by chance and/or agency. [Think of the law of falling objects, then make the object a die: the uppermost face is a matter of chance, unless and agent has loaded or at least incompetently made the die.] --> The null hypothesis, is that it is chance produced, or at least chance dominated. Thence, we look at where the outcome comes out relative to the probabilities across the relevant space of possible outcomes and appropriate models for probabilities of particular outcomes. (That may be a flat distribution if we have maximal ignorance, or it may be any one of the bell or distorted bell shaped distributions, or a trailing-off "reverse J" distribution, or a rising S distribution [sigmoid] or a U-distribution, etc.) --> In the relevant cases, we are usually looking at clusters of microstates that form a set of observationally distinguishable macrostates. Typically, there is a predominant cluster, which defines the most likely -- overwhelmingly so -- probable outcome. [Look at the microjet thought experiment I made in my always linked: the diffused, scattered at random state has overwhelmingly more accessible microstates, than a clumped at random state, and that in turn than a functionally specified configured flyable micro-jet. Available random forces and energy are therefore maximally unlikely to undo the diffusion, much less configure a functional system.] --> Such is of course physically possible and logically possible, but so overwhelmingly improbable that unless one can rule out agency by separate means, on observing such an improbable outcome, the null hypothesis is rejected with a very high degree of confidence. [Here, we are using 1 in 10^150 as a reasonable threshold on the probabilistic resources of the observed universe.] --> The alternative hypothesis, is agent action. We know, even trivially, that agents routinely generate FSCI beyond the Dembski type probabilistic bound [which BTW is far more stringent than the 1 in 10^40 or 50 etc often used in lab scale thermodynamics reasoning]. Indeed, there are no known exceptions to the observation that when we see directly the causal account for a case of FSCI, it is the product of agency. In short, it is not only possible but likely for agents to produce FSCI. 
(Observe, too, how probabilistic resources come into play: if we are able to generate sufficient numbers of runs in a sufficiently large contingency space, what is improbable on one run becomes more probable on a great many runs. As a trivial example, condoms are said to be about 90% likely to work. So, if we use condoms in high-risk environments 10 times, the chances of being protected "every time" fall at the rate 0.9^n ~ 35% for ten tries. In short, exposures can overwhelm protection. But when the number of quantum states in the observed universe across its lifespan is not sufficient to lower the odds-against reasonably, that is a different ballgame entirely.) --> So, we infer that FSCI in the relevant cases beyond the Dembski bound is credibly the product of agency. --> So compelling is this case, that objection to it is by: [a] trying to speculate on a quasi-infinite, quasi-eternal wider unobserved universe, and/or [b] ruling that on methodological naturalistic grounds, only entities permissible in evolutionary materialist accounts of the cosmos may be permitted in "science." --> The first is an open resort to speculative metaphysics, often mislabelled as "science," and refusing to entertain the point that once we are in metaphysics, all live options are permitted at the table of comparative difficulties. Once Agents capable of creating life and/or cosmi such as we observe are admitted to the table, the explanation by inference to agency soon shows vast superiority. --> The second also fails: it is a question-begging, historically inaccurate attempted redefinition of Science. [The consequences, as with the sad case of Mr Gonzalez, are plain for all to see.] --> Now, of course, such reasoning is provisional: we could incorrectly reject or accept the null hyp, and must be open to change in light of new evidence. [So much for the idea that the inference to design is a "science stopper" . . .] That is a characteristic property of science, and indeed, knowledge claims anchored to the messy world of observed fact in general: moral, not demonstrative [mathematically provable], credibility of claims. 3] I just want a two or three sentence answer as to why low probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen. The issue is, what are the relevant probabilistic resources. When an event falls sufficiently low relative to those resources and a "reasonable threshold," it is rational to reject the null -- chance -- hypothesis. In cases where contingency dominates, the alternative to chance is agency, i.e. once we see contingency [outcomes could easily have been different], we are in a domain where one of two alternatives dominates, so to reasonably eliminate the one leads to a rational inference to the other as the best current explanation. This inference is, of course, as just noted, provisional. But, since when is that peculiar to the inference to design as opposed to chance as the cause of FSCI beyond the Dembski-type bound? [So, to make a special-pleading objection to this case, is to be selectively hyperskeptical, relative to a lot of science and statistics!] GEM of TKI

kairosfocus
June 5, 2007 at 2:50 AM PDT
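The arithmetic behind the "probabilistic resources" remarks above is straightforward to verify. Editorial sketch (Python assumed; the 90% figure, the 0.9^10 result and the ~10^150 bound come from the thread, while the 1-in-a-million/10-million-trials pair is an arbitrary illustration):

```python
# Ten independent exposures, each with 90%-reliable protection:
print(f"P(protected every time, 10 exposures) = {0.9 ** 10:.3f}")      # ~0.349

# Many trials shorten the odds against a modestly improbable outcome:
p, trials = 1e-6, 10_000_000
print(f"P(at least one hit) = {1 - (1 - p) ** trials:.5f}")            # ~0.99995

# But against ~10^150 opportunities (the universal-probability-bound scale used above),
# a 500-bit target is only borderline reachable, and a 1000-bit target is hopeless:
print(f"expected hits, 500-bit target:  {2.0**-500  * 1e150:.3f}")     # ~0.305
print(f"expected hits, 1000-bit target: {2.0**-1000 * 1e150:.1e}")     # ~9.3e-152
```

The last two lines also show why 500 bits sits right at the boundary of the bound, which is why Atom's "flip the coin 1000 times instead" clears the hurdle by a wide margin.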
Yeah. I'm not arguing for RM+NS; this is one of those things I view as an irrelevant distraction from my question. (I can understand the motivation for wanting to clarify that, but I am not arguing for Darwinian assumptions, so I'd rather not bring them up if possible.) I am simply asking a question about low probability. What can the low probability of an event, by itself, tell us about the chance occurrence of that event? Not much, you might say. Which is fine. But then we use a probability argument for 2LoT type of arguments and for CSI arguments: it is unlikely to randomly find a functional state given the scarceness of functional states vs. non-functional ones. To me, this appears to be one super probability argument. I just want a two or three sentence answer as to why low probabilities matter in some cases (can be used to reject hypotheses), but in other cases we see even more unlikely events happen. It isn't to be argumentative; it is to strengthen my own pro-ID argument.

Atom
June 4, 2007 at 2:38 PM PDT
Atom: A third point, and a bit of a PS: 3] Likelihoods . . . Note that the issue of probabilities here is that relative to a chance-dominated process. That is, we are looking at, in effect, likelihoods relative to different causal factors. Relative to a chance-driven (or at least dominated) model, what is the likelihood of A? If it is below 1 in 10^150 or so, we have excellent reason to reject the chance-dominated model. But, now too, since we know that FSCI is routinely generated by agents -- indeed is typically regarded as a signature of agency -- then the likelihood of such FSCI on the model of agent action is much higher. So, absent interfering worldview assumptions and assertions, the reasonable inference is that we don't get THAT lucky. Thus, the decision to reject the null [chance driven or dominated] model. 4] But Natural Selection is not a chance process . . . This is a notorious and fallacious objection. First, differential reproductive success does not apply to the prebiotic world. Second, here is Wiki -- calling a hostile witness -- on Natural Selection:
Natural selection acts on the phenotype, or the observable characteristics of an organism, such that individuals with favorable phenotypes are more likely to survive and reproduce than those with less favorable phenotypes. If these phenotypes have a genetic basis, then the genotype associated with the favorable phenotype will increase in frequency in the next generation. Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in the emergence of new species.
The "more likely to" reveals the chance-driven nature of the NS half of RM + NS, the first half being by definition chance based. Of course the "can" and "may" are intended by Wiki in a different sense than is properly empirically warranted, i.e destruction or elimination of antecedent information leads to specialisation and reproductive isolation. Information loss is warranted, information creation on the scale of say 250 - 500 base pairs [or at least 1 in 10^150 . . .] is not. GEM of TKIkairosfocus
June 4, 2007 at 2:17 PM PDT
Hi Atom: I will be brief, being on my way out the door. We should distinguish two different things: 1] Probability of occurrence of an event, given observation. This has to do with the reliability of our observations, which generally speaking is not 100%. 2] The inference to the cause of an event, given its observation. Having factored in 1, we now look at the issue of where did something come from. Life is observed, pretty nearly certainly, indeed, with practical certainty. It has a certain nanotechnology. Where did it come from? Well, we have three potential suspects: chance and/or law-like natural forces and/or agency. Law? These do not produce contingency, so they do not dominate but of course may be involved in the mechanisms. Chance? When accessing a given macrostate is maximally unlikely, it is not credible, though not absolutely or logically or physically impossible, for the event to come about by chance. Agency? We know agents exist and routinely generate events that exhibit FSCI. So confident are we that such does not, to nearly zero probability, happen by chance, that it is the essence of the 2 LOT, i.e. we move from lower to higher thermodynamic probability macrostates as the overwhelming trend. This is backed up by massive observation. So, on inference to best explanation anchored by empirical observation, agency is the better explanation. And, those who reject it in the relevant cases, as we just saw on the alleged reasons for denying tenure to Mr Gonzalez, do so by smuggling in inappropriate worldview considerations that lead them to selective hyperskepticism and thus begging the worldviews-level question. Hope that brief note helps. GEM of TKI

kairosfocus
June 4, 2007 at 1:04 PM PDT
Hey GEM, thanks for the reply. Let me try to draw this out a bit further; we'll see where it goes.
Your sequence A is, on a before the fact basis, a target, with odds of ~ 1 in 10^150, i.e it is almost impossible to hit by chance relative to the available opportunities in the observed universe.
The target is either close to impossible to hit (we probably will not see it in the lifetime of the universe; the low probability hinders it from occurring) or not (we will see it, regardless of the probabilities involved.) Without specifying it, we saw that it occurred. So it was possible to hit, regardless of any probability involved. The specific probability of that sequence (Sequence A) occurring was still 1 in 10^150; there were still two categories ("Is A", "Not Is A", with the second category swamping the number of compatible states in the first), and yet this low probability did not hinder that outcome from occurring. The probability was the same in both cases. Then in the second case, we specify it beforehand, and now we will not expect it to occur. Why not? The probabilities in both cases are the same, whether we were aware of them or not. In one case, the low-probability event can occur (it did, we witnessed it) but in the second we say that it probably will not. Someone will say "Why not? Low probability? Two categories, with associated macro-states?" knowing that these were the same mathematically in both cases. The only difference was that we subjectively "selected" the sequence in the second case...but this shouldn't affect the underlying mathematics or outcomes (unless we're getting into QM type of observer effects...) Anyway, I don't rule agency out (obviously). I just want to be able to say "This did not occur by chance due to low probability and category (microstate-macrostate) structure." But I can't as long as a counter-example matching those criteria is evident (as in my example.)

Atom
June 4, 2007 at 7:52 AM PDT