Uncommon Descent: Serving the Intelligent Design Community

Oh, you mean, there really is a bias in academe against common sense and rational thought?


Jonathan Haidt recently decided, for some reason, to point out the obvious to a group of American academics: that they are overwhelmingly modern materialist statists (liberals).

He polled his audience at the San Antonio Convention Center, starting by asking how many considered themselves politically liberal. A sea of hands appeared, and Dr. Haidt estimated that liberals made up 80 percent of the 1,000 psychologists in the ballroom. When he asked for centrists and libertarians, he spotted fewer than three dozen hands. And then, when he asked for conservatives, he counted a grand total of three.

“This is a statistically impossible lack of diversity,” Dr. Haidt concluded, noting polls showing that 40 percent of Americans are conservative and 20 percent are liberal. In his speech and in an interview, Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.
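For a sense of what "statistically impossible" means here: if the room were a random draw from a population that is 40 percent conservative (a simplification, since academics are not a random sample of Americans), the chance of seeing three or fewer conservatives among 1,000 is unimaginably small. A quick back-of-the-envelope check in Python, illustrative only:

    import math

    def log10_binom_pmf(k: int, n: int, p: float) -> float:
        """Base-10 log of P(X = k) for X ~ Binomial(n, p)."""
        log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        return (log_choose + k * math.log(p) + (n - k) * math.log(1 - p)) / math.log(10)

    n, p = 1000, 0.40  # room size; national share of conservatives per the talk
    for k in range(4):
        print(f"P(exactly {k} conservatives) ~ 10^{log10_binom_pmf(k, n, p):.0f}")
    # The largest of these terms is on the order of 10^-214, so even the
    # cumulative P(X <= 3) is nowhere near anything a random draw could produce.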

Why anyone would bother pointing that out, I don’t know. It’s not a bias against conservatives, anyway; it’s a bias against rationality, which they don’t believe in. Our brains, remember, are shaped for fitness, not for truth. Indeed, these are the very people who channel Barney Rubble and Fred Flintstone for insights into human psychology, and anyone who doubts the validity of such “research” should just shut up and pay their taxes, right?

Well, his talk attracted the attention of John Tierney at the New York Times (February 7, 2011), who drew exactly the right conclusion (for modern statists and Darwinists):

“If a group circles around sacred values, they will evolve into a tribal-moral community,” he said. “They’ll embrace science whenever it supports their sacred values, but they’ll ditch it or distort it as soon as it threatens a sacred value.” It’s easy for social scientists to observe this process in other communities, like the fundamentalist Christians who embrace “intelligent design” while rejecting Darwinism.

[ … ]

For a tribal-moral community, the social psychologists in Dr. Haidt’s audience seemed refreshingly receptive to his argument. Some said he overstated how liberal the field is, but many agreed it should welcome more ideological diversity. A few even endorsed his call for a new affirmative-action goal: a membership that’s 10 percent conservative by 2020. The society’s executive committee didn’t endorse Dr. Haidt’s numerical goal, but it did vote to put a statement on the group’s home page welcoming psychologists with “diverse perspectives.” It also made a change on the “Diversity Initiatives” page — a two-letter correction of what it called a grammatical glitch, although others might see it as more of a Freudian slip.

I have friends here in Canada who make bets on when the Times will finally, mercifully shut down.

Meanwhile, Megan McArdle weighs in at The Atlantic, driving home the shame:

It is just my impression, but I think what conservatives want most of all is simply recognition that they are being shut out. It is a double indignity to be discriminated against, and then be told unctuously that your group’s underrepresentation is proof that almost none of you are as good as “us”. Haidt notes that his correspondence with conservative students, quoted anonymously, “reminded him of closeted gay students in the 1980s”:

He quoted — anonymously — from their e-mails describing how they hid their feelings when colleagues made political small talk and jokes predicated on the assumption that everyone was a liberal. “I consider myself very middle-of-the-road politically: a social liberal but fiscal conservative. Nonetheless, I avoid the topic of politics around work,” one student wrote. “Given what I’ve read of the literature, I am certain any research I conducted in political psychology would provide contrary findings and, therefore, go unpublished. Although I think I could make a substantial contribution to the knowledge base, and would be excited to do so, I will not.”
Beyond that, mostly they would like academics to be conscious of the bias, and try to counter it where possible. As the quote above suggests, this isn’t just for the benefit of conservatives, either.

All together now, class, spell W-I-M-P.

Someone else writes:

I have a good friend–I won’t out him here, though–who is a tenured faculty member in a premier humanities department at a leading east coast university, and he’s . . . a conservative! How did he slip by the PC police? Simple: he kept his head down in graduate school and as a junior faculty member, practicing self-censorship and publishing boring journal articles that said little or nothing. When he finally came up for tenure review, he told his closest friend on the faculty, sotto voce, that “Actually I’m a Republican.” His faculty friend, similarly sotto voce, said, “Really? I’m a Republican, too!”

That’s the scandalous state of things in American universities today. Here and there–Hillsdale College, George Mason Law School, and Ashland University come to mind–the administration is able to hire first-rate conservative scholars at below-market rates because they are actively discriminated against at probably 90 percent of American colleges and universities. Other universities will tolerate a token conservative, but having a second conservative in a department is beyond the pale.

All together now, class, spell the plural, W-I-M-P-S.

Oh, heck, let me be honest rather than snarky: nothing stops the Yanks from freeing themselves from this garbage unless my British mentor is right (and I hope he isn’t) that Americans are happy to be serfs, but don’t like being portrayed in the media as hillbillies.

So whenever the zeroes they all gladly pay taxes for threaten to do just that, they promptly cave.

If I die tonight, I want this on the record: If I couldn’t be a Canuck and managed to bear the unbearable sorrow, I’d be a true Yankee hillbilly and proud of it. Do you think we Canucks have so far stood off the Sharia lawfare crowd, with all their money and threats, by worrying much what smarmy (and sometimes vicious) tax burdens think?

Comments
Dala, pardon, I see now. G
-- kairosfocus, February 11, 2011, 06:04 AM PDT

Dr Bot: The point of FUNCTIONAL specificity is, first, to observe function. If there is no function, the criterion simply does not apply. Design thinkers will happily look at a complex item for which there are no grounds for assigning functional specificity and say: no grounds for inferring specificity on function, so, bye bye. As it turns out, there are a great many cases where we do observe function and specificity that is information-rich. So, when I look at your:
it may be the case that FCSI is contingent on sub-units that exhibit complexity or regularity . . . . The functional snowflake is contingent on a general crystalline structure (regularity) but in a specific configuration. The general structure can result from chance and necessity, so the specific configuration is contingent on the general structure. If your experiment designs out the ability to generate any snowflakes, you have also designed out any way of it generating a functional snowflake.
. . . this is irrelevant. Any material entity will exhibit behaviour constrained by law [made of atoms, which behave in a lawlike way], and most will also show some stochastic patterns, even something so simple as surface roughness. The issue is: is there an aspect of function that is crucially dependent on specific information, and in turn requires 1,000+ yes/no decisions to specify it? For instance, in the first part of the snowflake cam machine thought exercise, there was a specific dependency. So we have a right to infer that the cams in question, though of an unusual material, were designed. (The machine as a whole would, by obvious manifest signs of design, be just that.)

You will further observe that I specifically took note of the crystalline structure. [Which, I note BTW, extends to, say, steel -- do you want to imply seriously that a steel gear, cam or car part, because it is made of a crystalline material, cannot be inferred to be designed unless the LOGICAL POSSIBILITY of chance is ruled out ahead of time? I again invite you to look at the Abel plausibility bound and the basic premise of the second law of thermodynamics, statistical form. Otherwise you are inferring that the watch on your arm cannot be inferred to be designed unless the logical possibility that a volcano spurted it out is ruled out absolutely. We are here dealing with reasonable, empirically well-founded warrant, not answering to every twist and turn of selective hyperskepticism.]

In a dendritic snowflake, the crystalline structure forces a hexagonal form. But the relevant aspect is that dendrites can grow on the branches in many ways, and we here imagine that someone is able to control that precisely enough to make a cam out of a snowflake. Such a pattern can then in principle be made use of in a cam. And if we see a machine that uses a snowflake as a cam, where the precise shape is specific to function, i.e. function depends strongly on the precision of the shape (just try playing around with the shape of the gearing in your watch to see what I mean), then we have good reason to infer the design of the cam.

Now the last part of the cite moves to a classic red herring. The test exercise as discussed is designed to test the specific aspect of phenomena where we ALREADY see FSCI, and we must ask: where can such FSCI, per empirically observable tests, come from? We already know from the very fact of a config space that any config is possible in principle. But the material issue is whether the typical configs accessible on chance can reasonably get you to an island of function on essentially random-walk trial and error from an arbitrary start point. The answer of the empirics, as well as the analysis of such random walks in such a space, is that there is no basis for searching enough out of 10^300+ configs to have any confidence that we can catch a functional config. This grounds the inference that since we routinely see FSCI coming from intelligence and only from such, its presence is a good sign of design as cause.

The fact that you have joined Dala in resorting to bare logical possibility is telling. GEM of TKI
-- kairosfocus, February 11, 2011, 06:03 AM PDT

Another example just occurred to me. Apply your experiment to Mt Rushmore: could the action of wind and water (and gravity and temperature fluctuations) produce erosion patterns like the presidential faces? Obviously not! But if you want to design an experiment to demonstrate this, then it is no good if your experiment doesn't allow ANY erosion patterns AT ALL.

Hypothesis: Erosion can't produce the faces at Mt Rushmore.
Experiment: A system that does not model erosion.
Observation: Faces do not appear.

Are you getting any closer to understanding this important point?
-- DrBot, February 11, 2011, 06:00 AM PDT

kairosfocus: ". . . Then, think about how your resort to try to support evolutionary materialism . . ." You misunderstand me, kairosfocus. I was replying to markf's claim that evolution by random mutation and natural selection could be falsified by knowing the age of the earth. My point was that there is no way to falsify a theory which claims that something happened by a random event (or a chain of random events). Can such a theory be called scientific?
-- Dala, February 11, 2011, 05:19 AM PDT

KF, try looking at this from the top down. If you take a system with FCSI and break it apart (destroy function), you end up with a collection of non-functional bits, some of which exhibit patterns that are empirically observed to result from chance and necessity but which are non-random. In other words, it may be the case that FCSI is contingent on sub-units that exhibit complexity or regularity. Your experiment will not generate complexity or regularity, so it will not -- BY (accident of) DESIGN -- produce FCSI.

The question of whether FCSI can be generated by anything other than intelligence is not actually at issue here. And I note you continue to claim falsely that I'm making a concession when I say I believe that random systems can't generate FCSI: I never claimed that they could, so it is not a concession to say I don't believe they can. The issue is the formulation of your experiment: by creating a system that does not produce patterns observed to be the result of chance + necessity in nature, you cannot then claim that its failure to generate patterns observed in nature (but which have only been observed to result from intelligence) is proof that chance and necessity cannot generate these patterns. All you have done is specify a system that does not produce patterns observed in nature. In order for me to accept the proposed experiment as valid, you need to demonstrate that FCSI is not contingent in any way on any form of complexity or regularity observed to result from natural forces.
Now, let us imagine a system with a snowflake cam bar that is functionally specific and complex. That is, if the dendrites are wrong, even by a small amount, the system will not work.
You are inadvertently illustrating my point here. The functional snowflake is contingent on a general crystalline structure (regularity) but in a specific configuration. The general structure can result from chance and necessity, so the specific configuration is contingent on the general structure. If your experiment designs out the ability to generate any snowflakes, you have also designed out any way of it generating a functional snowflake. Ergo: if your system cannot produce patterns observed to result from C+N in nature, it cannot be used as a test to see if other patterns observed in nature are the result of either C+N or design. Far from being, as you accuse, a strawman red herring, this is a critical flaw in your proposed proof. Note that no goalposts have been moved here; my position is the same (if a little more refined) as it was at the start. Your hypothesis is fine and I do not object to it one bit; I take issue with the proposed experiment because it is flawed.
-- DrBot, February 11, 2011, 05:17 AM PDT

Dala: Re: "With 'randomness' there is no such thing as 'not enough time' . . . . There is no limit to what a random mutation could create in the next generation." Logical/physical possibility is not a reasonable criterion for an empirical claim. If that were so, the second law of thermodynamics in particular would collapse, as it rests on the statistical balance of accessible clusters of microstates, thence the likelihood that a system will move to a more or less likely cluster across time. (Cf. the discussion here, in context.) Indeed, it could be argued that ANY pattern we think was a result of mechanical necessity was simply a matter of chance to date. Your claims boil down to a rejection of scientific reasoning on observed facts. Please see the discussion of Abel's universal plausibility bound, here. Also, the discussion on the design inference explanatory filter and inference to law, chance or design on aspects of a given phenomenon, here. Then, think about how your resort to try to support evolutionary materialism reveals itself as undermining the very basis of science itself. GEM of TKI
-- kairosfocus, February 11, 2011, 03:22 AM PDT

"Is the claim that complex life arose by random mutation (and natural selection) falsifiable? (cut) . . . This is more or less Darwin's initial proposal. At least two possible observations were offered shortly afterwards that would have falsified this proposal. (a) The earth is not old enough (Lord Kelvin). However, it turned out it was old enough."

With "randomness" there is no such thing as "not enough time". In a VEEEEEERY unlikely mutation, a frog could turn into a human in the next generation. Giving yourself more time just makes things "more probable"; you have more chances to do the improbable.

"(b) Inheritance is blended not particulate. However, it turned out it was particulate."

Again, this makes no difference whatsoever. There is no limit to what a random mutation could create in the next generation.

"(2) That random variation that we observe between parents and offspring plus natural selection was responsible for all of the diversity of life we see today. This has been falsified."

That we have observed other ways that diversity arises doesn't mean that randomness didn't create the diversity we observe today. All you have falsified is the theory that only randomness produces change. And no matter how much you try, you will never be able to falsify the theory that randomness creates SOME of the variations either. Again, with randomness, it's all about what we accept as likely or unlikely. You cannot falsify it in any way.
-- Dala, February 11, 2011, 02:39 AM PDT

F/N: I must also pause to correct the attempt to undermine the design inference explanatory filter by setting up a strawman misrepresentation that incorrectly suggests that it is prone to false positives. In fact, it goes out of its way in willingness to accept false negatives, to be as sure as empirical tests can be that when it rules positively, it does so reliably.

1 --> First, Dr Rob presumably knows that ANY data structure can be converted into a suitable cluster of networked string data elements [often by using pointers as control elements to navigate around the network].

2 --> As a direct consequence, an analysis of FSCI on string data structures is without loss of generality, i.e.:

3 --> Since we can convert a given 3-D, timelined, organised, functionally integrated system into a set of structured yes/no decisions, we can apply the 1,000-bit threshold FSCI test to complex functional systems that implicitly store information in how they are organised and synchronised. (The cam bar as a programming element is a case in point, including our thought-exercise snowflake cam bar.)

4 --> With that in mind, let us look at how we would observe an organised system based on selection of key aspects, such as of course the cam bar.

5 --> Now, let us imagine a system with a snowflake cam bar that is functionally specific and complex. That is, if the dendrites are wrong, even by a small amount, the system will not work. (Let us imagine it is something like a complex cloth-weaving loom.)

6 --> So, we can map the dendrites as data storage elements and synchronise them on whatever serves as the controlling clock.

7 --> Soon, very soon, we would surpass 1,000 bits of information stored in the implicit set of yes/no decisions to shape the snowflakes so they would function correctly in the integrated system.

8 --> Let us apply the aspects-based filter, with the snowflake cam bar as the aspect of interest:
a: Is the precise, functionally specific shape explained by mechanical necessity? (No, the flake forms in a hexagonal shape as constrained by laws linked to the nature of the H2O molecule and its constituent atoms, but the dendrites have freedom to take a particular location and length.)

b: Is the shape that functions in a precisely integrated system explained by chance, i.e. freedom to take up any config in the space of configs, and simply getting this one by the luck of the draw? (No: the shape sits in an island of function that is at least 1,000 yes/no basic decisions deep, which cannot reasonably be searched out by random-walk-based searches on the gamut of the observable cosmos.)

c: Is it credibly explained by purposeful choice? (Yes, as the precision of the shape allows an integrated system to work.)
9 --> So, if we saw a loom controlled by a snowflake-based cam bar, we would be reasonably confident that the shape of the snowflakes in that case was not accidental, once we saw that we have isolated islands of function dependent on the particular shape beyond 1,000 yes/no decisions.

10 --> If we saw a similar machine where any shape of snowflake within a wide range would produce a pattern, we would not be justified to think the shape of the snowflakes was a matter of purposeful choice.

11 --> Instead, we would most likely conclude that the machine had been designed to incorporate the necessity-plus-chance elements of the snowflake's shape, presumably to create artistically unique weaves of cloth. (Notice how we have shifted the aspect we are considering here.) GEM of TKI
-- kairosfocus, February 11, 2011, 01:17 AM PDT

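The a/b/c steps of the filter above reduce to a small decision procedure. A minimal sketch in Python, with the empirical judgments abstracted as boolean inputs (the names and structure here are illustrative, not from the original comment; the actual judgments are made case by case, as in the snowflake cam example):

    from enum import Enum

    class Verdict(Enum):
        NECESSITY = "law-like regularity"
        CHANCE = "stochastic contingency"
        DESIGN = "purposeful choice"

    def explanatory_filter(low_contingency: bool,
                           beyond_1000_bits: bool,
                           functionally_specific: bool) -> Verdict:
        """Schematic of the aspects-based filter described above; it accepts
        false negatives (defaulting to CHANCE) to avoid false positives."""
        if low_contingency:                      # same outcome under same conditions
            return Verdict.NECESSITY
        if beyond_1000_bits and functionally_specific:
            return Verdict.DESIGN                # coupled complexity + specificity
        return Verdict.CHANCE                    # no positive sign of design

    # The snowflake cam bar, as judged in steps a-c:
    print(explanatory_filter(False, True, True))  # Verdict.DESIGN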
So, to complain that the test for whether FSCI would result from an infinite monkeys test is not going to replicate a stratigraphic layering pattern is to inject an utter irrelevancy, a red herring. The evidence, which you concede, is that FSCI is not a credible product of undirected chance plus necessity. A stratigraphic column, a very different thing, is a credible product of chance plus necessity, as can be empirically observed and verified. It produces complexity of particle patterns in a cementitious matrix, like the complexity of a lump of granite, and a fairly simple banding pattern of layers as variations in circumstances shift the layering. (E.g. between what look to be mud-flow layers of rounded-off rocks by the road cut by Govt HQ here, there is a 2" or so layer of what looks like a fine cement plaster, probably due to a very powerful pyroclastic flow event; the event that wiped out St Pierre, Martinique in May 1902 put just 3/4" of fine ash on the ground. Thereafter, the mud-flow aggregations continue in thicker layers of several feet each.) But there is no tight coupling between the complexity and the order, nor is the resulting pattern functionally specific. You have compared bananas with mangoes. Similarly, the same error occurs here:
Your own experiment would not produce a pattern analogous to a snowflake, but you claim that its inability to produce a pattern analogous to DNA or a cell is evidence that they are not the product of natural processes. By your own argument, as a result of your own experiment, it would appear that snowflakes are the product of design and cannot be the result of chance + necessity. Why should I accept the result of an experiment that would consistently produce false positives when evaluated with known empirical test cases?
1 --> The exercise of trying to generate linear strings showing functional sequence complexity by chance plus necessity without choice, through an infinite monkeys exercise, is of course utterly different from the circumstances that produce a 3-dimensional, hexagon-symmetry, branching-pattern snowflake.

2 --> This is bananas and mangoes.
By way of utter contrast again, D/RNA and protein molecules are exactly string structures, produced by chaining of contingent monomers into configurations that are functional because of the complex sequence that exploits chemistry and other factors [such as 3-D shape so that there is a key-lock fitting effect for D/RNA], just as text strings in this post are functional because they are composed of special sequences of letters in a linear data array that follow certain rules of symbolism and meaning.
3 --> Just so, with D/RNA, the specific sequence of GCAT/U monomers specifies a relevant protein sequence (and carries out associated regulatory controls), based on symbolic meaning of elements in succession in a string structure.

4 --> There are cellular machines that then use this structure to assemble protein molecules step by step.

5 --> Such strings of amino acids then fold (by themselves or with the aid of more machines) into structures in certain patterns relevant to their onward function as cellular machinery, structures that are deeply isolated in the config space of such sequences.

6 --> That onward configuration is indeed based on physical-chemical forces, but those forces are being exploited based on the information coded in strings. So, the strings must be explained.

7 --> And the infinite monkeys test is a valid test for the possibility of getting to meaningful strings by chance plus necessity. Noting that the sequence on the string is on the whole NOT constrained by chemistry, i.e. the D/RNA sequence is not driven by bonding or related forces. Any one of the GCAT/U can be succeeded by any other of the GCAT/U in the same string.

8 --> In the case of DNA, the two inter-wound strings in the helix are key-lock complementary, but that is from one string to the corresponding member of the other string, not in the sequence of the one string. (The key-lock complementarity is in fact the basis for the information storage, similar to how the ridges and valleys in a Yale-type key and lock store functional information, or how cams on a cam bar store functional information, as was used in many mechanical systems as a sort of usually analogue programming.)

9 --> So, the infinite monkeys test is precisely relevant to the material question of generating specifically functional, complex, information-bearing data strings by chance and necessity without choice as a material factor.

10 --> The tests show, as noted, that what is of order 10^50 configs is searchable within relevant resources, but what is of order 10^300 or more is a very different matter.

11 --> This evidence from tests (which, had the threshold been at the old level from statistical thermodynamics in the days before modern computers, about 10^50, WOULD HAVE BEEN FAILED) further supports the conclusion that when we test for the origin of functionally specific strings by chance and necessity, we are doing a relevant test.

12 --> Irrelevant cases that are not about FSCI and would not create functionally specific complex organisation embedding implicit FSCI (a complex cam bar would be a good case in point, and in principle we could make such a cam bar from a controlled set of snowflakes!) are irrelevant.

13 --> But let us build on the snowflake cam bar example as a further thought exercise. Here, we imagine a bar of cams that in sequence would control complex machinery to carry out a co-ordinated task, based on a follower that then transfers the stored information to the mechanism. Unless the shape of the elements is precisely controlled and synchronised, such a unit will fail in operation. (And of course the 1,000 basic yes/no decisions threshold to get to the functional system will be passed rather quickly in this case.)

14 --> So, the claim you made is based on a red herring led out to a strawman, by way of moving goalposts. ______________ GEM of TKI
-- kairosfocus, February 11, 2011, 12:38 AM PDT

Dr Bot: I follow up overnight.

First, MF's initial challenge has been that the core assertions and claims of design theory are untestable. In response, I set up the construct FSCO/I, especially in the form dFSCI. If it can be shown that -- in any reasonable situation, under any reasonable circumstances traceable to chance and necessity without intervention of purposeful choice, on the gamut of our observed cosmos -- FSCO/I spontaneously originates, then design theory is falsified. Why is that? Fundamentally, because design theory is a theory of the origin of specifically functional, complex organisation and associated information. So, if this sort of thing can be shown to routinely, or credibly rarely but plausibly, originate by chance contingency acting with the laws of mechanical necessity without design, the central claims of design theory would lose credibility. Complex specified information is about the sort of thing we find in FSCI (as is brought out here), and irreducible complexity is a form of FSCO, which is associated with implied FSCI, through the C1-5 factors identified and discussed here.

The infinite monkeys type test, in the form of a zener noise circuit flattened out with a PRBS counter and register circuit, is a convenient way to produce the combination of chance and necessity in action that would potentially search a config space for spontaneous FSCI. In practical tests it is shown that things in config spaces of order 10^50 or so are reachable. But things in spaces of order 10^300, on empirical and analytical grounds, are not, at least in empirical terms; logical-physical possibility is not to be conflated with empirical plausibility and credible observability. (Hence, the Abel-type universal plausibility bound.) This is an important first empirical test. It is therefore credible to conclude that where we see FSCO/I -- with the tight coupling of specificity of function and complexity, in the sense of islands of function in large config spaces set by contingencies of order 1,000 or more basic yes/no decisions -- its best current explanation, on empirical observations and associated analysis, is choice, rather than chance and/or necessity.

Now, too, we need to distinguish the constructs (i) randomness, (ii) order and (iii) organisation, as Abel and Trevors did in their discussion paper, as previously linked, on three types of sequence complexity. This reflects a key remark by J. S. Wicken in 1979 that has come up several times in the ongoing ID Foundations series here at UD: ____________________ >> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [[i.e. "simple" force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' ["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: "originally" is added to highlight that for self-replicating systems, the blueprint can be built in.
Also, since complex organisation can be analysed as a wiring network, and then reduced to a string of instructions specifying components, interfaces and connecting arcs, functionally specific complex organisation [FSCO] as discussed by Wicken is implicitly associated with functionally specific complex information [FSCI].)] >> _____________________ Similarly, and as already excerpted above, Orgel in 1973 distinguished:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added. Crystals, of course, would by extension include snow crystals, and order enfolds cases such as vortexes, up to and including hurricanes etc. ]
I have extended the point on crystals to include the complex case of the dendritic snowflake, by observing that the part that is simple, symmetric and orderly traces to forces of necessity keyed to the structure of the H2O molecule, and the complex branching structure traces to the specific happenstance of atmospheric conditions at the moment of forming, by riming etc.

Now, you have objected that something like stratigraphic layering is not explained on chance alone, but by chance and necessity in concert. Then, you have extended this to the claimed power of chemistry in the biologically relevant context. (Oddly, when I took up that context starting with Darwin's still little electrified pond full of salts etc., you have now objected that my remarks are irrelevant to your concerns. But, as TMLO chs. 7-9 discuss, that is precisely relevant to the issue of whether chemistry and associated thermodynamically driven kinetics can account for the origin of life with its complex functional organisation. Cf. my always-linked note, App. 1, here.)

Stratigraphic layering, of course, is a case of chance plus necessity in action, giving rise to a complex pattern of particles in layers, typically driven by hydrodynamic sorting and settling mechanisms, or a similar mechanism in a volcanic deposition episode, whether by pyroclastic flow or by lahar etc. -- very relevant to where I sit as I type this. This is not a case of FSCO/I; as Joseph has pointed out, there is not a tight, functional coupling between the specific configuration of elements and complexity. If the pattern of currents and suspended particles had been different for the moment, a different rock pattern would have been deposited and that would be that; any pattern would more or less do as well, within very broad limits. Stochastically dominated chance contingency, not choice, is the best explanation. And the explanatory filter's verdict would be just that. [ . . . ]
-- kairosfocus, February 11, 2011, 12:38 AM PDT

Dr Bot: Please read here and onward on three varieties of sequence complexity. GEM of TKI
-- kairosfocus, February 10, 2011, 02:35 PM PDT

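The three varieties referenced above (order, randomness, functional organisation) can be partly visualised with a compression test: order compresses heavily, randomness barely at all, and meaningful text falls in between. A small illustrative Python sketch (compressibility captures only the order-versus-randomness axis; it does not measure functional specificity, which is the further ingredient the thread is arguing over):

    import os, zlib

    wicken = (b"Organized systems must be assembled element by element according "
              b"to an external wiring diagram with a high information content, "
              b"whereas ordered systems are generated according to simple algorithms.")

    samples = {
        "ordered (crystal-like)": b"AB" * (len(wicken) // 2),
        "random (granite-like)": os.urandom(len(wicken)),
        "functional English text": wicken,
    }
    for label, data in samples.items():
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{label:24s} -> compressed to {ratio:.2f} of original size")
    # Typical run: the ordered sample shrinks to a few percent of its size,
    # the random sample does not shrink at all, and the text lands in between.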
Something like a dendritic snowflake has an ordered hex structure and a variable system of dendrites, but the complex part is chance and the ordered part is too simple and constrained to be informational.
I don't think I'm going to get anywhere with this, but here goes. Your own experiment would not produce a pattern analogous to a snowflake, but you claim that its inability to produce a pattern analogous to DNA or a cell is evidence that they are not the product of natural processes. By your own argument, as a result of your own experiment, it would appear that snowflakes are the product of design and cannot be the result of chance + necessity. Why should I accept the result of an experiment that would consistently produce false positives when evaluated with known empirical test cases? Your last two posts about OOL and FCSI are irrelevant to my point, which you continue to avoid dealing with. Your experiment would not demonstrate what you claim it can demonstrate; you need to devise a better experiment, that is all.
-- DrBot, February 10, 2011, 01:41 PM PDT

#55 Dala: Sorry, I did not address the claim that life started by a random event. This is not falsifiable and is not a scientific hypothesis. All scientific OOL hypotheses specify more detail.
-- markf, February 10, 2011, 09:57 AM PDT

#21 Dala: "Is the claim that complex life arose by random mutation (and natural selection) falsifiable? Is the claim that life started by a 'random event' falsifiable?" Yes. Differentiate two proposals:

(1) That the random variation that we observe between parents and offspring, plus natural selection, was responsible for the vast majority of the diversity of life we see today. This is more or less Darwin's initial proposal. At least two possible observations were offered shortly afterwards that would have falsified this proposal. (a) The earth is not old enough (Lord Kelvin). However, it turned out it was old enough. (b) Inheritance is blended, not particulate. However, it turned out it was particulate.

(2) That the random variation that we observe between parents and offspring, plus natural selection, was responsible for all of the diversity of life we see today. This has been falsified. Genetic drift and a small amount of Lamarckian inheritance have been shown to play a role, although it is debated to what extent.
-- markf, February 10, 2011, 09:56 AM PDT

PS: As has been pointed out from Orgel onward, ordered, repeating structures do not store significant quantities of functional information. They are simply too constrained. Something like a dendritic snowflake has an ordered hex structure and a variable system of dendrites, but the complex part is chance and the ordered part is too simple and constrained to be informational. Conceivably, we could control the micro-atmospheric conditions and store information in the dendrites, but that would be a case of design. Uncontrolled stochastic contingency is what we call chance. Choice-driven contingency is what we call design.
-- kairosfocus, February 10, 2011, 09:42 AM PDT

Dr Bot: I note that there is, above, an analysis of where the contingency comes from in systems that blend chance plus necessity. Not the necessity [which is the source of natural regularities, or the predictability and controllability of an engineered system], but the chance component.

There is a myth that through accumulating small chance steps with a filter based on necessity, one can climb the easy slope up the back end of Mt Improbable. (Indeed, GAs do searches on much smaller config spaces, where the configs have different degrees of function and we can compare with an objective function and climb to better performance.) The problem with that myth is that it starts way too late, as I pointed out already: fitness slopes with possibility of improvement are WITHIN islands of function. The root problem is to first get to the shores of an isolated island of function in a truly large config space.

On evidence relevant to the living cell, the simplest independent life forms are going to use 100,000 to 1 million bits or so worth of DNA, and they are most likely going to be at the upper end. (The organisms at the lower end are parasites of one kind or another.) Until you are at that threshold you do not have a credible metabolising entity with a self-replication facility. 1,000 bits, the FSCI cutoff, is two to three orders of magnitude below that on bit depth, and hugely below it on config space, which is an exponent of bit depth. Going beyond that, you will note that I have pointed out that to get to novel, multicellular body plans we are looking at tens to hundreds of Mbits, dozens of times over. I almost don't need to note that the window shown by the Cambrian on the usual timeline is about 10 million years.

Again, the tests are that when novel DNA is put into ova of diverse species that have sufficiently divergent body plans, development begins on the host ovum's plan, then fails when the DNA to write required proteins is missing. In short, the organisation of the host cell as a whole is a part of the relevant bio-information.

So, a test that looks for 1,000-bit chunks of functional information arising by chance, and then onward would string these together, is a reasonable test. The problem is, we know on analysis and experience that a config space of 10^300 or so cells is not sufficiently searchable on the gamut of our observable cosmos to make a difference. And the real ones credible for first life start well beyond that. If you want to start in something like a space of 10^50 configs, you need to justify that empirically in a context relevant to OOL, then OO body plans. Remember, 25 ASCII characters worth of info is what you are talking about there, equivalent to 3-4 typical-length English words. How much of an algorithm or program can you specify in that much space? What sort of wiring network could you specify in that much space? Do you see why the FSCI threshold test is a significant one? GEM of TKI
-- kairosfocus, February 10, 2011, 09:36 AM PDT

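The bit-depth arithmetic in the comment above is easy to reproduce. A quick sketch (assuming 7-bit ASCII and roughly six characters per English word, as the comment's own estimates do):

    import math

    BITS_PER_CHAR = 7  # 7-bit ASCII

    # The 1,000-bit FSCI threshold as a configuration-space size:
    print(f"2^1000 ~ 10^{1000 * math.log10(2):.0f}")  # ~10^301 configurations

    # A 10^50-config space expressed in bits, characters and words:
    bits = 50 / math.log10(2)            # ~166 bits
    chars = bits / BITS_PER_CHAR         # ~24 ASCII characters
    print(f"10^50 ~ 2^{bits:.0f} ~ {chars:.0f} chars ~ {chars / 6:.0f} words")
    # i.e. roughly the "25 ASCII characters" and "3-4 typical-length English
    # words" cited above.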
And notice I have in mind that the zener noise goes to a PRBS counter, one that uses XOR feedback links and feed forward links, etc to get a pseudo-random chain.
Point taken - I had forgotten about this extra step: you have random noise being put through a mechanism. Unfortunately it still doesn't get past the basic problem.
Likelihood of FSCI as shown practically nil.
Likelihood of ordered repeating structures like crystals and sediment, as shown, is also practically nil.
So, any test by which FSCI could potentially be generated, is a legitimate test of the design inference, through testing the infinite monkeys model.
I disagree for reasons already stated, but I'll try again from a different angle. We can invent many randomly based systems, with some mechanical necessity, that might potentially generate FCSI but with a probability so low as to say that we can rule it out as occurring reasonably by chance. Your system is one example. This in itself is not enough to demonstrate that natural forces cannot generate FCSI; all you demonstrate is that your system can't generate FCSI. In order to use it as evidence of other systems not generating FCSI, you need to show why it applies in the general case, rather than the specific one. I'm afraid you haven't done this, for the simple reason that your system doesn't generate patterns that we KNOW are the result of chance + necessity in the natural world. The way your system works rules out any high probability of producing non-FCSI, non-random patterns of the kind found in nature. This implies the possibility that we have ruled out, by accident of design, the possibility of it also producing FCSI. In other words, we can't rely on this system as a reasonable model of nature, so we can't draw any inferences about nature from it. Any test by which FSCI could potentially be generated is not thereby a legitimate test of the design inference; only tests which we can show are directly relevant to the systems in question.
Along the way, you actually admit the central point: chance is not a credible seed for FSCI.
I've never actually disputed it. I don't believe that the kind of order found in biology can come about without intelligent design; I'm just arguing that the specific test you proposed is flawed.
And in any case this is as Joseph has aptly pointed out, a red herring led away to a strawman, creating the false and misleading, ad hominem laced impression that I do not know what I am talking about on this topic.
I believe that your proposed experimental proof is flawed; you are claiming that criticising your ideas is an attack on you personally. Does your criticism of me also constitute an ad hom? I don't believe that critiquing each other's ideas and claims constitutes a personal attack. I don't see how we can have any kind of reasoned debate if you believe that my disagreeing with you is uncivilized. I also don't see why a valid criticism is a straw man or a red herring. Sorry, but this sounds like a rhetorical dismissal to me.
-- DrBot, February 10, 2011, 08:58 AM PDT

Dr Bot: Pardon, but how does a computer work to physically execute instructions again, but by precisely organised mechanical necessity? And notice I have in mind that the zener noise goes to a PRBS counter, one that uses XOR feedback links and feed-forward links, etc. to get a pseudo-random chain. That way the non-flat distribution of the zener goes into a flattening mechanical subsystem [a digital circuit that is actually deterministic, but since it is randomly seeded (and maybe randomly clocked too) the output is a flattened random distribution], producing a flat random output, flat enough to be used in commercial systems. Chance plus necessity. Likelihood of FSCI, as shown, practically nil.

And in any case this is, as Joseph has aptly pointed out, a red herring led away to a strawman, creating the false and misleading, ad hominem laced impression that I do not know what I am talking about on this topic. As we started way back above: once any chance plus necessity system, without input of active information, can generate FSCI, the design inference on FSCI as a sign of design is finished. So, any test by which FSCI could potentially be generated is a legitimate test of the design inference, through testing the infinite monkeys model.

Along the way, you actually admit the central point: chance is not a credible seed for FSCI. So, since mechanical necessity is not a source of high contingency, chance plus necessity would have to depend on chance to try to get to islands of function. But those islands of function, by virtue of the scope of the config spaces of 1,000 bits or more, are beyond the credible reach of our observed cosmos. And so we are left with one credible candidate cause for FSCI, the observed one: design. Thanks. GEM of TKI
-- kairosfocus, February 10, 2011, 07:26 AM PDT

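The "PRBS counter with XOR feedback" being described is a linear feedback shift register. A minimal software sketch of the idea, using a standard maximal-length 16-bit register (the width, tap positions and seeding here are illustrative choices, not details from the comment):

    import os

    def lfsr16_bits(seed: int):
        """16-bit maximal-length Fibonacci LFSR (taps at bits 16, 14, 13, 11).
        The update rule is pure mechanical necessity; 'chance' enters only
        through the seed, yet the output stream looks statistically flat."""
        state = (seed & 0xFFFF) or 1  # the all-zero state would lock up
        while True:
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            yield state & 1

    # Seed from an unpredictable source, standing in for the zener diode:
    gen = lfsr16_bits(int.from_bytes(os.urandom(2), "big"))
    stream = [next(gen) for _ in range(32)]
    print(stream, "ones:", sum(stream))  # about half ones over long runs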
KF, I'll try this one more time, then I'll give up. Let's go back to the start of this specific point of discussion and try to avoid red herrings and other distractions:
To falsify the design inference, simply produce a case where, in your observation [or that of a competent observer], it is reliably known that chance plus mechanical necessity produces at least 1,000 bits of functionally specific complex information, as could be done by an implementation of the infinite monkeys situation. (Cf this recent UD thread (and onward threads in the ID Foundations series) on that subject.) A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information. That is 125 bytes, not a lot. If you do ASCII + a checksum bit per character, that is 125 letters of coherent text that functions linguistically or algorithmically or on some data structure. 125 letters is about 20 words of English worth. This has been put on the table explicitly, many many times.
I'm addressing a specific point in this post of yours:
To falsify the design inference, simply produce a case where, in your observation [or that of a competent observer], it is reliably known that chance plus mechanical necessity produces at least 1,000 bits of functionally specific complex information ... A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information.
There is no mechanical necessity in your example, just random noise. As I have stated (it is not a concession, because I never stated otherwise), random noise is not FCSI, nor are many other patterns that are the result of chance plus mechanical necessity. You are suggesting a way to falsify the design inference using a random noise generator. I am pointing out that if you make a design inference on something other than FCSI, for example sedimentary layers or crystal structures, then the inference can't be falsified with your example; therefore it, as a proposed experiment, is flawed.
-- DrBot, February 10, 2011, 06:48 AM PDT

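The protocol quoted above translates directly into software. A minimal sketch, with the "readable file with functional information" check stubbed out as a crude dictionary test (the word list and the functionality criterion are placeholders; the original proposal leaves that judgment to a competent observer):

    import os

    WORDS = {"the", "and", "of", "to", "in", "is", "design", "information"}  # placeholder lexicon

    def looks_functional(blob: bytes, min_words: int = 20) -> bool:
        """Crude stand-in for 'readable 125-byte file with 1,000+ bits of
        functional information': printable ASCII parsing into known words."""
        try:
            text = blob.decode("ascii")
        except UnicodeDecodeError:
            return False
        if not all(c.isprintable() or c.isspace() for c in text):
            return False
        tokens = text.lower().split()
        return len(tokens) >= min_words and all(t in WORDS for t in tokens)

    trials = 10**6  # one 125-byte sample per PC, standing in for the million machines
    hits = sum(looks_functional(os.urandom(125)) for _ in range(trials))
    print(hits)  # expected 0: a feasible sample cannot search a 2^1000 space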
Joseph: Or, with chance and necessity without design doing so. The snowflake is a classic case on this; cf. my online discussion here. GEM of TKI
-- kairosfocus, February 10, 2011, 06:30 AM PDT

PPS: Please note, Dr Bot: the first post specifically addresses chance and necessity, which includes their joint action, such as in a snowflake, where the specificity is from necessity and the variations are from chance, but they are not coupled to provide informational function -- a point that has sat for years in both the UD weak argument correctives and the always-linked note I have through my handle. Necessity does not lead to high contingency, and chance does not lead to high contingency with functional specificity; the two acting together give a case where the complexity is chance and the specificity is necessity, but they are not coupled to give informational function. Only design, on our observation and analysis, will do that. I trust this is clear and specific enough. Going back to the FSCI-origin-by-chance test: the aspect of complexity is on the contingency; only chance or design explain high contingency, so the test for the one is a test for the other, if chance fails to make FSCI.
-- kairosfocus, February 10, 2011, 06:27 AM PDT

DrBot:
If the failure of a random number to generate FCSI is proof that natural processes can’t produce FCSI then the failure of a random number generator to produce anything other than randomness would be proof that natural processes can’t produce crystals, sedimentary layers etc.
Please explain. The failure of random number generators to produce FCSI is evidence that random, undirected processes cannot produce FCSI. There isn't any FCSI in sedimentary layers, nor is there any FCSI in crystals. So there isn't any issue with random, undirected processes producing them.
-- Joseph, February 10, 2011, 06:19 AM PDT

Dr Bot: Re: "random number generators produce random numbers, they are not good analogies for chemical processes so they do not provide any evidence that chemical processes can't generate FCSI."

1 --> The issue here is chaining polymers, where the specific sequence is what is functional [perhaps after folding etc.], for RNA, DNA and proteins.

2 --> In each of these cases, the chaining spine does not particularly constrain sequence, and indeed Dean Kenyon conceded his Biochemical Predestination thesis of 1969 in his preface to the first ID theory book, Thaxton et al.'s The Mystery of Life's Origin, on exactly this point.

3 --> So, the issue is a highly contingent chain, and how that chain gets to be as it is. For highly contingent facets of a phenomenon, the two key causal factors are chance and intelligence.

4 --> Mechanical necessity, under a given initial condition, produces the same result, hence Laplace's demon. (Chaos gets its unpredictability from tiny variations that block the setting up of exact initial conditions, fed into a nonlinear noise amplifier, so to speak. That is from chance variations.)

5 --> So, the issue IS the source of the contingency in digital strings that are functionally specific and complex. Chemistry just clicks the chain together, and it may have something to do with folding and function once folded.

6 --> So, your concession -- and concession it is -- that chance is not a credible source of FSCI immediately points strongly to the other main empirically warranted source of high contingency, design.

7 --> Mechanical necessities of the chemistry are clicking the chain together and are expressed in folding and function, but they are not shaping the chain. The chain sequence [standard stacking links] shapes the function, not the other way around.

8 --> In short, your concession implies inference to design, as necessity will not do what you want. (And indeed, if the laws of physics and thence chemistry programmed life into the cosmos, that would have very, very big cosmological fine-tuning implications, given what we already know about the fine tuning of physics.)

GEM of TKI

PS: Have you read the first post in the ID Foundations series? The second one?
-- kairosfocus, February 10, 2011, 06:19 AM PDT

KF
Thanks for the decisive concession:
I said:
I’ll make the point again – all you get out of random number generators is random numbers, you don’t get FSCI, and you also don’t get any other patterns
That isn't a concession; it is what I have been saying all along. Random number generators produce random numbers; they are not good analogies for chemical processes, so they do not provide any evidence that chemical processes can't generate FCSI. This is ALL I am saying. I am not claiming that chemical processes CAN produce FCSI, just that the example you were proposing (random number generators not producing FCSI as proof that nature doesn't produce FCSI) is a flawed example, because random number generators don't produce other patterns that we KNOW nature DOES produce (crystal structures, sedimentary layering etc.). If the failure of a random number generator to generate FCSI is proof that natural processes can't produce FCSI, then the failure of a random number generator to produce anything other than randomness would be proof that natural processes can't produce crystals, sedimentary layers etc.
now, the problem you need to address is how the DNA sequences and associated functional polymers that come together in the living cell originated and configured themselves functionally by chance plus necessity, then how novel DNA on the order of tens of millions of bases originated by similar chance plus necessity to get us to novel body plans.
Why do I need to address this? I'm not making any claims in this domain. I'm just pointing out that your proposition about random number generators is empirically flawed. It is a bad argument that can easily be shown to be irrelevant, and as such doesn't help the case for ID. THIS is why you should either revise it or withdraw it. I'm trying to give you some constructive feedback to help improve the strength of your arguments.
-- DrBot, February 10, 2011, 06:01 AM PDT

Thanks, Denyse. I enjoyed this post. We happen to be living at the end of a highly theory-centered age. Einstein's crack about common sense is legendary and applauded by the Laputans at the academy. We love our theories, and we love 'em simple. Natural selection produces the species. Sexual repression causes unhappiness. Property is the root of all social evil. E = mc^2. A love of theory and liberalism go hand in hand. Liberals, after all, are people who are chronically unhappy with the way things are. They claim to be able to produce a radical transformation of being by negating existing values, but what they actually produce is nothingness. Darwinism is a "metaphysical research program" just as surely as Relativity can never be used as a practical tool of physical measurement, since it negates the dimensions that make measurement possible. We can only hope, and pray, that the 150-year-old siege against common sense is entering its twilight phase. It has already lasted longer than either Rationalism or Transcendentalism. Common sense tells us that nature was designed. Common sense will probably prevail in the end, if for no other reason than that Modernism has outlived its welcome and lost all vitality. The very fact that the liberals are now overwhelmingly entrenched in the academy is actually a hopeful sign. There is nothing human nature hates more than something that is stale and used up.
-- allanius, February 10, 2011, 05:49 AM PDT

markf:
However, what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose.
What has been falsified? It is a given that the processes espoused by the theory of evolution have never been observed constructing functional multi-part systems. So what do you have?
-- Joseph, February 10, 2011, 05:46 AM PDT

F/N: MG, cf. the FAQs here and at top right of this and every UD page, UD's being a weak arguments corrective.
-- kairosfocus, February 10, 2011, 05:44 AM PDT

Dr Bot: Thanks for the decisive concession:
I’ll make the point again – all you get out of random number generators is random numbers, you don’t get FSCI, and you also don’t get any other patterns
As I have cited, that is indeed so. Now, the problem you need to address is how the DNA sequences and associated functional polymers that come together in the living cell originated and configured themselves functionally by chance plus necessity, then how novel DNA on the order of tens of millions of bases originated by similar chance plus necessity to get us to novel body plans. Remember, as already noted, the relevant forces usually cited are chance variation plus natural selection [= differential reproductive success], leading to descent with modification to the level of novel body plans: CV + NS --> DWM, with NBP. (Cf. my discussions here and here, including the issue of getting to a self-replicating cell from chemicals in a pond or the like prebiotic environment.) GEM of TKI
-- kairosfocus, February 10, 2011, 05:42 AM PDT

MathGrrl:
This is an excellent example of why it is essential to clearly state one’s hypothesis and make testable predictions entailed by that hypothesis that would serve to refute it if they fail. Doing this for ID, and documenting it in a FAQ, would eliminate the “ID is not science” claim immediately. I would think that a number of people here would be interested in doing that work.
I did just that: provided a testable hypothesis for ID. Well, I copied it from a referenced book. Now, if you don't like it, then perhaps you could provide a testable hypothesis for your position so we can compare. Any time you or Dr Bot want to produce such a hypothesis for comparison would be good; the sooner the better.
-- Joseph, February 10, 2011, 05:41 AM PDT

Dr Bot: Pardon, but the relevant point on sedimentary layers is that they are complex but not specified. Nature, acting through chance plus necessity, is fully capable of getting to complex outcomes, such as snowflakes, vortices, sedimentary layers, etc. These invariably present a situation where the complexity and any specificity they have are decoupled. In the relevant cases (linguistic information in digital symbols, or algorithmic information that functions prescriptively, as in DNA), the complexity and specificity are tightly coupled. That is how we get to islands of isolated function in a config space. Which I duly noted. Notice how the flowchart on aspects of phenomena or objects specifically addresses specificity AND complexity. The infinite monkeys type test, by zener-fired random generator or other means, is a test of getting to FSCI by chance plus necessity. As the results already cited show, things of the order of 1 in 10^50 or so are feasible on chance plus necessity, but the relevant threshold needs to be of order 1 in 10^300 or so. GEM of TKI
-- kairosfocus, February 10, 2011, 05:35 AM PDT

