
Oldies but baddies — AF repeats NCSE’s eight challenges to ID (from ten years ago)


In a recent thread by Dr Sewell, AF raised again the Shallit-Elsberry list of eight challenges to design theory from a decade ago:

14 Alan Fox, April 15, 2013 at 12:56 am: Unlike Professor Hunt, Barry and Eric think design detection is well established. How about having a go at this list then. It’s been published for quite a while now.

I responded a few hours later:

______________

>>* 16 kairosfocus April 15, 2013 at 2:13 am

AF:

I note on points re your list of eight challenges.

This gets tiresomely repetitive, in a pattern of refusal to be answerable to adequate evidence, on the part of too many objectors to design theory:

>>1 Publish a mathematically rigorous definition of CSI>>

It has long since been shown, objections and censorship games notwithstanding, that reasonable quantitative metrics for FSCO/I, and so for CSI, can be built and have been built. Indeed, Durston et al have used such a metric to provide a published list of values for 15 protein families.

>> 2 Provide real evidence for CSI claims >>

Blatant, all around you. But, a man convinced against his will is of the same opinion still.

Just to pick an example {–> from the list}, a phone number is obviously functionally specific (ever had a wrong number call?) and — within a reasonable context [though not beyond the 500 bit threshold] complex.

>> 3 Apply CSI to identify human agency where it is currently not known >>

FSCO/I is routinely intuitively used to identify artifacts of unknown cause, as IIRC, WmAD has pointed out regarding a room in the Smithsonian full of artifacts of unknown purpose but identified to be credibly human.

>> 4 Distinguish between chance and design in archaeoastronomy >>

The pattern of Nazca lines or the like, fit within the nodes-arcs pattern and collectively exhibit FSCO/I similar to other complex drawings. The 500 bit threshold is easily passed. If you want to contrast odds of a marker wandering randomly in a random walk, the difference will be trivial.

In short this is a refusal to use simple common sense and good will.

>> 5 Apply CSI to archaeology >>

Just shown: this is a case of repeating much the same objection in much the same context, as though drumbeat repetition could establish a claim by erasing the underlying fallacies. Being wrong over and over and over again, even in the usual anti-design echo chambers, does not convert a long since corrected fallacy into cogent reasoning.

>> 6 Provide a more detailed account of CSI in biology
Produce a workbook of examples using the explanatory filter, applied to a progressive series of biological phenomena, including allelic substitution of a point mutation. >>

There are book-length cogent treatments of CSI as applied to biology [try Meyer’s SITC for starts {{ –> . . . I know, I know, this was published 2009, six years after the “challenge,” but AF is raising it in 2013, TEN years after the challenge}}], and that is not enough for the objectors, there will never be enough details.

Similarly, the objection starts within an island of existing function and demands a CSI based explanation of a phenomenon known to be well within the threshold of complexity. This is a strawman tactic.

>> 7 Use CSI to classify the complexity of animal communication As mentioned in Elsberry and Shallit (2003: 9), many birds exhibit complex songs. >>

What?

Is there any doubt that bird or whale songs or bee dances for that matter are long enough and complex enough to be FSCI? That they function in communication? That we did not directly observe the origin of the capacities for such but have reason to see that they are grounded in CSI in the genome and related regulatory information expressed in embryological development that wires the relevant nerve pathways?

So, are you demanding direct observation of the origin of such capacities, which we do not have and cannot reasonably expect? What we do have are indications of FSCO/I, which raise the question of what FSCO/I is a known, reliably and strongly tested sign of, as best causal explanation.

>> 8 Animal cognition
Apply CSI to resolve issues in animal cognition and language use by non-human animals. >>

Capacity for language, of course, is biologically rooted, genetically stamped and embryologically expressed. So it fits into the same set of issues addressed under 7 just now.

Repetitive use of fallacies does not suddenly convert them into sound arguments.

Nor can one reasonably demand solutions to any number of known unresolved scientific problems as a condition of accepting something that is already well enough warranted on reasonable application of inductive principles. That is, it is well established, on billions of test cases without significant exception, that FSCO/I is a reliable sign of design as cause.
____________

To suddenly demand that design thinkers must solve any number of unsolved scientific questions, or else the evidence already in hand will be rejected, is a sign of selective hyperskepticism and a red herring tactic leading away to a strawman misrepresentation, not a case of serious and cogent reasoning. >>

=========

(*And yes, AF, I am modifying French-style quote marks to account for the effect of the Less Than sign in an HTML-sensitive context. No need to go down that convenient little side-track again twice within a few days. Especially as someone who, by your own testimony, is apparently living in a Francophone area.)

NB: BA77’s comment at 17 is worth a look also. Let’s clip in modified French style, that he may clip and run that readeth:

>> Mr. Fox, it seems the gist of your eight ‘questions’ from ten years ago is that you doubt whether or not information, as a distinct entity, is even in the cell? In fact I remember many arguments with neo-Darwinists on UD, not so many years back, who denied information, as a distinct entity, was even in the cell. Is this still your position? If so, may I enlighten you to this recent development???,,,

Harvard cracks DNA storage, crams 700 terabytes of data into a single gram – Sebastian Anthony – August 17, 2012
Excerpt: A bioengineer and geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.,,, Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored.
http://www.extremetech.com/ext…..ingle-gram

That DNA stores information is pretty much the mainstream position now Mr. Fox,,,

Venter: Life Is Robotic Software – July 15, 2012
Excerpt: “All living cells that we know of on this planet are ‘DNA software’-driven biological machines comprised of hundreds of thousands of protein robots, coded for by the DNA, that carry out precise functions,” said (Craig) Venter.
http://crev.info/2012/07/life-is-robotic-software/

That information is a distinct entity in the cell is pretty uncontroversial Mr. Fox, so why the list of eight questions? The only question that really matters is can purely material processes generate these extreme levels of functional information? Perhaps you would like to be the first Darwinist on UD to produce evidence that material processes can produce enough functional information for say the self assembly of a novel molecular machine?>>

The much underestimated and too often derided BA77  continues at 18:

>> Mr. Fox, as to the fact that a cell contains functional information, I would like to, since Dr. Sewell approaches this from the thermodynamic perspective, point out something that gets missed in the definition of functional information in the specific sequences of DNA, RNAs, and proteins. There is a deep connection between entropy and information,,

“Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

“Bertalanffy (1968) called the relation between irreversible thermodynamics and information theory one of the most fundamental unsolved problems in biology.”
Charles J. Smith – Biosystems, Vol.1, p259.

Demonic device converts information to energy – 2010
Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
http://www.scientificamerican……rts-inform

And what is particularly interesting about this deep connection between information and entropy is that,,,

“Gain in entropy always means loss of information, and nothing more.”
Gilbert Newton Lewis – preeminent Chemist of the first half of last century

And yet despite the fact that entropic processes tend to degrade information, it is found that the thermodynamic disequilibrium of a ‘simple’ bacteria and the environment is,,,

“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
http://books.google.com/books?…..;lpg=PA112

Molecular Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
http://www.astroscu.unam.mx/~a…..ecular.htm

Moreover we now have good empirics to believe that information itself is what is constraining the cell to be so far out of thermodynamic equilibrium:

Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
Excerpt: It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
http://journals.witpress.com/paperinfo.asp?pid=420

Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009
Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
http://www.dailygalaxy.com/my_…..ave-t.html

In fact, Encoded ‘classical’ information such as what Dembski and Marks demonstrated the conservation of, and such as what we find encoded in computer programs, and yes, as we find encoded in DNA, is found to be a subset of ‘transcendent’ (beyond space and time) quantum information/entanglement by the following method:,,,

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
http://www.sciencedaily.com/re…..134300.htm

And yet, despite all this, we have ZERO evidence that material processes can generate even trivial amounts of classical information, much less massive amounts of transcendent ‘non-local’ quantum information/entanglement,,,

Stephen Meyer – The Scientific Basis Of Intelligent Design
https://vimeo.com/32148403

Stephen Meyer – “The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question.”

Verse and Music:

John 1:1-4
In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind.

The Afters – Every Good Thing – Lyric Video
http://www.youtube.com/watch?v=FY2ycrpbOlw >>
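A quick check of the Linschitz entropy figure quoted above, as a minimal sketch; note that the quoted result of about 4 x 10^12 bits follows only if the cell's entropy is 9.3 x 10^-12 cal/deg, so the exponent's minus sign is assumed here to have been lost in transcription:

```python
import math

# Check of the Linschitz figure quoted above. Assumption: the entropy of one
# bacterial cell is 9.3 x 10^-12 cal/deg (the excerpt prints 10^12, but only
# the negative exponent reproduces the quoted result of ~4 x 10^12 bits).
S_cal_per_K = 9.3e-12
S_J_per_K   = S_cal_per_K * 4.2          # cal -> joules, as in the excerpt
k_B         = 1.380649e-23               # Boltzmann constant, J/K

# H = S / (k ln 2): thermodynamic entropy re-expressed as bits of missing information
H_bits = S_J_per_K / (k_B * math.log(2))
print(f"{H_bits:.1e} bits")              # ~4.1e12 bits, matching the excerpt
```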

Joe puts in a good knock at 25:

>>Earth to Alan Fox,

Neither you, Shallit, Elsberry nor the NCSE need concern yourselves with CSI. That is because all of you can render CSI moot just by stepping up and demonstrating that blind and undirected processes can account for what we call CSI.

It is that simple: demonstrate that blind and undirected processes can produce CSI, and our argument wrt CSI falls.

However seeing that you all are nothing but cowards, you won’t do that because that means actually having to make a positive case. And everyone in the world knows that you cannot do such a thing.

The point is that your misguided attacks on ID are NOT going to provide positive evidence for your position. And only positive evidence for blind and undirected processes producing CSI is going to refute our arguments. >>

I picked back up from BA77 at 26:

>> BA77: The connexion between entropy and information is indeed important. I like the expression of it that runs like this: the entropy of a body is the average missing information required to specify the exact microstate of its constituent particles, given that what one knows about the system is the thermodynamic macrostate defined by its macro-level thermodynamic properties. This of course implies the degree of freedom or lack of constraint on the particles, and links to the situation where a rise in entropy is often linked to a rise in disorder, a degradation of the availability of energy. >>
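In symbols, a minimal gloss on that informational reading, assuming a macrostate compatible with Ω equally probable microstates:

```latex
S = k_B \ln \Omega
\quad\text{(Boltzmann entropy of a macrostate with } \Omega \text{ accessible microstates)}

H = \log_2 \Omega
\quad\text{(missing information, in bits, needed to specify the exact microstate)}

\Rightarrow\; S = (k_B \ln 2)\, H
```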

_______________
And, dear Reader, what do you think AF’s answer is, several days later on this the 19th of April in this, The Year of Our Risen Lord, “dos mil trece” [= 2013]?

Dead silence, and heading off to other threads where he thought he could score debate points.

(In short, he raised dismissive talking points and stayed not for an answer. Sad.)

Let us hope that headlining the above will at least allow others who need and want such, to find a reasonable summary answer to the NCSE talking points. END

PS: Dembski and Luskin have responded at one time or another to the S-E team, try here and here (part II here; complete with AF popping up here at no 3).

Comments
Kairosfocus posted this:
There is no sleight of hand involved.
The sleight of hand involves continuing to offer various reformulations of CSI, all of which have a fatal flaw built in. When will you face up to the problem that scientists have been pointing out to IDists for the past ten years: unless you can specify the frequency distribution of the population you are sampling, then you can draw no valid conclusions about the probability of a "tail" event.timothya
April 21, 2013, 01:51 AM PDT
Flint (via Joe): At this stage, to erect that sort of strawman in the teeth of easily accessible and abundant evidence to the contrary is not only a loaded strawman argument but one rooted in willful disregard for duties of care to truth and fairness; in the hope of profiting by a misrepresentation or outright untruth being perceived as true. In one sharp, short word: lying. Are you even willing to face the point that the reason why 500 bits is chosen as the FSCO/I threshold is rooted in the search capacity of a solar system of 10^57 atoms, and 10^17 s, which is of the order of the typical timeline proposed since the big bang. (And many YEC's don't like that either, though Humphreys [sp?] has a model in recent years of a 15 BY cosmos with a young earth.) Immoral rhetorical stunts like you just pulled go to character at this point, and add to the cumulative picture that is building up regarding the patterns of behaviour of far too many objectors to design theory. And, other denizens of TSZ and similar sites, harbouring such tactics (and worse) and letting them stand by failing to police yourselves, is enabling behaviour. KFkairosfocus
April 20, 2013, 11:10 PM PDT
Joe (& TSZ): There is no sleight of hand involved. Sampling theory is well known and is routinely used to characterise distributions on the known properties of such sampling in light of the law of large numbers [i.e. a large enough sample, often 25 - 30 in cases relevant to bell type distributions -- the far tails are special regions and tend not to be picked up, we have had the discussion about dropping darts from a ladder to a paper cut-out long since (years and years ago . . . ), you are just not going to easily hit the tail by chance with reasonable numbers of "drops" . . .] and related results. Indeed, this theory and its needle in the haystack result is what lies behind the statistical form of the second law of thermodynamics. The basic point is a simple as the point of the proverb about (blindly) searching for needles in haystacks. Namely, if there is ever so much stack and ever so little needle, it is going to be very hard to find the needle in the stack. In this case, with the known rate of fast chemical reactions [taking down to ~ 10^-14 s for ionic rxns], the known scope of the solar system [~ 10^57 atoms, ~ 10^17 s as a reasonable acceptable per argument lifetime to date, etc] we can see how much searching can be done, and the sample size is extremely generous as a result. But 2^500 possibilities for 500 bits [coins will do as one coin is a 2-sided die in effect, storing 1 bit of info in which face is up] is well beyond this, 3.27 * 10^150 possibilities. By way of rough illustration, a one straw sized sample to a cubical haystack as thick as our galaxy. Sampling theory tells us the overwhelmingly likely -- all but certain -- result of a blind sample on this scope: we have only a right to expect to see a reflection of the bulk of the distribution, not very unusual items. Think, a very large sack of beans, in which there is just one golden bean. Stick in hand at random, and pull out a handful. Do you EXPECT to see the golden bean? Not if you are rational. That is, what is strictly logically possible may be so deeply isolated in the field of possibilities, that it is effectively empirically unobservable. This in fact, at far less compelling level, is the basic logic behind the classic statistical hyp testing approach that infers that far tails are special regions unlikely to come up in reasonable sized sample. So, if we see what would be far tail popping up when it should be unobservable, we have reason to infer that this is not by chance alone. This brings us back to the basic problem of Darwin's warm little pond or the like. We know the physical and chemical forces and statistics that are at work, well understood for coming on 100 years now. Where, there are no signs of physics and chemistry being programmed to assemble life molecules and arrange them in requisite patterns relevant to cell based life function. (Where, if there were, that would be a strong clue that he cosmos was designed, for the best explanation of such an astonishing result would be that the laws of the cosmos would embed a life program. It's bad enough that to get the sort of cosmos we have that is a suitable stage for C-chemistry aqueous medium cell based life, we are looking at astonishing fine tuning.) And, the components and structures required to get you even close to code using self-replicating, metabolising, cell based life with encapsulation just simply are a golden bean search on steroids. Nor can gleefully jumping up and down and shouting Hoyle's "fallacy" help. 
The threshold problem comes way way way before we get to what Hoyle used as a striking example. Sir Fred spoke of tornadoes assembling Jumbo jets from aerospace junkyards, by way of rhetorical flourish. He could as well have spoken of assembling one d'Arsonval galvanometer based instrument on the panel on the flight deck. As a matter of fact, getting the right nut and bolt together and bolting it up to the right torque in the right place in a pile of galvanometer parts is already a stiff challenge for the tornado. And I would never trust an electrical "circuit" assembled by a tornado! Such examples are scaled up to macro-level familiar forces and components. The real challenge is at molecular level, but the point still holds. To correctly clump and assemble the right components for complex, functionally specific molecular nanotech systems relevant to cell based life -- which I must remind, are code using [with code storage based on molecular sequences in D/RNA] -- must address serious issues of high contingency, energetically unfavourable components, cross-interference, chirality, and more. Absent a priori evolutionary materialism backed up by the heavy hand of censorship and expulsion, the reasonable person would long since have concluded that the best explanation (per the empirically reliably known source of the FSCO/I involved) for something like what we see in the living cell is design. Like it or lump it, design sits at the table as of right, not grudging sufferance. KFkairosfocus
April 20, 2013, 11:00 PM PDT
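A minimal sketch of the needle-in-a-haystack arithmetic in the comment just above, using the figures cited there (10^57 atoms, about 10^17 s, about 10^-14 s per fast ionic reaction); the straw volume used for the comparison is an assumed ~5.7 cm^3:

```python
import math

# Generous upper bound on atomic-scale "search" events in the solar system,
# using the figures cited in the comment above.
atoms_in_solar_system = 1e57      # cited atomic resources of the solar system
seconds_available     = 1e17      # roughly the time since the big bang, in seconds
fastest_reaction_s    = 1e-14     # fast ionic reaction time, per the comment

max_events = atoms_in_solar_system * (seconds_available / fastest_reaction_s)

config_space_500_bits = 2 ** 500  # ~3.27e150 distinct 500-bit configurations

fraction_sampled = max_events / config_space_500_bits
print(f"max events          : {max_events:.2e}")             # ~1e88
print(f"500-bit config space: {config_space_500_bits:.2e}")  # ~3.27e150
print(f"fraction sampled    : {fraction_sampled:.2e}")       # ~3e-63

# For comparison, the illustration of one straw to a cubical haystack
# 1,000 light-years on a side (straw volume assumed ~5.7 cm^3):
ly_m = 9.461e15
haystack_m3 = (1000 * ly_m) ** 3
straw_m3 = math.pi * 0.003**2 * 0.2
print(f"straw/haystack ratio: {straw_m3 / haystack_m3:.2e}")  # ~7e-63, same order
```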
The TSZ ilk are moaning about some alleged "slight of hand" wrt calculating probabilities, when in fact their entire position is nothing but sleight of hand, as it obviously doesn't have any evidentiary support. And nice to see they still can erect strawman after strawman:
Remember that the Creationist Model holds that everything was poofed in a single atomic event. All mass, energy, physics and chemistry and everything — including life, which has not changed since it all got poofed up 6000 years ago.
Leave it to a dork named "Flint" to come up with that garbage. And as is typical "Flint" doesn't reference that bit of crap.Joe
April 20, 2013, 07:56 PM PDT
Well, there is an evolutionary mechanism, but I didn't mention it by name in #24, because I wanted people to think through the issue before coming to a conclusion. As we look for a mechanism, I think it is helpful to hark back to the basic forces operating in nature and then work upward to see if we can identify an actual mechanism. In evolution's case (and, yes, Virginia, I am talking about evolution as it is generally understood, meaning purely naturalistic evolution), there is no force of nature, no causal element, that can produce what we see in biology. Thus, the grand, overarching, be-all and end-all evolutionary mechanism is this: Chance That is it. Once agency is rejected, and given that necessity cannot, by definition, produce the kinds of systems we see in biology, we are left with only one option. Chance.* There are a couple of other basic ways to describe the same mechanism: Particles bumping into each other. Accidental collisions over long periods of time. Or as I have often summarized many alleged evolutionary 'explanations:' Stuff Happens. ----- * We could have an interesting angels-on-the-head-of-a-pin discussion about what is meant by "chance" and whether "chance" really means "necessity in ways that we don't understand." Regardless, the only available alternative is what is typically known as chance. There is nothing more.Eric Anderson
April 20, 2013, 07:13 PM PDT
"The evolutionists can rant and rave all they want about molecules like DNA and proteins and enzymes, and processes like inheritance and duplication and recombination and mutation, but we all know those things are just nature operating freely. And nature certainly isn’t a mechanism."
There's the mechanism: right there, creationists! And also here. Who can say we don't have a mechanism? Machines cause evolution, duh. And if you want proof, evolution caused humans and humans make machines, therefore evolution makes machines, which are the mechanism.
Chance Ratcliff
April 20, 2013, 06:20 PM PDT
And don’t try to answer by referring to general concepts like drift, bottlenecks, convergence, punctuation, population genetics, punctuation, and so on. Those are attempts to describe, label or calculate what is going on. But they aren’t the actual mechanisms.
Well put, Eric. I think you've done a great job exposing why neo-Darwinism is doomed. It doesn't have a mechanism to explain biological diversity. The evolutionists can rant and rave all they want about molecules like DNA and proteins and enzymes, and processes like inheritance and duplication and recombination and mutation, but we all know those things are just nature operating freely. And nature certainly isn't a mechanism. So evolutionists, where's the mechanism? Silence. That's what I thought. You can't provide mechanisms because you're all atheist-materialist-Darwinists, so you don't believe in them.
lastyearon
April 20, 2013, 05:42 PM PDT
Eric @27, agreed on both counts. I've watched several of the Information Theory videos. They appear to be good material, especially for kids.Chance Ratcliff
April 20, 2013, 01:39 PM PDT
Thanks, Chance. I'll have to check those videos out. One has to be a little careful with Khan about evolution/ID stuff, as it isn't quite up to speed and perpetuates many of the common misconceptions, but he seems to be pretty good with general math, computing information and so forth. He has certainly done a great service with his academy.Eric Anderson
April 20, 2013, 01:26 PM PDT
kf @23:
They have hacking events for kids????
Yeah, back in my misspent youth when I was programming on the early Apple II machines, and later doing PASCAL, and also HexDec programming* on a Burroughs L9911-200 "Mini" Computer** (which used large sheet punch cards, by the way), the word "hacking" always carried a negative connotation. A "hacker" was a bad guy. Nowadays it is apparently a good word. A lot of the coding events at the schools and universities are called "hacking" events. :) ----- * Now for those who haven't tried HexDec programming, that is a tedious job. Natural language source code has been such a huge boon to the whole enterprise of programming. In some ways I feel bad that my kids haven't had to go through the exercise of poring over HexDec printouts for hours to track down a stray character. There is a lot of value in having to look 'under the hood,' so to speak, rather than just operating at a high level. ** The Burroughs (so-called) "Mini" was a 5' long beast that weighed about 1000 lbs and pulled 16.8 amps at 120v. All with an impressive 8kb memory! It had three types of information storage: stripe ledger, cassette tape, and punched paper.Eric Anderson
April 20, 2013, 01:13 PM PDT
Journey Into Information Theory. It's well produced and very basic. There's also one for cryptography: Journey Into Cryptography.
Chance Ratcliff
April 20, 2013, 12:53 PM PDT
Lizzie via Joe @18: kf has given an excellent response. However, I'm also curious about the "Darwinian and other material mechanisms." What mechanisms would that be? I'm not being facetious. I sincerely would like to know what mechanism is proposed. We have the following fundamental forces: gravity, electromagnetism, the strong and weak nuclear forces. We have the laws of physics. We have subsidiary mechanisms, like chemistry. What mechanism is proposed? Darwinism (as Darwin proposed it) certainly doesn't include any proposed mechanism for biological novelty. At most, what he did was propose a way for biological novelty to be preserved after it is already made. (In fact, natural selection is just a label applied after-the-fact to forces not understood, but we'll leave that discussion for another time.) Natural selection certainly is not a 'mechanism' for producing biological novelty and complex specified information or systems. So, which mechanism is being proposed? Pure chemical necessity? Errors in copying nucleotides? These have been pretty well reviewed and we have a good sense as to the probabilities, which is where criticisms of materialistic evolution often rightly focus. And don't try to answer by referring to general concepts like drift, bottlenecks, convergence, punctuation, population genetics, punctuation, and so on. Those are attempts to describe, label or calculate what is going on. But they aren't the actual mechanisms. So, pray tell, what mechanism is proposed as the source of biological novelty?Eric Anderson
April 20, 2013, 12:52 PM PDT
EA: They have hacking events for kids???? KF PS: Punched paper tape.kairosfocus
April 20, 2013, 12:47 PM PDT
Regarding Neil via Joe and KF: I just got back about an hour ago from the country's primary Computer History Museum. We were dropping our son off at a big all-day hacking event for teens, so we took the opportunity to look around the museum. Among all the other great exhibits, I saw several examples of early computers using punch-card based information storage and retrieval. Neil's comment about the player piano punch roll demonstrates that he does not understand computing and does not understand information. I'm afraid the design inference will be very difficult for someone to understand if they can't even recognize information storage when they see it.Eric Anderson
April 20, 2013, 12:40 PM PDT
EL (via Joe): Nope. The pivotal issue is sampling theory, not probability distributions. In essence, as has been pointed out over and over and over again, but ignored, when one takes a relatively small sample of a large population, one only reasonably expects to capture the bulk, not special zones like the far tails or isolated and highly atypical zones. This is like the old trick of reaching a hand deep into a sack of beans and pulling out a handful or two to get a good picture of the overall sack's contents. When we have config spaces for 500 bits or more, we are dealing with pops of 3.27 * 10^150 and up, sharply up. The atomic resources of the solar system working at fastest chemical reaction rates and for the scope of the age of the cosmos, would only be able to sample as one straw to a cubical haystack 1,000 LY thick, about as thick as our Galaxy. The only thing we could reasonably expect to pick up on a blind sample of such scope, would be the bulk. Here, straw, and not stars or solar systems etc. Where also, the other thing that you have long, and unreasonably, refused to accept is that once we deal with specifically functional configs of sufficient complexity, the constraints of proper arrangement of the right parts to achieve function confine us to narrow and very unrepresentative zones of the space of possibilities. Islands of function for illustrative metaphor. All of this has been accessible long since but you have refused to listen. I will simply say that by looking at sampling theory without having to try to get through a thicket of real and imaginary objections to probabilistic calculations, we can easily and readily see why it is unreasonable on the gamut of the solar system (or for 1,000 bits the observed cosmos) to expect to encounter FSCO/I by blind chance and mechanical necessity. Where also of course the thresholds of complexity chosen were chosen exactly for being cutoffs where the idea that chance and necessity would be reasonable would become patently ridiculous. It just turns out that even 1,000 bits is 100 - 1,000 times fewer bits than are in the genome for credible first cell based life. And, that is the pivotal case as this is the root of the suggested Darwinian tree of life. Where, precisely because the von Neumann self replicator [vNSR] required for self replication is not on the table, cutting off the hoped for excuse of the wonderful -- though undemonstrated -- powers of natural selection acting on chance variations. The only empirically warranted explanation for the FSCO/I pivotal to first life is the same as the only observed source of such: design. The problem is not with the logic, it is that the implications run counter to strongly held ideological beliefs. That is why every sort of selectively hyperskeptical dismissal, red herring and strawman argument is resorted to rather than face it. And, once design is at the table as of right not sufferance, then there is no good reason to try to exclude it in dealing with major body plans etc. Ideological blinkers don't count. Dembski et al have compiled models that give an analytical context that gives probabilities. As I and others have shown, by putting in reasonable cut-off points based on atomic resources accessible, we can turn this into a practical system good enough for the sort of inference we need. Onward, we can indeed go on to do a yardstick comparison with the performance of the on-average blind sample, i.e simple random samples. 
This is because of the search for a search problem that gives us a cascade of higher and higher level searches that are at least as difficult on average as a simple random search. In other words, if a search process is outperforming what flat random search would do, there is active information that has been injected intelligently into it, with all but certainty. The degree of the information added can be estimated from the out-performance of the typical search used as yardstick. And, there are sufficient cases accessible in the literature (just review the recent work of Marks and Dembski) to show that. All of this has been pointed out to you, years ago now. And all of it has been ignored or dismissed and the adequately answered objections have been recirculated over and over again. The name for that, sad to say, is going in closed minded circles, hoping for the desired result to come out. It may feel good (especially in an ideologically congenial circle), but it is not sound. And that too has long since been pointed out over and over and over. It would be funny, if it were not ever so sad. KFkairosfocus
April 20, 2013, 10:47 AM PDT
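As a brief gloss on the out-performance point in the comment just above: in the Dembski-Marks framework it alludes to, the quantity is called active information, with p the success probability of unassisted (flat random) search and q that of the assisted search:

```latex
I_{+} \;=\; \log_2\!\frac{q}{p}
\;=\; \underbrace{(-\log_2 p)}_{\text{endogenous information}}
\;-\; \underbrace{(-\log_2 q)}_{\text{exogenous information}}
```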
NR (via Joe): Has it registered that the old piano rolls were programs, carrying out encoded step by step finite sequences of operations stored in a medium and achieving a definite task based on a code reader connected to effectors (complete with different languages!)? Wiki:
A piano roll is a music storage medium used to operate a player piano, piano player or reproducing piano. A piano roll is a continuous roll of paper with perforations (holes) punched into it. The perforations represent note control data. The roll moves over a reading system known as a 'tracker bar' and the playing cycle for each musical note is triggered when a perforation crosses the bar and is read. The majority of piano rolls play on three distinct musical scales. The 65-note (with a playing range of A-1 to C#7) format was introduced in 1896 in the USA specifically for piano music. In 1900 a USA format playing all 88-notes of the standard piano scale was introduced. In 1902 a German 72-note scale (F-1, G-1 to E7) was introduced. All of these scales were subject to being operated by piano rolls of varying dimensions. The 1909 Buffalo Convention of US manufacturers standardized the US industry to the 88-note scale and fixed the physical dimensions for that scale. Piano rolls were in continuous mass production from around 1896 to 2008,[1][2] and are still available today, with QRS Music claiming to have 45,000 titles available with "new titles being added on a regular basis".[3] Largely replacing piano rolls, which are no longer mass-produced today, MIDI files represent a modern way in which musical performance data can be stored. MIDI files accomplish digitally and electronically what piano rolls do mechanically. Software for editing a performance stored as MIDI data often has a feature to show the music in a piano roll representation. The first paper rolls were used commercially by Welte & Sons in their Orchestrions beginning in 1883.[4]
For the player piano:
A player piano (also known as pianola or autopiano) is a self-playing piano, containing a pneumatic or electro-mechanical mechanism that operates the piano action via pre-programmed music perforated paper, or in rare instances, metallic rolls.
Am I the only one to remember punch card stacks and also eight bit paper tape (with a ninth sprocket hole) as computer program storage media? I trust that it is understood that prong height [how a Yale lock key works . . . your keys are physical instantiations of passwords], hole/no hole, dot-dash, etc are all ways to encode data and can be used to encode digital data. In short, we have a case of FSCO/I and again a clear instance that it is designed. And, save for giving the Wiki cite, this has been said more than once before here at UD. KFkairosfocus
April 20, 2013, 10:20 AM PDT
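A minimal sketch of the hole/no-hole point in the comment just above, rendering a short string as 8-bit paper-tape-style rows; the 'o'/'.' rendering and the sample text are illustrative choices, and the sprocket hole is omitted:

```python
# Hole/no-hole on 8-bit paper tape as a digital code: prong heights,
# perforations, dots and dashes are all media for the same kind of
# stored information.
def to_tape(text):
    """Render each character as a row of 8 'holes' (o = hole, . = no hole)."""
    rows = []
    for ch in text:
        bits = format(ord(ch), "08b")          # ASCII code as 8 bits
        rows.append("".join("o" if b == "1" else "." for b in bits))
    return "\n".join(rows)

print(to_tape("UD"))
# U = 01010101 -> .o.o.o.o
# D = 01000100 -> .o...o..
```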
And Neil Rickert continues his cluelessness:
As an example, consider the player pianos that were at one time common. You inserted a roll of paper with punched holes, and the mechanism of the player piano used those holes to trigger the motion of the piano keys. To me, that player roll was never information. It was more like a template. It was something used as part of a causal role.
Neil, piano rolls were created by an agency using INFORMATION.
I see DNA as more like the piano roll than like the sheet music.
Both piano rolls and sheet music are designed; therefore, according to Neil, DNA is designed. Nice job, Neil.
Joe
April 20, 2013, 09:13 AM PDT
Lizzie:
If ID proponents can calculate the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms”, then, cool.
Actually Lizzie, you guys can't even demonstrate a feasibility pertaining to Darwinian and other material mechanisms. The point is that just allowing there may be such a probability distribution already gives you guys a benefit of the doubt that you do not deserve.
But until they’ve done that, no matter how many equations they produce, they haven’t given us any definition of CSI that will allow us to detect design in biology,...
That is your opinion. However it is very noticeable that you still cannot produce any positive evidence wrt biology for a chance hypothesis including Darwinian and other material mechanisms. So it appears that all you have left is your whining and your fellow whiners.Joe
April 20, 2013, 08:42 AM PDT
I see that Lizzie is still bellyaching about CSI. Earth to Lizzie- if your position had any positive evidence, you wouldn't need to worry yourself with CSI. So perhaps that is what you should focus on. Attacking ID and CSI will never produce positive evidence for your position.Joe
April 20, 2013, 07:09 AM PDT
OT: Technique Unlocks Design Principles of Quantum Biology - Apr. 19, 2013 Excerpt: University of Chicago researchers have created a synthetic compound that mimics the complex quantum dynamics observed in photosynthesis and may enable fundamentally new routes to creating solar-energy technologies. ,,, The resulting molecules were able to recreate the important properties of chlorophyll molecules in photosynthetic systems that cause coherences to persist for tens of femtoseconds at room temperature. "That may not sound like a very long time -- a femtosecond is a millionth of a billionth of a second," http://www.sciencedaily.com/releases/2013/04/130419120954.htm Despite their confident tone that they have matched what is found in nature, it seems they may have a bit more engineering and 'creating' to do before they truly match what is found in nature,, Life Uses Quantum Mechanics - September 25, 2012 Excerpt: ,,,it looks as if nature has worked out how to preserve (quantum) entanglement at body temperature over time scales that physicists can only dream about.,,, Maintaining the entangled state for 100 microseconds is “an extraordinary figure,” the article states. The best human engineers have achieved is 80 microseconds.,,,, http://crev.info/2012/09/life-uses-quantum-mechanics/bornagain77
April 19, 2013, 03:29 PM PDT
F/N 3: Basic defn CSI, repeatedly drawn to AF's attention: ___________ NFL: >>p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ” >> __________kairosfocus
April 19, 2013, 11:08 AM PDT
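The correspondence invoked in the definition quoted just above, between a universal probability bound of 1 in 10^150 and roughly 500 bits of information, is a change of logarithm base:

```latex
-\log_2\!\left(10^{-150}\right) \;=\; 150 \,\log_2 10 \;\approx\; 150 \times 3.3219 \;\approx\; 498.3 \text{ bits} \;\approx\; 500 \text{ bits}
```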
F/N 2: let me clip my excerpts from Durston et al: _____________ >>Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite [4]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality [2,3]. Szostak [6] argued that neither Shannon's original measure of uncertainty [7] nor the measure of algorithmic complexity [8] are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information–functional information–is required [6] . . . . Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation: H(Xf(t)) = -[SUM]P(Xf(t)) logP(Xf(t)) . . . (1) where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database . . . . In our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, or the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements . . . Biological function is mostly, though not entirely determined by the organism's genetic instructions [24-26]. 
The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement [13-15] . . . . The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine [3]. In this case, the ground state permits only one single possible sequence . . . . The null state, a possible outcome of F denoted as 0, is defined here as a special case of the ground state of highest uncertainly when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1) the functional uncertainty of the null state is represented as H(X0(ti))= - [SUM]P(X0(ti)) log P(X0(ti)) . . . (3) where (X0(ti)) is the conditional variable for all possible equiprobable sequences. Consider the number of all possible sequences is denoted by W. Letting the length of each sequence be denoted by N and the number of possible options at each site in the sequence be denoted by m, W = mN. For example, for a protein of length N = 257 and assuming that the number of possible options at each site is m = 20, W = 20257. Since, for the null state, we are requiring that there are no constraints and all possible sequences are equally probable, P(X0(ti)) = 1/W and H(X0(ti))= - [SUM](1/W) log (1/W) = log W . . . (4) The change in functional uncertainty from the null state is, therefore, delat_H(X0(ti), Xf(tj)) = log (W) - H(Xf(ti)). (5) . . . . The measure of Functional Sequence Complexity, denoted as z, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or z = delta_H (Xg(ti), Xf(tj)) . . . (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . . 
To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule, is then the total sum of the measured FSC for each site in the aligned sequences. Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 1049 different 121-residue sequences that could fall into the Ribsomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.>> _____________ FYI. Been there all along, two clicks away. KFkairosfocus
April 19, 2013, 11:02 AM PDT
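A minimal sketch of the Fits calculation excerpted just above, taking the ground state as the null state (all 20 amino acids equiprobable at each site) and using a tiny made-up alignment rather than a real protein family:

```python
import math
from collections import Counter

def fits(aligned_seqs):
    """Functional Sequence Complexity in Fits: sum over sites of
    (log2 20 - observed Shannon entropy of the amino acids at that site)."""
    n_sites = len(aligned_seqs[0])
    h_null = math.log2(20)                 # 4.32 bits/site maximum
    total = 0.0
    for i in range(n_sites):
        column = [seq[i] for seq in aligned_seqs]
        counts = Counter(column)
        n = len(column)
        h_func = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += h_null - h_func
    return total

# Tiny illustrative "family" of aligned sequences (hypothetical, not real data):
family = ["MKTAY", "MKTAF", "MKSAY", "MKTAY"]
print(f"{fits(family):.1f} Fits")   # ~20.0 Fits; fully conserved sites contribute ~4.32 each
```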
F/N: adequate definitions of CSI and metrics thereof exist, I alluded to the one used by Durston et al c 2007, as that one is the one most closely tied to standard usage in information theory on H, average info per symbol. Follow his discussion on null, ground and functional states and on degrees of redundancy and you will see that. (Cf. discussion in my always linked note here.) It also happens to be peer review published, with 15 values for protein families. No explicit threshold is listed but the sampling theory "needle in haystack" point about the atomic & temporal resources of the solar system or the observed cosmos and the config space of 500 - 1,000 bits is straightforward and applicable. The implication is that the issue is not increments within islands of function with nicely behaved fitness functions etc, but getting to shores of function in such spaces dominated by gibberish as is inevitable. If you want to get into debate games on the Dembski model and metric framework of 2005, the log reduction and deduction here on shows how this can be inserted into reasonable upper limits that boil down to a threshold metric with 500 - 1,000 bits as the threshold. (Someone actually objected to such a metric, but was pointedly reminded that much of Einstein's Nobel Prize was won on an equation that specified a threshold with fairly similar mathematics.) The problem is a matter of selectively hyperskeptically making mountains out of molehills. Let me clip the linked: _______________ >> xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity" metric: X = – log2[10^120 ·p_S(T)·P(T|H)]. --> X is "chi" and p_ is "phi" xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2: Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1) xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q ) + log(r) and log(1/p) = – log (p): Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = p_S(T) Chi = Ip – (398 + K2), where now: log2 (D2 ) = K2 That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). 
So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)], and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive.

Following the logic of the per-aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . .; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable.)

An obvious example of such a zone T is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see how such a functional string can enfold random and repetitive sub-strings. Arguably -- and of course this is hotly disputed -- DNA protein-coding and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.) So also, to overthrow the design inference, a valid counter-example would be needed: a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)

xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed.
The metric may be directly applied to biological cases: using Durston's Fits values -- functionally specific bits -- from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity, giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond >>
_______________

The mathematics is straightforward, and the result is intuitively reasonable. On the gamut of our solar system, 500+ bits of FSCO/I is not credibly a product of blind chance and mechanical necessity, but intelligence routinely produces such -- as posts in this thread exemplify. And, in the biological context, specification is cashed out as function, which is what Dembski, Meyer, Johnson, Durston, Trevors and Abel and others all more or less say. FSCO/I is just a summary description for that.

So, it is quite reasonable and inductively well grounded that if we see FSCO/I beyond that threshold, it is best explained on design. Unless one's real problem is a priori evolutionary materialism as ideology, or one of the fellow-traveller ideologies that are operationally indistinguishable therefrom. KF
kairosfocus
April 19, 2013 at 10:54 AM PDT
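The log-reduced threshold metric clipped above is likewise easy to put into a few lines. The sketch below is only an illustration of the arithmetic under the stated assumptions (sequence function supplies the specification, so S = 1, and the 500-bit solar-system threshold is used); the Fits figures are the three quoted from Durston's Table 1, and everything else is mine for illustration.
_______________
import math

THRESHOLD_BITS = 500   # solar-system scale threshold from the log-reduced derivation

def shannon_info_bits(p):
    # Ip = -log2(p): information, in bits, carried by an outcome of probability p.
    return -math.log2(p)

def chi_500(ip_bits, s):
    # Chi = Ip*S - 500: bits beyond the threshold; S is 1 if independently specified, else 0.
    return ip_bits * s - THRESHOLD_BITS

# Fits values quoted above from Durston et al., Table 1 (function observed, so S = 1).
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: Chi_500 = {chi_500(fits, 1):.0f} bits beyond the threshold")

# The 501-coin example: Ip exceeds 500 bits, but with no independent specification S = 0,
# so Chi_500 stays pinned at -500.
ip_coins = shannon_info_bits(0.5 ** 501)
print(f"501 random coins: Ip = {ip_coins:.0f} bits, Chi_500 = {chi_500(ip_coins, 0):.0f} bits")
_______________
Running this reproduces the 332, 188 and 785 "bits beyond" figures quoted in the comment, and shows why the random coin string, despite carrying more than 500 bits of raw information, never crosses the threshold.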
Alan, if you read "No Free Lunch" you would see that CSI, wrt biology, is biological specification. Specified information is just Shannon information with meaning or function. IOW it is information in the normal use of the word. I don't understand your issue with it. You use it every day.
Joe
April 19, 2013 at 10:28 AM PDT
WHOOPS, AT LEAST NOTHING BROKE. kf
kairosfocus
April 19, 2013 at 10:28 AM PDT
TEST: <> test, test. KF
kairosfocus
April 19, 2013 at 10:27 AM PDT
AF: Okay, you have got the full quote marks to work; I guess you used the commenting feature. The problem will be when you copy and paste onwards -- trouble, I bet. KF
kairosfocus
April 19, 2013 at 10:26 AM PDT
Where's the definition of CSI, Joe?
Alan Fox
April 19, 2013 at 10:21 AM PDT
Alan, the following is from "No Free Lunch":
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. - Wm. Dembski, page 148 of NFL
You lose, again, as usual. Now what?
Joe
April 19, 2013 at 09:33 AM PDT
<<It has long since been shown, objections and censorship games notwithstanding, that reasonable quantitative metrics for FSCO/I and so for CSI, can be built and have been built. Indeed Durston et al have used such to provide a published list of values for 15 protein families.>>

Now, measuring the functional sequence complexity of proteins (if we allow for the sake of argument that Durston can reliably do this) has nothing to do, as far as I can see, with Dembski's "Complex Specified Information" as set out in "No Free Lunch". The concept of FSCO/I appears to be KF's personal invention, unendorsed by any ID theorist. Does a clear, unambiguous definition of CSI exist? If so, what is it?
Alan Fox
April 19, 2013 at 07:35 AM PDT